forked from Fediversity/Fediversity
Introduce test for deploying all services with `nixops4 apply` (#329)
Closes Fediversity/Fediversity#276.

This PR adds a CLI deployment test. It builds on top of Fediversity/Fediversity#323.

This test features a deployer node and four target nodes. The deployer node runs `nixops4 apply` on a deployment built with our actual code in `deployment/default.nix`, which pushes onto the four target machines combinations of Garage/Mastodon/Peertube/Pixelfed depending on a JSON payload. We check that the expected services are indeed deployed on the machines. Getting there involved reworking the existing basic test to extract common patterns, and adding support for ACME certificate negotiation inside the NixOS test.

What works:

- the deployer successfully runs `nixops4 apply` with various payloads
- the target machines indeed get the right services pushed onto them and removed
- services on the target machines successfully negotiate ACME certificates

What does not work: the services themselves depend a lot on DNS, and that is not taken care of at all, so they are probably very broken. Still, this is a good milestone.

Test it yourself by running `nix build .#checks.x86_64-linux.deployment-basic -vL` and `nix build .#checks.x86_64-linux.deployment-cli -vL`. On the very beefy machine that I am using, the basic test runs in ~4 minutes and the CLI test in ~17 minutes. We know from Fediversity/Fediversity#323 that the basic test runs in ~12 minutes on the CI runner, so maybe about an hour for the CLI test?

Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Reviewed-on: Fediversity/Fediversity#329
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Reviewed-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
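The deployment checks are deliberately simple: for each payload, the test asserts that the expected systemd units exist (or do not) on the target machines. Below is a minimal standalone sketch of that expectation, in plain Python rather than the NixOS test driver; the `expected_units` helper and the payload shape are illustrative, while the unit names are the ones the test script probes with `systemctl status`.

```python
# Sketch (not part of the PR): the CLI test's presence checks reduced to a
# table mapping each service to the systemd unit the test asserts on.

EXPECTED_UNITS = {
    "garage": "garage.service",
    "mastodon": "mastodon-web.service",
    "peertube": "peertube.service",
    "pixelfed": "phpfpm-pixelfed.service",
}

def expected_units(payload):
    """Units that should be running after `nixops4 apply` with this payload.

    Garage provides storage for the other services, so the test expects it
    whenever at least one service is enabled.
    """
    enabled = {name for name, cfg in payload.items() if cfg.get("enable")}
    units = {EXPECTED_UNITS[name] for name in enabled}
    if enabled:
        units.add(EXPECTED_UNITS["garage"])
    return units
```

With an empty payload no unit is expected; enabling only Peertube is expected to bring up Garage as well, matching the `check-deployment-cli-peertube` subtest.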
This commit is contained in:
parent 5f66a034f3
commit ee5c2b90b7

23 changed files with 842 additions and 261 deletions
@@ -32,3 +32,9 @@ jobs:
    steps:
      - uses: actions/checkout@v4
      - run: nix build .#checks.x86_64-linux.deployment-basic -L

  check-deployment-cli:
    runs-on: native
    steps:
      - uses: actions/checkout@v4
      - run: nix build .#checks.x86_64-linux.deployment-cli -L
@@ -1,6 +1,116 @@
# Deployment

This directory contains work to generate a full Fediversity deployment from a minimal configuration.
This is different from [`../services/`](../services), which focuses on one machine, providing a polished and unified interface to different Fediverse services.

## Checks

There are three levels of deployment checks: `basic`, `cli`, and `panel`.
They can be found in subdirectories of [`check/`](./check).
They can be run as part of `nix flake check` or individually as:

``` console
$ nix build .#checks.<system>.deployment-<name> -vL
```

Since `nixops4 apply` operates on a flake, the tests take this repository's flake as a template.
This is also why there are some dummy files that will be overwritten inside the test.

### Basic deployment check

The basic deployment check is here as a building block and sanity check.
It does not actually use any of the code in this directory, but checks that our test strategy is sound and that basic NixOps4 functionality is in place.

It is a NixOS test featuring one deployer machine and two target machines.
The deployment simply adds `pkgs.hello` to one and `pkgs.cowsay` to the other.
It is heavily inspired by [a similar test in `nixops4-nixos`].

[a similar test in `nixops4-nixos`]: https://github.com/nixops4/nixops4-nixos/blob/main/test/default/nixosTest.nix

This test involves three nodes:

- `deployer` is the node that will perform the deployment using `nixops4 apply`.
  Because the test runs in a sandboxed environment, `deployer` does not have access to the internet, and therefore it must already have all the store paths needed for the target nodes.

- “target machines” are two eponymous nodes on which the packages `hello` and `cowsay` will be deployed.
  They start with a minimal configuration.

``` mermaid
flowchart LR
    deployer["deployer<br><font size='1'>has store paths<br>runs nixops4</font>"]

    subgraph target_machines["target machines"]
        direction TB
        hello
        cowsay
    end

    deployer -->|deploys| target_machines
```

### Service deployment check using `nixops4 apply`

This check omits the panel by running a direct invocation of NixOps4.
It deploys some services, checks that they are indeed on the target machines, then cleans them up and checks whether that works, too.
It builds upon the basic deployment check.

This test involves seven nodes:

- `deployer` is the node that will perform the deployment using `nixops4 apply`.
  Because the test runs in a sandboxed environment, `deployer` does not have access to the internet, and therefore it must already have all the store paths needed for the target nodes.

- “target machines” are four nodes (`garage`, `mastodon`, `peertube`, and `pixelfed`) on which the services will be deployed.
  They start with a minimal configuration.

- `acme` is a node that runs [Pebble], a miniature ACME server, to deliver the certificates that the services expect.

- [WIP] `client` is a node that runs a browser controlled by Selenium scripts in order to check that the services are indeed running and accessible.

[Pebble]: https://github.com/letsencrypt/pebble

``` mermaid
flowchart LR

    classDef invisible fill:none,stroke:none

    subgraph left [" "]
        direction TB

        deployer["deployer<br><font size='1'>has store paths<br>runs nixops4</font>"]
        client["client<br><font size='1'>Selenium scripts</font>"]
    end

    subgraph middle [" "]
        subgraph target_machines["target machines"]
            direction TB

            garage
            mastodon
            peertube
            pixelfed
        end
    end

    subgraph right [" "]
        direction TB

        acme["acme<br><font size='1'>runs Pebble</font>"]
    end

    left ~~~ middle ~~~ right
    class left,middle,right invisible

    deployer -->|deploys| target_machines

    client -->|tests| mastodon
    client -->|tests| peertube
    client -->|tests| pixelfed

    target_machines -->|get certs| acme
```

### [WIP] Service deployment check from the panel

This is a full deployment check that runs the panel on the deployer machine, deploys some services through the panel, and checks that they are indeed on the target machines, then cleans them up and checks whether that works, too.

It builds upon the basic and CLI deployment checks.
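Concretely, the two checks touched by this PR can be run as follows on an x86_64 Linux machine (the same commands given in the PR description):

``` console
$ nix build .#checks.x86_64-linux.deployment-basic -vL
$ nix build .#checks.x86_64-linux.deployment-cli -vL
```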
@@ -1,9 +0,0 @@
# Basic deployment test

Basic deployment test with one deployer machine, one target machine, and a
simple target application, namely cowsay. The goal is to check that basic
functionalities are here.

It is heavily inspired by a similar test in nixops4-nixos:

https://github.com/nixops4/nixops4-nixos/blob/main/test/default/nixosTest.nix
@@ -1 +0,0 @@
## This file is just a placeholder. It is overwritten by the test.
@@ -1,32 +0,0 @@
{
  inputs,
  lib,
  providers,
  ...
}:

{
  providers.local = inputs.nixops4.modules.nixops4Provider.local;

  resources.target = {
    type = providers.local.exec;
    imports = [ inputs.nixops4-nixos.modules.nixops4Resource.nixos ];

    ssh = {
      host = "target";
      hostPublicKey = builtins.readFile ./target_host_key.pub;
    };

    nixpkgs = inputs.nixpkgs;
    nixos.module =
      { pkgs, ... }:
      {
        imports = [
          ./minimalTarget.nix
          (lib.modules.importJSON ./target-network.json)
        ];
        nixpkgs.hostPlatform = "x86_64-linux";
        environment.systemPackages = [ pkgs.cowsay ];
      };
  };
}
@@ -1,21 +1,54 @@
{
  self,
  inputs,
  lib,
  ...
}:

let
  inherit (lib) genAttrs;

  targetMachines = [
    "hello"
    "cowsay"
  ];
  pathToRoot = /. + (builtins.unsafeDiscardStringContext self);
  pathFromRoot = ./.;

in
{
  perSystem =
    { pkgs, ... }:
    {
      checks.deployment-basic = pkgs.testers.runNixOSTest {
        imports = [
          ../common/nixosTest.nix
          ./nixosTest.nix
        ];
        _module.args.inputs = inputs;
        inherit targetMachines pathToRoot pathFromRoot;
      };
    };

  nixops4Deployments.check-deployment-basic =
    { providers, ... }:
    {
      providers = {
        inherit (inputs.nixops4.modules.nixops4Provider) local;
      };
      resources = genAttrs targetMachines (nodeName: {
        type = providers.local.exec;
        imports = [
          inputs.nixops4-nixos.modules.nixops4Resource.nixos
          ../common/targetResource.nix
        ];
        _module.args.inputs = inputs;
        inherit nodeName pathToRoot pathFromRoot;
        nixos.module =
          { pkgs, ... }:
          {
            environment.systemPackages = [ pkgs.${nodeName} ];
          };
      });
    };
}
@@ -1,35 +0,0 @@
{
  lib,
  modulesPath,
  ...
}:

{
  imports = [
    (modulesPath + "/profiles/qemu-guest.nix")
    (modulesPath + "/../lib/testing/nixos-test-base.nix")
  ];

  ## Test framework disables switching by default. That might be OK by itself,
  ## but we also use this config for getting the dependencies in
  ## `deployer.system.extraDependencies`.
  system.switch.enable = true;

  nix = {
    ## Not used; save a large copy operation
    channel.enable = false;
    registry = lib.mkForce { };
  };

  services.openssh = {
    enable = true;
    settings.PermitRootLogin = "yes";
  };

  networking.firewall.allowedTCPPorts = [ 22 ];

  users.users.root.openssh.authorizedKeys.keyFiles = [ ./deployer.pub ];

  ## Test VMs don't have a bootloader by default.
  boot.loader.grub.enable = false;
}
@@ -1,161 +1,35 @@
{ inputs, ... }:

{
  name = "deployment-basic";

  nodes.deployer =
    { pkgs, ... }:
    {
      environment.systemPackages = [
        inputs.nixops4.packages.${pkgs.system}.default
      ];

      system.extraDependenciesFromModule =
        { pkgs, ... }:
        {
          environment.systemPackages = with pkgs; [
            hello
            cowsay
          ];
        };
    };

  extraTestScript = ''
    with subtest("Check the status before deployment"):
        hello.fail("hello 1>&2")
        cowsay.fail("cowsay 1>&2")

    with subtest("Run the deployment"):
        deployer.succeed("nixops4 apply check-deployment-basic --show-trace --no-interactive 1>&2")

    with subtest("Check the deployment"):
        hello.succeed("hello 1>&2")
        cowsay.succeed("cowsay hi 1>&2")
  '';
}
@@ -1 +0,0 @@
{"comment": "This file is just a placeholder. It is overwritten by the test."}
@@ -1 +0,0 @@
## This file is just a placeholder. It is overwritten by the test.
deployment/check/cli/deployer.pub (new file)

@@ -0,0 +1 @@
## This is a placeholder file. It will be overwritten by the test.
deployment/check/cli/flake-part.nix (new file)

@@ -0,0 +1,87 @@
{
  self,
  inputs,
  lib,
  ...
}:

let
  inherit (builtins) fromJSON readFile listToAttrs;

  targetMachines = [
    "garage"
    "mastodon"
    "peertube"
    "pixelfed"
  ];
  pathToRoot = /. + (builtins.unsafeDiscardStringContext self);
  pathFromRoot = ./.;
  enableAcme = true;

in
{
  perSystem =
    { pkgs, ... }:
    {
      checks.deployment-cli = pkgs.testers.runNixOSTest {
        imports = [
          ../common/nixosTest.nix
          ./nixosTest.nix
        ];
        _module.args.inputs = inputs;
        inherit
          targetMachines
          pathToRoot
          pathFromRoot
          enableAcme
          ;
      };
    };

  nixops4Deployments =
    let
      makeTargetResource = nodeName: {
        imports = [ ../common/targetResource.nix ];
        _module.args.inputs = inputs;
        inherit
          nodeName
          pathToRoot
          pathFromRoot
          enableAcme
          ;
      };

      ## The deployment function - what we are here to test!
      ##
      ## TODO: Modularise `deployment/default.nix` to get rid of the nested
      ## function calls.
      makeTestDeployment =
        args:
        (import ../..)
          {
            inherit lib;
            inherit (inputs) nixops4 nixops4-nixos;
            fediversity = import ../../../services/fediversity;
          }
          (listToAttrs (
            map (nodeName: {
              name = "${nodeName}ConfigurationResource";
              value = makeTargetResource nodeName;
            }) targetMachines
          ))
          (fromJSON (readFile ../../configuration.sample.json) // args);

    in
    {
      check-deployment-cli-nothing = makeTestDeployment { };

      check-deployment-cli-mastodon-pixelfed = makeTestDeployment {
        mastodon.enable = true;
        pixelfed.enable = true;
      };

      check-deployment-cli-peertube = makeTestDeployment {
        peertube.enable = true;
      };
    };
}
deployment/check/cli/nixosTest.nix (new file)

@@ -0,0 +1,109 @@
{ inputs, hostPkgs, ... }:

let
  ## Some places need a dummy file that will in fact never be used. We create
  ## it here.
  dummyFile = hostPkgs.writeText "dummy" "";
in

{
  name = "deployment-cli";

  nodes.deployer =
    { pkgs, ... }:
    {
      environment.systemPackages = [
        inputs.nixops4.packages.${pkgs.system}.default
      ];

      ## FIXME: The following dependencies are necessary but I do not
      ## understand why they are not covered by the fake node.
      system.extraDependencies = with pkgs; [
        peertube
        peertube.inputDerivation
        gixy
        gixy.inputDerivation
      ];

      system.extraDependenciesFromModule = {
        imports = [ ../../../services/fediversity ];
        fediversity = {
          domain = "fediversity.net"; # would write `dummy` but that would not type
          garage.enable = true;
          mastodon = {
            enable = true;
            s3AccessKeyFile = dummyFile;
            s3SecretKeyFile = dummyFile;
          };
          peertube = {
            enable = true;
            secretsFile = dummyFile;
            s3AccessKeyFile = dummyFile;
            s3SecretKeyFile = dummyFile;
          };
          pixelfed = {
            enable = true;
            s3AccessKeyFile = dummyFile;
            s3SecretKeyFile = dummyFile;
          };
          temp.cores = 1;
          temp.initialUser = {
            username = "dummy";
            displayName = "dummy";
            email = "dummy";
            passwordFile = dummyFile;
          };
        };
      };
    };

  ## NOTE: The target machines may need more RAM than the default to handle
  ## being deployed to; otherwise we get something like:
  ##
  ## pixelfed # [ 616.785499 ] sshd-session[1167]: Connection closed by 2001:db8:1::2 port 45004
  ## deployer # error: writing to file: No space left on device
  ## pixelfed # [ 616.788538 ] sshd-session[1151]: pam_unix(sshd:session): session closed for user port
  ## pixelfed # [ 616.793929 ] systemd-logind[719]: Session 4 logged out. Waiting for processes to exit.
  ## deployer # Error: Could not create resource
  ##
  ## These values have been trimmed down to the gigabyte.
  nodes.mastodon.virtualisation.memorySize = 4 * 1024;
  nodes.pixelfed.virtualisation.memorySize = 4 * 1024;
  nodes.peertube.virtualisation.memorySize = 5 * 1024;

  ## FIXME: The tests of the presence of the services are very simple: we only
  ## check that there is a systemd service of the expected name on the
  ## machine. This proves at least that NixOps4 did something, and we cannot
  ## really do more for now because the services aren't actually working
  ## properly, in particular because of DNS issues. We should fix the services
  ## and check that they are working properly.

  extraTestScript = ''
    with subtest("Run deployment with no services enabled"):
        deployer.succeed("nixops4 apply check-deployment-cli-nothing --show-trace --no-interactive 1>&2")

    with subtest("Check the status of the services - there should be none"):
        garage.fail("systemctl status garage.service")
        mastodon.fail("systemctl status mastodon-web.service")
        peertube.fail("systemctl status peertube.service")
        pixelfed.fail("systemctl status phpfpm-pixelfed.service")

    with subtest("Run deployment with Mastodon and Pixelfed enabled"):
        deployer.succeed("nixops4 apply check-deployment-cli-mastodon-pixelfed --show-trace --no-interactive 1>&2")

    with subtest("Check the status of the services - expecting Garage, Mastodon and Pixelfed"):
        garage.succeed("systemctl status garage.service")
        mastodon.succeed("systemctl status mastodon-web.service")
        peertube.fail("systemctl status peertube.service")
        pixelfed.succeed("systemctl status phpfpm-pixelfed.service")

    with subtest("Run deployment with only Peertube enabled"):
        deployer.succeed("nixops4 apply check-deployment-cli-peertube --show-trace --no-interactive 1>&2")

    with subtest("Check the status of the services - expecting Garage and Peertube"):
        garage.succeed("systemctl status garage.service")
        mastodon.fail("systemctl status mastodon-web.service")
        peertube.succeed("systemctl status peertube.service")
        pixelfed.fail("systemctl status phpfpm-pixelfed.service")
  '';
}
deployment/check/common/deployerNode.nix (new file)

@@ -0,0 +1,101 @@
{
  inputs,
  lib,
  pkgs,
  config,
  ...
}:

let
  inherit (lib)
    mkOption
    mkForce
    concatLists
    types
    ;

in
{
  imports = [ ./sharedOptions.nix ];

  options.system.extraDependenciesFromModule = mkOption {
    type = types.deferredModule;
    description = ''
      Grab the derivations needed to build the given module and dump them in
      `system.extraDependencies`. You want to put in this module a superset of
      all the things that you will need on your target machines.

      NOTE: This will work as long as the union of all these configurations
      does not have conflicts that would prevent evaluation.
    '';
    default = { };
  };

  config = {
    virtualisation = {
      ## NOTE: The deployer machine needs more RAM and disk than the default.
      ## These values have been trimmed down to the gigabyte.
      ## Memory use is expected to be dominated by the NixOS evaluation,
      ## which happens on the deployer.
      memorySize = 4 * 1024;
      diskSize = 4 * 1024;
      cores = 2;
    };

    nix.settings = {
      substituters = mkForce [ ];
      hashed-mirrors = null;
      connect-timeout = 1;
      extra-experimental-features = "flakes";
    };

    system.extraDependencies =
      [
        "${inputs.flake-parts}"
        "${inputs.flake-parts.inputs.nixpkgs-lib}"
        "${inputs.nixops4}"
        "${inputs.nixops4-nixos}"
        "${inputs.nixpkgs}"

        pkgs.stdenv
        pkgs.stdenvNoCC
      ]
      ++ (
        let
          ## We build a whole NixOS system that contains the module
          ## `system.extraDependenciesFromModule`, only to grab its
          ## configuration and the store paths needed to build it and
          ## dump them in `system.extraDependencies`.
          machine =
            (pkgs.nixos [
              ./targetNode.nix
              config.system.extraDependenciesFromModule
              {
                nixpkgs.hostPlatform = "x86_64-linux";
                _module.args.inputs = inputs;
                enableAcme = config.enableAcme;
                acmeNodeIP = config.acmeNodeIP;
              }
            ]).config;

        in
        [
          machine.system.build.toplevel.inputDerivation
          machine.system.build.etc.inputDerivation
          machine.system.build.etcBasedir.inputDerivation
          machine.system.build.etcMetadataImage.inputDerivation
          machine.system.build.extraUtils.inputDerivation
          machine.system.path.inputDerivation
          machine.system.build.setEnvironment.inputDerivation
          machine.system.build.vm.inputDerivation
          machine.system.build.bootStage1.inputDerivation
          machine.system.build.bootStage2.inputDerivation
        ]
        ++ concatLists (
          lib.mapAttrsToList (
            _k: v: if v ? source.inputDerivation then [ v.source.inputDerivation ] else [ ]
          ) machine.environment.etc
        )
      );
  };
}
deployment/check/common/nixosTest.nix (new file)

@@ -0,0 +1,164 @@
{
  inputs,
  lib,
  config,
  hostPkgs,
  ...
}:

let
  inherit (builtins)
    concatStringsSep
    toJSON
    ;
  inherit (lib)
    fileset
    mkOption
    genAttrs
    attrNames
    optionalString
    ;
  inherit (hostPkgs)
    runCommandNoCC
    writeText
    system
    ;

  forConcat = xs: f: concatStringsSep "\n" (map f xs);

  ## The whole repository, with the flake at its root.
  ## FIXME: We could probably have `fileset` be the union of ./. with
  ## flake.nix and flake.lock - I doubt we need anything else.
  src = fileset.toSource {
    fileset = config.pathToRoot;
    root = config.pathToRoot;
  };

  ## We will need to override some inputs by the empty flake, so we make one.
  emptyFlake = runCommandNoCC "empty-flake" { } ''
    mkdir $out
    echo "{ outputs = { self }: {}; }" > $out/flake.nix
  '';

in
{
  imports = [
    ./sharedOptions.nix
  ];

  options = {
    ## FIXME: I wish I could just use `testScript` but with something like
    ## `mkOrder` to put this module's string before something else.
    extraTestScript = mkOption { };
  };

  config = {

    nodes =
      {
        deployer = {
          imports = [ ./deployerNode.nix ];
          _module.args.inputs = inputs;
          enableAcme = config.enableAcme;
          acmeNodeIP = config.nodes.acme.networking.primaryIPAddress;
        };
      }

      //

      (
        if config.enableAcme then
          {
            acme = {
              ## FIXME: This makes `nodes.acme` into a local resolver. Maybe
              ## this will break things once we play with DNS?
              imports = [ "${inputs.nixpkgs}/nixos/tests/common/acme/server" ];
              ## We aren't testing ACME - we just want certificates.
              systemd.services.pebble.environment.PEBBLE_VA_ALWAYS_VALID = "1";
            };
          }
        else
          { }
      )

      //

      genAttrs config.targetMachines (_: {
        imports = [ ./targetNode.nix ];
        _module.args.inputs = inputs;
        enableAcme = config.enableAcme;
        acmeNodeIP = if config.enableAcme then config.nodes.acme.networking.primaryIPAddress else null;
      });

    testScript = ''
      ${forConcat (attrNames config.nodes) (n: ''
        ${n}.start()
      '')}

      ${forConcat (attrNames config.nodes) (n: ''
        ${n}.wait_for_unit("multi-user.target")
      '')}

      with subtest("Unpacking"):
          deployer.succeed("cp -r --no-preserve=mode ${src}/* .")

      with subtest("Configure the network"):
          ${forConcat config.targetMachines (
            tm:
            let
              targetNetworkJSON = writeText "target-network.json" (
                toJSON config.nodes.${tm}.system.build.networkConfig
              );
            in
            ''
              deployer.copy_from_host("${targetNetworkJSON}", "${config.pathFromRoot}/${tm}-network.json")
            ''
          )}

      with subtest("Configure the deployer key"):
          deployer.succeed("""mkdir -p ~/.ssh && ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa""")
          deployer_key = deployer.succeed("cat ~/.ssh/id_rsa.pub").strip()
          deployer.succeed(f"echo '{deployer_key}' > ${config.pathFromRoot}/deployer.pub")
          ${forConcat config.targetMachines (tm: ''
            ${tm}.succeed(f"mkdir -p /root/.ssh && echo '{deployer_key}' >> /root/.ssh/authorized_keys")
          '')}

      with subtest("Configure the target host key"):
          ${forConcat config.targetMachines (tm: ''
            host_key = ${tm}.succeed("ssh-keyscan ${tm} | grep -v '^#' | cut -f 2- -d ' ' | head -n 1")
            deployer.succeed(f"echo '{host_key}' > ${config.pathFromRoot}/${tm}_host_key.pub")
          '')}

      ## NOTE: This is super slow. It could probably be optimised in Nix, for
      ## instance by allowing to grab things directly from the host's store.
      with subtest("Override the lock"):
          deployer.succeed("""
            nix flake lock --extra-experimental-features 'flakes nix-command' \
              --offline -v \
              --override-input flake-parts ${inputs.flake-parts} \
              --override-input nixops4 ${inputs.nixops4.packages.${system}.flake-in-a-bottle} \
              \
              --override-input nixops4-nixos ${inputs.nixops4-nixos} \
              --override-input nixops4-nixos/flake-parts ${inputs.nixops4-nixos.inputs.flake-parts} \
              --override-input nixops4-nixos/flake-parts/nixpkgs-lib ${inputs.nixops4-nixos.inputs.flake-parts.inputs.nixpkgs-lib} \
              --override-input nixops4-nixos/nixops4-nixos ${emptyFlake} \
              --override-input nixops4-nixos/nixpkgs ${inputs.nixops4-nixos.inputs.nixpkgs} \
              --override-input nixops4-nixos/nixops4 ${
                inputs.nixops4-nixos.inputs.nixops4.packages.${system}.flake-in-a-bottle
              } \
              --override-input nixops4-nixos/git-hooks-nix ${emptyFlake} \
              \
              --override-input nixpkgs ${inputs.nixpkgs} \
              --override-input git-hooks ${inputs.git-hooks} \
              ;
          """)

      ${optionalString config.enableAcme ''
        with subtest("Set up handmade DNS"):
            deployer.succeed("echo '${config.nodes.acme.networking.primaryIPAddress}' > ${config.pathFromRoot}/acme_server_ip")
      ''}

      ${config.extraTestScript}
    '';
  };
}
deployment/check/common/sharedOptions.nix (new file, 67 lines)
@@ -0,0 +1,67 @@
/**
  This file contains options shared by various components of the integration
  test: deployment resources, test nodes, target configurations, etc.

  All these components are declared as modules, but they are part of different
  evaluations, which is why the options in this file can't be shared directly.
  Instead, each component imports this module, and the same values are set for
  each of them from a common call site.

  Not every component uses every option, so not all values need to be set.
*/

{ config, lib, ... }:

let
  inherit (lib) mkOption types;

in
{
  options = {
    targetMachines = mkOption {
      type = with types; listOf str;
      description = ''
        Names of the nodes in the NixOS test that are “target machines”. This
        is used by the infrastructure to extract their network configuration,
        among other things, and re-import it in the deployment.
      '';
    };

    pathToRoot = mkOption {
      type = types.path;
      description = ''
        Path from the location of the working directory to the root of the
        repository.
      '';
    };

    pathFromRoot = mkOption {
      type = types.path;
      description = ''
        Path from the root of the repository to the working directory.
      '';
      apply = x: lib.path.removePrefix config.pathToRoot x;
    };

    pathToCwd = mkOption {
      type = types.path;
      description = ''
        Path to the current working directory. This is a shortcut for
        pathToRoot/pathFromRoot.
      '';
      default = config.pathToRoot + "/${config.pathFromRoot}";
    };

    enableAcme = mkOption {
      type = types.bool;
      description = ''
        Whether to enable ACME in the NixOS test. This will add an ACME server
        node to the test and connect all the target machines to it.
      '';
      default = false;
    };

    acmeNodeIP = mkOption {
      type = types.str;
      description = ''
        The IP of the ACME node in the NixOS test. This option will be set
        during the test to the correct value.
      '';
    };
  };
}
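The sharing mechanism described in the header comment can be sketched as follows. This is an illustrative fragment, not code from the repository: the names `shared`, `nodes.deployer`, and `resources.garage` are hypothetical stand-ins for the actual test and deployment evaluations.

```nix
let
  ## One common set of values, defined at a single call site. Every component
  ## (test node, deployment resource, ...) imports sharedOptions.nix and
  ## receives exactly these values, even though each lives in a separate
  ## module-system evaluation.
  shared = {
    imports = [ ./sharedOptions.nix ];
    targetMachines = [ "garage" "mastodon" ];
    pathToRoot = ../..;
    pathFromRoot = ./.;
    enableAcme = true;
  };
in
{
  ## A node in the NixOS test evaluation...
  nodes.deployer = { ... }: shared;
  ## ...and a deployment resource in a separate evaluation get the same values.
  resources.garage = { ... }: shared;
}
```

Since the evaluations never see each other, duplicating the *values* (rather than the option *definitions*) is the only way to keep them in sync.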
deployment/check/common/targetNode.nix (new file, 62 lines)
@@ -0,0 +1,62 @@
{
  inputs,
  config,
  lib,
  modulesPath,
  ...
}:

let
  testCerts = import "${inputs.nixpkgs}/nixos/tests/common/acme/server/snakeoil-certs.nix";
  inherit (lib) mkIf mkMerge;

in
{
  imports = [
    (modulesPath + "/profiles/qemu-guest.nix")
    (modulesPath + "/../lib/testing/nixos-test-base.nix")
    ./sharedOptions.nix
  ];

  config = mkMerge [
    {
      ## The test framework disables switching by default. That might be OK by
      ## itself, but we also use this config for getting the dependencies in
      ## `deployer.system.extraDependencies`.
      system.switch.enable = true;

      nix = {
        ## Not used; saves a large copy operation
        channel.enable = false;
        registry = lib.mkForce { };
      };

      services.openssh = {
        enable = true;
        settings.PermitRootLogin = "yes";
      };

      networking.firewall.allowedTCPPorts = [ 22 ];

      ## Test VMs don't have a bootloader by default.
      boot.loader.grub.enable = false;
    }

    (mkIf config.enableAcme {
      security.acme = {
        acceptTerms = true;
        defaults.email = "test@test.com";
        defaults.server = "https://acme.test/dir";
      };

      security.pki.certificateFiles = [
        testCerts.ca.cert
      ];

      ## FIXME: It is a bit sad that all this logistics is necessary. Look
      ## into better DNS handling.
      networking.extraHosts = "${config.acmeNodeIP} acme.test";
    })
  ];
}
deployment/check/common/targetResource.nix (new file, 48 lines)
@@ -0,0 +1,48 @@
{
  inputs,
  lib,
  config,
  ...
}:

let
  inherit (builtins) readFile;
  inherit (lib) trim mkOption types;

in

{
  imports = [ ./sharedOptions.nix ];

  options = {
    nodeName = mkOption {
      type = types.str;
      description = ''
        The name of the node in the NixOS test; needed for recovering the node
        configuration to prepare its deployment.
      '';
    };
  };

  config = {
    ssh = {
      host = config.nodeName;
      hostPublicKey = readFile (config.pathToCwd + "/${config.nodeName}_host_key.pub");
    };

    nixpkgs = inputs.nixpkgs;

    nixos.module = {
      imports = [
        ./targetNode.nix
        (lib.modules.importJSON (config.pathToCwd + "/${config.nodeName}-network.json"))
      ];

      _module.args.inputs = inputs;
      enableAcme = config.enableAcme;
      acmeNodeIP = trim (readFile (config.pathToCwd + "/acme_server_ip"));

      nixpkgs.hostPlatform = "x86_64-linux";
    };
  };
}
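A consumer of this module would instantiate it once per target machine, passing the shared values alongside `nodeName`. The sketch below is hypothetical: `makeTargetResource` and the machine-name list are illustrative, not the repository's actual API.

```nix
{ inputs, lib, ... }:
let
  ## Build one deployment resource per target machine. The surrounding
  ## nixops4 deployment framework is assumed; only the wiring is shown.
  makeTargetResource = nodeName: {
    imports = [ ./common/targetResource.nix ];
    inherit nodeName;
    ## Shared values, set identically for every component (see
    ## sharedOptions.nix).
    pathToRoot = ../..;
    pathFromRoot = ./.;
    enableAcme = true;
  };
in
## An attribute set of resources, one per machine name.
lib.genAttrs [ "garage" "mastodon" "peertube" "pixelfed" ] makeTargetResource
```

Note that `targetResource.nix` reads the `*_host_key.pub`, `*-network.json`, and `acme_server_ip` files from the working directory, so the test script must have written them (as the testScript above does) before the deployment is evaluated.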
@@ -1,5 +1,5 @@
 {
-  "domain": "abundos.eu",
+  "domain": "fediversity.net",
   "mastodon": { "enable": false },
   "peertube": { "enable": false },
   "pixelfed": { "enable": false },
@@ -1,3 +1,6 @@
 {
-  imports = [ ./check/basic/flake-part.nix ];
+  imports = [
+    ./check/basic/flake-part.nix
+    ./check/cli/flake-part.nix
+  ];
 }
@@ -167,7 +167,7 @@ in
         if env != "" then
           env
         else
-          builtins.trace "env var DEPLOYMENT not set, falling back to ./test-machines/configuration.json!" (readFile ./test-machines/configuration.json)
+          builtins.trace "env var DEPLOYMENT not set, falling back to ../deployment/configuration.sample.json!" (readFile ../deployment/configuration.sample.json)
       )
     );
   };
@@ -63,14 +63,4 @@ in
     };
   };
 };
-
-config = {
-  ## FIXME: This should clearly go somewhere else; and we should have a
-  ## `staging` vs. `production` setting somewhere.
-  security.acme = {
-    acceptTerms = true;
-    defaults.email = "nicolas.jeannerod+fediversity@moduscreate.com";
-    # defaults.server = "https://acme-staging-v02.api.letsencrypt.org/directory";
-  };
-};
 }
@@ -39,4 +39,9 @@ in
       guest.port = config.fediversity.garage.web.internalPort;
     }
   ];
+
+  security.acme = {
+    acceptTerms = true;
+    defaults.email = "something@fediversity.eu";
+  };
 }