Compare commits

...

76 commits

Author SHA1 Message Date
1de353fada
switch reusable script to package, facilitating inspection and reuse 2025-09-03 12:51:06 +02:00
18bcae835e
fix nixops4 for adjusted arguments 2025-09-03 11:53:02 +02:00
9a60948765
WIP: proxmox deployment 2025-09-03 09:20:17 +02:00
7f1dabe7cb
restore path-based behavior for non-data-model tests 2025-09-01 15:20:07 +02:00
253352616b
reusable TF deployment
note that, other than being easier to call, this keeps the TF
deployment a glorified wrapper of the SSH deployment.
2025-09-01 15:20:07 +02:00
de19210d1d
stabilize pathToRoot for TF 2025-09-01 15:20:07 +02:00
a3ffd6d23b
fix pathFromRoot to work on strings, as its removePrefix does not actually work with store versions of sub-folders 2025-09-01 15:20:07 +02:00
dd6e8850f3
stabilize pathToRoot by builtins.path 2025-09-01 15:20:07 +02:00
d35de0b457
add data model test for TF 2025-09-01 15:20:07 +02:00
51b72c79c7
simplify deployment/nixos.nix 2025-09-01 15:20:07 +02:00
76a07a6cf1
split tests to allow running the faster ssh test separately 2025-09-01 15:20:07 +02:00
0aeac419e4
factor out data model 2025-09-01 15:20:07 +02:00
220fe52612
add nixops4 data model test 2025-09-01 15:20:07 +02:00
d6731bbc7d
adjust deployment type
this is a cop-out possible until
fricklerhandwerk/Fediversity#15.
after that, this will require actually figuring out how to get `options`
for `deployment.nix` - which may need `evalModules` with
`data-model.nix`.
2025-09-01 15:20:07 +02:00
a245f52e5b
restore data model with { resources } wrappers, this time working 2025-09-01 15:20:07 +02:00
2ae4ca3f68
simpler data model, not sure it's desirable but at least it's consistent 2025-09-01 15:20:06 +02:00
9907404e94
actually rely on user package from data model 2025-09-01 15:20:06 +02:00
22accb50c0
pass system 2025-09-01 15:20:06 +02:00
6e70640caa
update test 2025-09-01 15:20:06 +02:00
5ce2f2e8ed
update deployment 2025-09-01 15:20:06 +02:00
733c500cd1
simplify auth to not accept password 2025-09-01 15:20:06 +02:00
c818e55194
rename deployment to deployment-type, disambiguating from environments' deployment 2025-09-01 15:20:06 +02:00
f705e56707
fix attrTag by adding mkOption 2025-09-01 15:20:06 +02:00
0249324d86
wrap application resources to match the input of apply 2025-09-01 15:20:06 +02:00
4d348fb9cb
stylize user-specified names by quotes to clarify their status 2025-09-01 15:20:06 +02:00
06fc1e8666
fix a bug of mismatching names in data model test
matches the name of `shell` to `operator-environment`.
2025-09-01 15:20:06 +02:00
dc07eb68c3
try and use deployment 2025-09-01 15:20:06 +02:00
871384d51f
spacing 2025-09-01 15:20:06 +02:00
d37a90723f
simplify inputDerivations 2025-09-01 15:20:06 +02:00
ebfe19ab5c
unimport qemu-guest 2025-09-01 15:20:06 +02:00
95a450023a
simplify inputDerivations 2025-09-01 15:20:06 +02:00
f75fb5eec0
simplify deployment 2025-09-01 15:20:06 +02:00
1f35ca5fe8
skip is-active sshd 2025-09-01 15:20:06 +02:00
a76b3cc4a3
- auto 2025-09-01 15:20:06 +02:00
871b6bd906
move fail in 2025-09-01 15:20:06 +02:00
eec987af06
- BatchMode 2025-09-01 15:20:06 +02:00
240d68617e
rm unused ssh settings 2025-09-01 15:20:06 +02:00
c9dc6ee392
dedupe inputDerivations 2025-09-01 15:20:06 +02:00
98599cebf4
rm cowsay 2025-09-01 15:20:06 +02:00
9746ad0e92
remove unused JSON-serialized args (sources) 2025-09-01 15:20:06 +02:00
bcb0fd5318
factor out to nixos.nix 2025-09-01 15:20:06 +02:00
41b4fa6476
rm users 2025-09-01 15:20:06 +02:00
13a97eadaf
simplify grub 2025-09-01 15:20:06 +02:00
a0e330eb85
rm users 2025-09-01 15:20:06 +02:00
fe4916c854
reenable ipv6 2025-09-01 15:20:06 +02:00
1c362d83b9
reenable firewall 2025-09-01 15:20:06 +02:00
4c360d2cd9
rm comments 2025-09-01 15:20:06 +02:00
4db88cf8df
rm getty 2025-09-01 15:20:06 +02:00
e6c590b4d7
mv attempts 2025-09-01 15:20:06 +02:00
03ea2730b0
download-attempts: settle for just targetNode 2025-09-01 15:20:06 +02:00
f8b508fa43
rm comment 2025-09-01 15:20:06 +02:00
2b66f15e7c
restore imports 2025-09-01 15:20:06 +02:00
cac911a16b
dedupe nixosTest.nix 2025-09-01 15:20:06 +02:00
4249a64c10
qemu guest 2025-09-01 15:20:06 +02:00
c1897a3684
grub 2025-09-01 15:20:06 +02:00
562e511ed8
auto login 2025-09-01 15:20:06 +02:00
c7f2e2b7aa
networking 2025-09-01 15:20:06 +02:00
d363957e37
users 2025-09-01 15:20:06 +02:00
9e3c3b9ee0
handle test outcome 2025-09-01 15:20:06 +02:00
d5bd886757
specialArgs: sources 2025-09-01 15:20:06 +02:00
67484b70ee
nix in tests: download-attempts = 1 2025-09-01 15:20:06 +02:00
767ffd9f87
ensure inputs 2025-09-01 15:20:06 +02:00
80f2bbcc4d
rm paste 2025-09-01 15:20:06 +02:00
8d5c9781d5
move stuff not needed in test out 2025-09-01 15:20:06 +02:00
ea78e850af
ensure availability of needed inputs 2025-09-01 15:20:06 +02:00
2a550e6963
reduce download attempts in test 2025-09-01 15:20:06 +02:00
34a5a62ba3
settle for hello, ditching cowsay 2025-09-01 15:20:06 +02:00
c0a5f28adf
move imports from paste to targetNode to increase parity between paste and nixosTest 2025-09-01 15:20:06 +02:00
9adfb3eae9
ditch superfluous substituters 2025-09-01 15:20:06 +02:00
7d3afbb469
pasteable command for trying without rebuilding vm 2025-09-01 15:20:06 +02:00
ff5fd5047f
add keys 2025-09-01 15:20:06 +02:00
d2b5d7e607
wip: use ssh in test 2025-09-01 15:20:06 +02:00
382bcda9d2
add deployment method: ssh 2025-09-01 15:20:06 +02:00
35e49c04f4
un-nixops 2025-09-01 15:20:06 +02:00
bb79f366e9
scaffold deployment/check/data-model from ./basic
modelify
2025-09-01 15:20:06 +02:00
63c6221479
allow different deployment types 2025-09-01 15:20:06 +02:00
47 changed files with 1862 additions and 59 deletions


@ -63,6 +63,30 @@ jobs:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-panel -L
check-deployment-model-ssh:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-model-ssh -L
check-deployment-model-nixops4:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-model-nixops4 -L
check-deployment-model-tf:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-model-tf -L
check-deployment-model-tf-proxmox:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-model-tf-proxmox -L
## NOTE: NixOps4 does not provide a good “dry run” mode, so we instead check
## proxies for resources, namely whether their `.#vmOptions.<machine>` and
## `.#nixosConfigurations.<machine>` outputs evaluate and build correctly, and


@ -5,4 +5,5 @@
];
pathToRoot = ../../..;
pathFromRoot = ./.;
useFlake = true;
}


@ -10,5 +10,10 @@ runNixOSTest {
./nixosTest.nix
];
_module.args = { inherit inputs sources; };
inherit (import ./constants.nix) targetMachines pathToRoot pathFromRoot;
inherit (import ./constants.nix)
targetMachines
pathToRoot
pathFromRoot
useFlake
;
}


@ -1,4 +1,9 @@
{ inputs, lib, ... }:
{
inputs,
lib,
config,
...
}:
{
_class = "nixosTest";
@ -8,6 +13,7 @@
sourceFileset = lib.fileset.unions [
./constants.nix
./deployment.nix
(config.pathToCwd + "/flake-under-test.nix")
];
nodes.deployer =


@ -8,4 +8,5 @@
pathToRoot = ../../..;
pathFromRoot = ./.;
enableAcme = true;
useFlake = true;
}


@ -15,5 +15,6 @@ runNixOSTest {
pathToRoot
pathFromRoot
enableAcme
useFlake
;
}


@ -1,6 +1,7 @@
{
inputs,
hostPkgs,
config,
lib,
...
}:
@ -19,6 +20,7 @@ in
sourceFileset = lib.fileset.unions [
./constants.nix
./deployments.nix
(config.pathToCwd + "/flake-under-test.nix")
# REVIEW: I would like to be able to grab all of `/deployment` minus
# `/deployment/check`, but I can't because there is a bunch of other files


@ -0,0 +1,25 @@
{
lib,
...
}:
let
inherit (lib) mkOption types;
in
{
options = {
host = mkOption {
type = types.str;
description = "name of the host to deploy to";
};
targetSystem = mkOption {
type = types.str;
description = "name of the host to deploy to";
};
sshOpts = mkOption {
description = "Extra SSH options (`-o`) to use.";
type = types.listOf types.str;
default = [ ];
example = "ConnectTimeout=60";
};
};
}
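
A minimal sketch of exercising these options on their own with `lib.evalModules`, run from the repository root (values illustrative, mirroring the TF checks later in this diff):

let
  lib = (import <nixpkgs> { }).lib;
in
(lib.evalModules {
  modules = [
    ./deployment/check/common/data-model-options.nix
    {
      host = "target";
      targetSystem = "x86_64-linux";
      sshOpts = [ "ConnectTimeout=1" ];
    }
  ];
}).config.sshOpts
# => [ "ConnectTimeout=1" ]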


@ -0,0 +1,290 @@
{
config,
system,
inputs ? (import ../../../default.nix { }).inputs, # XXX can't be serialized
sources ? import ../../../npins,
...
}@args:
let
# Given this module's location (`self`) and its (serializable) `args`, we know
# enough to have it re-call itself later to extract other information.
# We use this to build a deployment script from the desired NixOS configuration,
# which is not itself serializable; Nix also makes it hard to produce that
# configuration's derivation to pass through without a `nix-instantiate` call,
# which would in turn need to be given the (unserializable) NixOS configuration.
self = "deployment/check/common/data-model.nix";
inherit (sources) nixpkgs;
pkgs = import nixpkgs { inherit system; };
inherit (pkgs) lib;
deployment-config = config;
inherit (deployment-config)
nodeName
pathToRoot
targetSystem
sshOpts
;
inherit (lib) mkOption types;
eval =
module:
(lib.evalModules {
specialArgs = {
inherit pkgs inputs;
};
modules = [
module
../../data-model.nix
];
}).config;
fediversity = eval (
{ config, ... }:
{
config = {
resources.login-shell = {
description = "The operator needs to be able to log into the shell";
request =
{ ... }:
{
_class = "fediversity-resource-request";
options = {
wheel = mkOption {
description = "Whether the login user needs root permissions";
type = types.bool;
default = false;
};
packages = mkOption {
description = "Packages that need to be available in the user environment";
type = with types; attrsOf package;
};
};
};
policy =
{ config, ... }:
{
_class = "fediversity-resource-policy";
options = {
username = mkOption {
description = "Username for the operator";
type = types.str; # TODO: use the proper constraints from NixOS
};
wheel = mkOption {
description = "Whether to allow login with root permissions";
type = types.bool;
default = false;
};
};
config = {
resource-type = types.raw; # TODO: splice out the user type from NixOS
apply =
requests:
let
# Filter out requests that need wheel if policy doesn't allow it
validRequests = lib.filterAttrs (
_name: req: !req.login-shell.wheel || config.wheel
) requests.resources;
in
lib.optionalAttrs (validRequests != { }) {
${config.username} = {
isNormalUser = true;
packages =
with lib;
attrValues (concatMapAttrs (_name: request: request.login-shell.packages) validRequests);
extraGroups = lib.optional config.wheel "wheel";
};
};
};
};
};
applications.hello =
{ ... }:
{
description = ''Command-line tool that will print "Hello, world!" on the terminal'';
module =
{ ... }:
{
options.enable = lib.mkEnableOption "Hello in the shell";
};
implementation = cfg: {
input = cfg;
output.resources = lib.optionalAttrs cfg.enable {
hello.login-shell.packages.hello = pkgs.hello;
};
};
};
environments =
let
mkNixosConfiguration =
environment: requests:
{ ... }:
{
imports = [
./data-model-options.nix
../common/sharedOptions.nix
../common/targetNode.nix
"${nixpkgs}/nixos/modules/profiles/qemu-guest.nix"
];
users.users = environment.config.resources."operator-environment".login-shell.apply {
resources = lib.filterAttrs (_name: value: value ? login-shell) (
lib.concatMapAttrs (
k': req: lib.mapAttrs' (k: lib.nameValuePair "${k'}.${k}") req.resources
) requests
);
};
};
in
{
single-nixos-vm-ssh = environment: {
resources."operator-environment".login-shell.username = "operator";
implementation =
{
required-resources,
...
}:
{
input = required-resources;
output.ssh-host = {
nixos-configuration = mkNixosConfiguration environment required-resources;
ssh = {
username = "root";
host = nodeName;
key-file = null;
};
};
};
};
single-nixos-vm-nixops4 = environment: {
resources."operator-environment".login-shell.username = "operator";
implementation =
{
required-resources,
...
}:
{
input = required-resources;
output.nixops4 =
{ providers, ... }:
{
providers = {
inherit (inputs.nixops4.modules.nixops4Provider) local;
};
resources.${nodeName} = {
type = providers.local.exec;
imports = [
inputs.nixops4-nixos.modules.nixops4Resource.nixos
../common/targetResource.nix
];
nixos.module = mkNixosConfiguration environment required-resources;
_module.args = { inherit inputs sources; };
inherit (deployment-config) nodeName pathToRoot pathFromRoot;
};
};
};
};
single-nixos-vm-tf = environment: {
resources."operator-environment".login-shell.username = "operator";
implementation =
{
required-resources,
deployment-name,
}:
{
input = required-resources;
output.tf-host = {
nixos-configuration = mkNixosConfiguration environment required-resources;
system = targetSystem;
ssh = {
username = "root";
host = nodeName;
key-file = null;
inherit sshOpts;
};
module = self;
inherit args deployment-name;
root-path = pathToRoot;
};
};
};
single-nixos-vm-tf-proxmox = environment: {
resources."operator-environment".login-shell.username = "operator";
implementation =
{
required-resources,
deployment-name,
}:
{
input = required-resources;
output.tf-proxmox-host = {
nixos-configuration = mkNixosConfiguration environment required-resources;
system = targetSystem;
ssh = {
username = "root";
host = nodeName;
key-file = null;
inherit sshOpts;
};
module = self;
inherit args deployment-name;
root-path = pathToRoot;
};
};
};
};
};
options = {
"example-configuration" = mkOption {
type = config.configuration;
default = {
enable = true;
applications.hello.enable = true;
};
};
"ssh-deployment" =
let
env = config.environments."single-nixos-vm-ssh";
in
mkOption {
type = env.resource-mapping.output-type;
default = env.deployment {
deployment-name = "ssh-deployment";
configuration = config."example-configuration";
};
};
"nixops4-deployment" =
let
env = config.environments."single-nixos-vm-nixops4";
in
mkOption {
type = env.resource-mapping.output-type;
default = env.deployment {
deployment-name = "nixops4-deployment";
configuration = config."example-configuration";
};
};
"tf-deployment" =
let
env = config.environments."single-nixos-vm-tf";
in
mkOption {
type = env.resource-mapping.output-type;
default = env.deployment {
deployment-name = "tf-deployment";
configuration = config."example-configuration";
};
};
"tf-proxmox-deployment" =
let
env = config.environments."single-nixos-vm-tf-proxmox";
in
mkOption {
type = env.resource-mapping.output-type;
default = env.deployment {
deployment-name = "tf-proxmox-deployment";
configuration = config."example-configuration";
};
};
};
}
);
in
fediversity
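
The per-backend checks further down consume this module by importing it with a serializable `config` and picking the desired deployment out of the evaluated options; roughly (values illustrative, mirroring the SSH check):

(import ./deployment/check/common/data-model.nix {
  system = "x86_64-linux";
  config = {
    nodeName = "ssh";
    pathToRoot = builtins.path { path = ./.; name = "root"; };
    pathFromRoot = "/deployment/check/data-model-ssh";
  };
  # `inputs` is deliberately not passed: only serializable arguments survive the self-call
})."ssh-deployment".ssh-host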


@ -59,6 +59,7 @@ in
inputs.nixpkgs
sources.flake-parts
sources.nixpkgs
sources.flake-inputs
sources.git-hooks


@ -76,8 +76,6 @@ in
./sharedOptions.nix
./targetNode.nix
./targetResource.nix
(config.pathToCwd + "/flake-under-test.nix")
];
acmeNodeIP = config.nodes.acme.networking.primaryIPAddress;
@ -164,31 +162,38 @@ in
deployer.succeed(f"echo '{host_key}' > ${config.pathFromRoot}/${tm}_host_key.pub")
'')}
## NOTE: This is super slow. It could probably be optimised in Nix, for
## instance by allowing to grab things directly from the host's store.
##
## NOTE: We use the repository as-is (cf `src` above), overriding only
## `flake.nix` by our `flake-under-test.nix`. We also override the flake
## lock file to use locally available inputs, as we cannot download them.
##
with subtest("Override the flake and its lock"):
deployer.succeed("cp ${config.pathFromRoot}/flake-under-test.nix flake.nix")
deployer.succeed("""
nix flake lock --extra-experimental-features 'flakes nix-command' \
--offline -v \
--override-input nixops4 ${inputs.nixops4.packages.${system}.flake-in-a-bottle} \
\
--override-input nixops4-nixos ${inputs.nixops4-nixos} \
--override-input nixops4-nixos/flake-parts ${inputs.nixops4-nixos.inputs.flake-parts} \
--override-input nixops4-nixos/flake-parts/nixpkgs-lib ${inputs.nixops4-nixos.inputs.flake-parts.inputs.nixpkgs-lib} \
--override-input nixops4-nixos/nixops4-nixos ${emptyFlake} \
--override-input nixops4-nixos/nixpkgs ${inputs.nixops4-nixos.inputs.nixpkgs} \
--override-input nixops4-nixos/nixops4 ${
inputs.nixops4-nixos.inputs.nixops4.packages.${system}.flake-in-a-bottle
} \
--override-input nixops4-nixos/git-hooks-nix ${emptyFlake} \
;
""")
${
if config.useFlake then
''
## NOTE: This is super slow. It could probably be optimised in Nix, for
## instance by allowing to grab things directly from the host's store.
##
## NOTE: We use the repository as-is (cf `src` above), overriding only
## `flake.nix` by our `flake-under-test.nix`. We also override the flake
## lock file to use locally available inputs, as we cannot download them.
##
with subtest("Override the flake and its lock"):
deployer.succeed("cp ${config.pathFromRoot}/flake-under-test.nix flake.nix")
deployer.succeed("""
nix flake lock --extra-experimental-features 'flakes nix-command' \
--offline -v \
--override-input nixops4 ${inputs.nixops4.packages.${system}.flake-in-a-bottle} \
\
--override-input nixops4-nixos ${inputs.nixops4-nixos} \
--override-input nixops4-nixos/flake-parts ${inputs.nixops4-nixos.inputs.flake-parts} \
--override-input nixops4-nixos/flake-parts/nixpkgs-lib ${inputs.nixops4-nixos.inputs.flake-parts.inputs.nixpkgs-lib} \
--override-input nixops4-nixos/nixops4-nixos ${emptyFlake} \
--override-input nixops4-nixos/nixpkgs ${inputs.nixops4-nixos.inputs.nixpkgs} \
--override-input nixops4-nixos/nixops4 ${
inputs.nixops4-nixos.inputs.nixops4.packages.${system}.flake-in-a-bottle
} \
--override-input nixops4-nixos/git-hooks-nix ${emptyFlake} \
;
""")
''
else
""
}
${optionalString config.enableAcme ''
with subtest("Set up handmade DNS"):


@ -32,11 +32,11 @@ in
};
pathFromRoot = mkOption {
type = types.path;
type = types.either types.path types.str;
description = ''
Path from the root of the repository to the working directory.
'';
apply = x: lib.path.removePrefix config.pathToRoot x;
apply = x: if lib.isString x then x else lib.path.removePrefix config.pathToRoot x;
};
pathToCwd = mkOption {
@ -64,5 +64,7 @@ in
during the test to the correct value.
'';
};
useFlake = lib.mkEnableOption "Use a flake in the test.";
};
}
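
Both spellings used by the constants files in this diff are now accepted: a path is still made relative to `pathToRoot`, while a string is passed through unchanged.

pathFromRoot = ./.;                                  # path: relativized via lib.path.removePrefix config.pathToRoot
pathFromRoot = "/deployment/check/data-model-ssh";   # string: used as-is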


@ -28,6 +28,8 @@ in
system.switch.enable = true;
nix = {
# short-cut network time-outs
settings.download-attempts = 1;
## Not used; save a large copy operation
channel.enable = false;
registry = lib.mkForce { };


@ -0,0 +1,9 @@
{
targetMachines = [
"nixops4"
];
pathToRoot = ../../..;
pathFromRoot = ./.;
enableAcme = true;
useFlake = true;
}


@ -0,0 +1,22 @@
{
runNixOSTest,
inputs,
sources,
}:
runNixOSTest {
imports = [
../../data-model.nix
../../function.nix
../common/nixosTest.nix
./nixosTest.nix
];
_module.args = { inherit inputs sources; };
inherit (import ./constants.nix)
targetMachines
pathToRoot
pathFromRoot
enableAcme
useFlake
;
}


@ -0,0 +1,29 @@
{
inputs = {
nixops4.follows = "nixops4-nixos/nixops4";
nixops4-nixos.url = "github:nixops4/nixops4-nixos";
};
outputs =
inputs:
import ./mkFlake.nix inputs (
{ inputs, ... }:
let
system = "x86_64-linux";
in
{
imports = [
inputs.nixops4.modules.flake.default
];
nixops4Deployments.check-deployment-model =
(import ./deployment/check/common/data-model.nix {
inherit system inputs;
config = {
inherit (import ./deployment/check/data-model-nixops4/constants.nix) pathToRoot pathFromRoot;
nodeName = "nixops4";
};
})."nixops4-deployment".nixops4;
}
);
}
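
For reference, the nixops4 check's test script later in this diff drives this flake-under-test with (quoted from its nixosTest.nix):

nixops4 apply check-deployment-model --show-trace --verbose --no-interactive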


@ -0,0 +1,52 @@
{
lib,
config,
inputs,
...
}:
{
_class = "nixosTest";
imports = [
../common/data-model-options.nix
];
name = "deployment-model";
sourceFileset = lib.fileset.unions [
../../data-model.nix
../../function.nix
../common/data-model.nix
../common/data-model-options.nix
./constants.nix
(config.pathToCwd + "/flake-under-test.nix")
];
nodes.deployer =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
inputs.nixops4.packages.${system}.default
jq
];
# FIXME: sad times
system.extraDependencies = with pkgs; [
jq
jq.inputDerivation
];
system.extraDependenciesFromModule =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
hello
];
};
};
extraTestScript = ''
with subtest("nixops4"):
nixops4.fail("hello 1>&2")
deployer.succeed("nixops4 apply check-deployment-model --show-trace --verbose --no-interactive 1>&2")
nixops4.succeed("su - operator -c hello 1>&2")
'';
}


@ -0,0 +1,12 @@
{
targetMachines = [
"ssh"
];
# stabilize the path, as a bare path would yield distinct store paths when applied multiple times
pathToRoot = builtins.path {
path = ../../..;
name = "root";
};
pathFromRoot = "/deployment/check/data-model-ssh";
enableAcme = true;
}
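
The comment above contrasts a bare path literal with `builtins.path`; sketched:

pathToRoot = ../../..;                                            # may be copied to the store under varying names across filtered copies
pathToRoot = builtins.path { path = ../../..; name = "root"; };   # always imported as /nix/store/<hash>-root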


@ -0,0 +1,21 @@
{
runNixOSTest,
inputs,
sources,
}:
runNixOSTest {
imports = [
../../data-model.nix
../../function.nix
../common/nixosTest.nix
./nixosTest.nix
];
_module.args = { inherit inputs sources; };
inherit (import ./constants.nix)
targetMachines
pathToRoot
pathFromRoot
enableAcme
;
}


@ -0,0 +1,66 @@
{
lib,
config,
pkgs,
...
}:
let
inherit (import ./constants.nix) pathToRoot pathFromRoot;
inherit (pkgs) system;
deployment-config = {
inherit pathToRoot pathFromRoot;
inherit (config) enableAcme;
acmeNodeIP = if config.enableAcme then config.nodes.acme.networking.primaryIPAddress else null;
nodeName = "ssh";
};
deploy =
(import ../common/data-model.nix {
inherit system;
config = deployment-config;
# opt not to pass `inputs`, as we could only pass serializable arguments through to its self-call
})."ssh-deployment".ssh-host.run;
in
{
_class = "nixosTest";
imports = [
../common/data-model-options.nix
];
name = "deployment-model";
sourceFileset = lib.fileset.unions [
../../data-model.nix
../../function.nix
../common/data-model.nix
../common/data-model-options.nix
./constants.nix
];
nodes.deployer =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
jq
deploy
];
system.extraDependenciesFromModule =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
hello
];
};
};
extraTestScript = ''
with subtest("Check the status before deployment"):
ssh.fail("hello 1>&2")
with subtest("Run the deployment"):
deployer.succeed("""
${lib.getExe deploy}
""")
ssh.wait_for_unit("multi-user.target")
ssh.succeed("su - operator -c hello 1>&2")
'';
}


@ -0,0 +1,11 @@
{
targetMachines = [
"mypve"
];
pathToRoot = builtins.path {
path = ../../..;
name = "root";
};
pathFromRoot = "/deployment/check/data-model-tf-proxmox";
enableAcme = true;
}


@ -0,0 +1,48 @@
{
runNixOSTest,
inputs,
sources,
system,
}:
let
pkgs = import sources.nixpkgs-stable {
inherit system;
overlays = [ overlay ];
};
overlay = _: _: {
inherit
(import "${sources.proxmox-nixos}/pkgs" {
craneLib = pkgs.callPackage "${sources.crane}/lib" { };
# breaks from https://github.com/NixOS/nixpkgs/commit/06b354eb2dc535c57e9b4caaa16d79168f117a26,
# which updates libvncserver to 0.9.15, which was not yet patched at https://git.proxmox.com/?p=vncterm.git.
inherit pkgs;
# not so picky about version for our purposes
pkgs-unstable = pkgs;
})
proxmox-ve
pve-ha-manager
;
};
in
runNixOSTest {
node.specialArgs = {
inherit
sources
pkgs
;
};
imports = [
../../data-model.nix
../../function.nix
../common/nixosTest.nix
./nixosTest.nix
];
_module.args = { inherit inputs sources; };
inherit (import ./constants.nix)
targetMachines
pathToRoot
pathFromRoot
enableAcme
;
}


@ -0,0 +1,217 @@
{
lib,
pkgs,
sources,
...
}:
let
inherit (import ./constants.nix) pathToRoot pathFromRoot;
inherit (pkgs) system;
deployment-config = {
inherit pathToRoot pathFromRoot;
nodeName = "mypve";
targetSystem = system;
sshOpts = [
"ConnectTimeout=1"
"ServerAliveInterval=1"
];
};
deployment =
(import ../common/data-model.nix {
inherit system;
config = deployment-config;
# opt not to pass `inputs`, as we could only pass serializable arguments through to its self-call
})."tf-proxmox-deployment".tf-proxmox-host;
# deployment = setup.tf-proxmox-host;
# tracking non-tarball downloads seems unsupported still in npins:
# https://github.com/andir/npins/issues/163
minimalIso = pkgs.fetchurl {
url = "https://releases.nixos.org/nixos/24.05/nixos-24.05.7139.bcba2fbf6963/nixos-minimal-24.05.7139.bcba2fbf6963-x86_64-linux.iso";
hash = "sha256-plre/mIHdIgU4xWU+9xErP+L4i460ZbcKq8iy2n4HT8=";
};
machine =
(import "${pkgs.nixos-generators}/share/nixos-generator/nixos-generate.nix" {
inherit system;
inherit (sources) nixpkgs;
formatConfig = "${pkgs.nixos-generators}/share/nixos-generator/formats/proxmox.nix";
configuration = deployment.nixos-configuration;
}).config;
in
{
_class = "nixosTest";
imports = [
../common/data-model-options.nix
];
name = "deployment-model";
sourceFileset = lib.fileset.unions [
../../run/tf-proxmox/run.sh
];
nodes.mypve =
{ sources, ... }:
{
imports = [
"${sources.proxmox-nixos}/modules/proxmox-ve"
];
users.users.root = {
password = "mytestpw";
hashedPasswordFile = lib.mkForce null;
};
services.proxmox-ve = {
enable = true;
ipAddress = "192.168.1.1";
vms = {
myvm1 = {
vmid = 100;
memory = 1024;
cores = 1;
sockets = 1;
kvm = true;
scsi = [ { file = "local:16"; } ];
cdrom = "local:iso/minimal.iso";
};
};
};
virtualisation = {
additionalPaths = [ minimalIso ];
diskSize = 4096;
memorySize = 2048;
};
};
nodes.deployer =
{ pkgs, ... }:
{
nix.nixPath = [
(lib.concatStringsSep ":" (lib.mapAttrsToList (k: v: k + "=" + v) sources))
];
environment.systemPackages = with pkgs; [
(pkgs.callPackage ../../run/tf-proxmox/tf.nix { inherit sources; })
jq
nixos-generators
];
# needed only when building from deployer
system.extraDependenciesFromModule =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
hello
];
};
system.extraDependencies =
# (lib.lists.map lib.traceVal)
(
(lib.lists.concatMap (
pkg:
(
if
pkg ? inputDerivation
# error: output '/nix/store/dki9d3vldafg9ydrfm7x0g0rr0qljk98-sudo-1.9.16p2' is not allowed to refer to the following paths:
# /nix/store/2xdmps65ryklmbf025bm4pxv16gb8ajv-sudo-1.9.16p2.tar.gz
# /nix/store/58br4vk3q5akf4g8lx0pqzfhn47k3j8d-bash-5.2p37
# /nix/store/8v6k283dpbc0qkdq81nb6mrxrgcb10i1-gcc-wrapper-14-20241116
# /nix/store/9r1nl9ksiyszy4qzzg6y2gcdkca0xmhy-stdenv-linux
# /nix/store/a4rmp6in7igbl1wbz9pli5nq0wiclq0y-groff-1.23.0
# /nix/store/dki9d3vldafg9ydrfm7x0g0rr0qljk98-sudo-1.9.16p2
# /nix/store/f5y58qz2fzpzgkhp0nizixi10x04ppyy-linux-pam-1.6.1
# /nix/store/shkw4qm9qcw5sc5n1k5jznc83ny02r39-default-builder.sh
# /nix/store/vj1c3wf9c11a0qs6p3ymfvrnsdgsdcbq-source-stdenv.sh
# /nix/store/yh6qg1nsi5h2xblcr67030pz58fsaxx3-coreutils-9.6
&& !(lib.strings.hasInfix "sudo" (lib.traceVal (builtins.toString pkg)))
then
lib.trace "yes" [
# lib.traceVal pkg.inputDerivation # not of type `path in the Nix store'
((x: builtins.trace "${builtins.toString pkg}: ${builtins.toString (lib.isPath x.inputDerivation)}" x) pkg).inputDerivation
]
else
lib.trace "no" [ ]
)
) machine.environment.systemPackages)
++ [
((x: builtins.trace "machine.system.build.toplevel.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.build.toplevel.inputDerivation)
((x: builtins.trace "machine.system.build.etc.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.build.etc.inputDerivation)
((x: builtins.trace "machine.system.build.etcBasedir.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.build.etcBasedir.inputDerivation)
((x: builtins.trace "machine.system.build.etcMetadataImage.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.build.etcMetadataImage.inputDerivation)
((x: builtins.trace "machine.system.build.extraUtils.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.build.extraUtils.inputDerivation)
((x: builtins.trace "machine.system.path.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.path.inputDerivation)
((x: builtins.trace "machine.system.build.setEnvironment.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.build.setEnvironment.inputDerivation)
((x: builtins.trace "machine.system.build.vm.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.build.vm.inputDerivation)
((x: builtins.trace "machine.system.build.bootStage1.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.build.bootStage1.inputDerivation)
((x: builtins.trace "machine.system.build.bootStage2.inputDerivation: ${builtins.toString (lib.isPath x)}" x) machine.system.build.bootStage2.inputDerivation)
pkgs.gnu-config
# pkgs.gnu-config.inputDerivation
pkgs.byacc
# pkgs.byacc.inputDerivation
pkgs.stdenv
pkgs.stdenvNoCC
sources.nixpkgs
pkgs.vte
(
## We build a whole NixOS system that contains the module
## `system.extraDependenciesFromModule`, only to grab its
## configuration and the store paths needed to build it and
## dump them in `system.extraDependencies`.
# see: https://git.fediversity.eu/Fediversity/Fediversity/pulls/338/files
pkgs.closureInfo {
rootPaths = map (drv: drv.drvPath) (
[
machine.system.build.toplevel.inputDerivation
machine.system.build.etc.inputDerivation
machine.system.build.etcBasedir.inputDerivation
machine.system.build.etcMetadataImage.inputDerivation
machine.system.build.extraUtils.inputDerivation
machine.system.path.inputDerivation
machine.system.build.setEnvironment.inputDerivation
machine.system.build.vm.inputDerivation
machine.system.build.bootStage1.inputDerivation
machine.system.build.bootStage2.inputDerivation
]
++ lib.concatMap (x: if x ? source.inputDerivation then [ x.source.inputDerivation ] else [ ]) (
lib.attrValues machine.environment.etc
)
++ machine.environment.systemPackages
);
}
)
]
++ lib.concatLists (
lib.mapAttrsToList (
_k: v:
if v ? source.inputDerivation then
[
# v.source.inputDerivation
((x: builtins.trace "${builtins.toString (lib.attrNames v)}: ${builtins.toString (lib.isPath x.source.inputDerivation)}" x) v).source.inputDerivation
]
else
[ ]
) machine.environment.etc
)
);
};
extraTestScript = ''
mypve.wait_for_unit("pveproxy.service")
assert "running" in mypve.succeed("pveproxy status")
mypve.succeed("mkdir -p /run/pve")
assert "Proxmox" in mypve.succeed("curl -s -i -k https://localhost:8006")
# mypve.succeed("pvesh set /access/password --userid root@pam --password mypwdlol --confirmation-password mytestpw 1>&2")
# mypve.succeed("curl -s -i -k -d '{\"userid\":\"root@pam\",\"password\":\"mypwdhaha\",\"confirmation-password\":\"mypwdlol\"}' -X PUT https://localhost:8006/api2/json/access/password 1>&2")
# on mistake: 401 No ticket
# mypve.succeed("haha")
with subtest("Run the deployment"):
# target.fail("hello 1>&2")
deployer.succeed("""
${deployment.run}
""")
# target.wait_for_unit("multi-user.target")
# target.succeed("su - operator -c hello 1>&2")
'';
}


@ -0,0 +1,11 @@
{
targetMachines = [
"target"
];
pathToRoot = builtins.path {
path = ../../..;
name = "root";
};
pathFromRoot = "/deployment/check/data-model-tf";
enableAcme = true;
}


@ -0,0 +1,21 @@
{
runNixOSTest,
inputs,
sources,
}:
runNixOSTest {
imports = [
../../data-model.nix
../../function.nix
../common/nixosTest.nix
./nixosTest.nix
];
_module.args = { inherit inputs sources; };
inherit (import ./constants.nix)
targetMachines
pathToRoot
pathFromRoot
enableAcme
;
}


@ -0,0 +1,65 @@
{
lib,
pkgs,
...
}:
let
inherit (import ./constants.nix) pathToRoot pathFromRoot;
inherit (pkgs) system;
deployment-config = {
inherit pathToRoot pathFromRoot;
nodeName = "target";
targetSystem = system;
sshOpts = [
"ConnectTimeout=1"
"ServerAliveInterval=1"
];
};
deployment =
(import ../common/data-model.nix {
inherit system;
config = deployment-config;
# opt not to pass `inputs`, as we could only pass serializable arguments through to its self-call
})."tf-deployment".tf-host;
in
{
_class = "nixosTest";
imports = [
../common/data-model-options.nix
];
name = "deployment-model";
sourceFileset = lib.fileset.unions [
../../run/tf-single-host/run.sh
];
nodes.deployer =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
(pkgs.callPackage ../../run/tf-single-host/tf.nix { })
jq
];
# needed only when building from deployer
system.extraDependenciesFromModule =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
hello
];
};
};
extraTestScript = ''
with subtest("ssh: Check the status before deployment"):
target.fail("hello 1>&2")
with subtest("ssh: Run the deployment"):
deployer.succeed("""
${deployment.run}
""")
target.wait_for_unit("multi-user.target")
target.succeed("su - operator -c hello 1>&2")
'';
}


@ -8,4 +8,5 @@
pathToRoot = ../../..;
pathFromRoot = ./.;
enableAcme = true;
useFlake = true;
}


@ -15,5 +15,6 @@ runNixOSTest {
pathToRoot
pathFromRoot
enableAcme
useFlake
;
}


@ -128,6 +128,7 @@ in
sourceFileset = lib.fileset.unions [
./constants.nix
./deployment.nix
(config.pathToCwd + "/flake-under-test.nix")
# REVIEW: I would like to be able to grab all of `/deployment` minus
# `/deployment/check`, but I can't because there is a bunch of other files


@ -6,14 +6,13 @@ let
module:
(lib.evalModules {
specialArgs = {
inherit inputs;
inherit pkgs inputs;
};
modules = [
module
./data-model.nix
];
}).config;
nixops4Deployment = inputs.nixops4.modules.nixops4Deployment.default;
inherit (inputs.nixops4.lib) mkDeployment;
in
{
@ -101,18 +100,18 @@ in
};
implementation = cfg: {
input = cfg;
output = lib.optionalAttrs cfg.enable {
resources.hello.login-shell.packages.hello = pkgs.hello;
output.resources = lib.optionalAttrs cfg.enable {
hello.login-shell.packages.hello = pkgs.hello;
};
};
};
environments.single-nixos-vm =
{ config, ... }:
{
resources.operator-environment.login-shell.username = "operator";
resources."operator-environment".login-shell.username = "operator";
implementation = requests: {
input = requests;
output =
output.nixops4 =
{ providers, ... }:
{
providers = {
@ -126,9 +125,13 @@ in
nixos.module =
{ ... }:
{
users.users = config.resources.shell.login-shell.apply (
lib.filterAttrs (_name: value: value ? login-shell) requests
);
users.users = config.resources."operator-environment".login-shell.apply {
resources = lib.filterAttrs (_name: value: value ? login-shell) (
lib.concatMapAttrs (
k': req: lib.mapAttrs' (k: lib.nameValuePair "${k'}.${k}") req.resources
) requests
);
};
};
};
};
@ -136,7 +139,7 @@ in
};
};
options = {
example-configuration = mkOption {
"example-configuration" = mkOption {
type = config.configuration;
readOnly = true;
default = {
@ -144,20 +147,22 @@ in
applications.hello.enable = true;
};
};
example-deployment = mkOption {
type = types.submodule nixops4Deployment;
"example-deployment" = mkOption {
type = config.environments.single-nixos-vm.resource-mapping.output-type;
readOnly = true;
default = config.environments.single-nixos-vm.deployment config.example-configuration;
default = config.environments.single-nixos-vm.deployment config."example-configuration";
};
};
}
);
resources = fediversity.applications.hello.resources fediversity.example-configuration.applications.hello;
resources =
fediversity.applications.hello.resources
fediversity."example-configuration".applications.hello;
hello-shell = resources.resources.hello.login-shell;
environment = fediversity.environments.single-nixos-vm.resources.operator-environment.login-shell;
environment = fediversity.environments.single-nixos-vm.resources."operator-environment".login-shell;
result = mkDeployment {
modules = [
(fediversity.environments.single-nixos-vm.deployment fediversity.example-configuration)
(fediversity.environments.single-nixos-vm.deployment fediversity."example-configuration")
];
};


@ -2,18 +2,32 @@
lib,
config,
inputs,
pkgs,
...
}:
let
inherit (lib) mkOption types;
inherit (lib.types)
attrsOf
attrTag
attrsOf
deferredModuleWith
submodule
optionType
functionTo
nullOr
optionType
raw
str
submodule
;
toBash =
v:
lib.replaceStrings [ "\"" ] [ "\\\"" ] (
if lib.isPath v || builtins.isNull v then
toString v
else if lib.isString v then
v
else
lib.strings.toJSON v
);
functionType = import ./function.nix;
application-resources = submodule {
@ -33,12 +47,199 @@ let
{
_class = "nixops4Deployment";
_module.args = {
resourceProviderSystem = builtins.currentSystem;
resourceProviderSystem = pkgs.system;
resources = { };
};
}
];
};
nixos-configuration = mkOption {
description = "A NixOS configuration.";
type = raw;
};
host-ssh = mkOption {
description = "SSH connection info to connect to a single host.";
type = submodule {
options = {
host = mkOption {
description = "the host to access by SSH";
type = str;
};
username = mkOption {
description = "the SSH user to use";
type = nullOr str;
default = null;
};
key-file = mkOption {
description = "path to the user's SSH private key";
type = nullOr str;
example = "/root/.ssh/id_ed25519";
};
sshOpts = mkOption {
description = "Extra SSH options (`-o`) to use.";
type = types.listOf str;
default = [ ];
example = "ConnectTimeout=60";
};
};
};
};
deployment-type = attrTag {
ssh-host = mkOption {
description = "A deployment by SSH to update a single existing NixOS host.";
type = submodule {
options = {
inherit nixos-configuration;
ssh = host-ssh;
};
};
};
nixops4 = mkOption {
description = "A NixOps4 NixOS deployment. For an example, see https://github.com/nixops4/nixops4-nixos/blob/main/example/deployment.nix.";
type = nixops4Deployment;
};
tf-host = mkOption {
description = "A Terraform deployment by SSH to update a single existing NixOS host.";
type = submodule (tf-host: {
options = {
system = mkOption {
description = "The architecture of the system to deploy to.";
type = types.str;
};
inherit nixos-configuration;
ssh = host-ssh;
module = mkOption {
description = "The module to call to obtain the NixOS configuration from.";
type = types.str;
};
args = mkOption {
description = "The arguments with which to call the module to obtain the NixOS configuration.";
type = types.attrs;
};
deployment-name = mkOption {
description = "The name of the deployment for which to obtain the NixOS configuration.";
type = types.str;
};
root-path = mkOption {
description = "The path to the root of the repository.";
type = types.path;
};
run = mkOption {
type = types.package;
# error: The option `tf-deployment.tf-host.run' is read-only, but it's set multiple times.
# readOnly = true;
default =
let
inherit (tf-host.config)
system
ssh
module
args
deployment-name
root-path
;
inherit (ssh)
host
username
key-file
sshOpts
;
environment = {
key_file = key-file;
deployment_name = deployment-name;
root_path = root-path;
ssh_opts = sshOpts;
inherit
system
host
username
module
args
;
deployment_type = "tf-host";
};
tf-env = pkgs.callPackage ./run/tf-single-host/tf-env.nix { };
in
pkgs.writeShellScriptBin "deploy-ssh.sh" ''
env ${toString (lib.mapAttrsToList (k: v: "TF_VAR_${k}=\"${toBash v}\"") environment)} \
tf_env=${tf-env} bash ./deployment/run/tf-single-host/run.sh
'';
};
};
});
};
tf-proxmox-host = mkOption {
description = "A Terraform deployment by SSH to update a single existing NixOS host.";
type = submodule (tf-host: {
options = {
system = mkOption {
description = "The architecture of the system to deploy to.";
type = types.str;
};
inherit nixos-configuration;
ssh = host-ssh;
# TODO: add proxmox info
module = mkOption {
description = "The module to call to obtain the NixOS configuration from.";
type = types.str;
};
args = mkOption {
description = "The arguments with which to call the module to obtain the NixOS configuration.";
type = types.attrs;
};
deployment-name = mkOption {
description = "The name of the deployment for which to obtain the NixOS configuration.";
type = types.str;
};
root-path = mkOption {
description = "The path to the root of the repository.";
type = types.path;
};
run = mkOption {
type = types.str;
# error: The option `tf-deployment.tf-host.run' is read-only, but it's set multiple times.
# readOnly = true;
default =
let
inherit (tf-host.config)
system
ssh
module
args
deployment-name
root-path
;
inherit (ssh)
host
username
key-file
sshOpts
;
environment = {
key_file = key-file;
deployment_name = deployment-name;
root_path = root-path;
ssh_opts = sshOpts;
inherit
system
host
username
module
args
;
deployment_type = "tf-proxmox-host";
};
tf-env = pkgs.callPackage ./run/tf-proxmox/tf-env.nix { };
in
''
env ${toString (lib.mapAttrsToList (k: v: "TF_VAR_${k}=\"${toBash v}\"") environment)} \
tf_env=${tf-env} bash ./deployment/run/tf-proxmox/run.sh
'';
};
};
});
};
};
in
{
options = {
@ -68,8 +269,7 @@ in
description = "The type of resource this policy configures";
type = types.optionType;
};
# TODO(@fricklerhandwerk): we may want to make the function type explict here: `request -> resource-type`
# and then also rename this to be consistent with the application's resource mapping
# TODO(@fricklerhandwerk): we may want to make the function type explicit here: `application-resources -> resource-type`
options.apply = mkOption {
description = "Apply the policy to a request";
type = functionTo policy.config.resource-type;
@ -145,12 +345,21 @@ in
type = environment.config.resource-mapping.function-type;
};
resource-mapping = mkOption {
description = "Function type for the mapping from resources to a (NixOps4) deployment";
description = "Function type for the mapping from resources to a deployment";
type = submodule functionType;
readOnly = true;
default = {
input-type = application-resources;
output-type = nixops4Deployment;
input-type = submodule {
options = {
deployment-name = mkOption {
type = types.str;
};
required-resources = mkOption {
type = attrsOf application-resources;
};
};
};
output-type = deployment-type;
};
};
# TODO(@fricklerhandwerk): maybe this should be a separate thing such as `fediversity-setup`,
@ -161,14 +370,17 @@ in
type = functionTo (environment.config.resource-mapping.output-type);
readOnly = true;
default =
cfg:
{
deployment-name,
configuration,
}:
# TODO: check cfg.enable.true
let
required-resources = lib.mapAttrs (
name: application-settings: config.applications.${name}.resources application-settings
) cfg.applications;
) configuration.applications;
in
(environment.config.implementation required-resources).output;
(environment.config.implementation { inherit required-resources deployment-name; }).output;
};
};
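
For reference, the `toBash` helper introduced above flattens the mixed-type `environment` attribute set into `TF_VAR_*` strings; a few illustrative evaluations:

toBash "root"                   # root
toBash null                     # (empty string)
toBash [ "ConnectTimeout=1" ]   # [\"ConnectTimeout=1\"]   (JSON, double quotes escaped for the shell)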


@ -26,6 +26,26 @@
inherit (pkgs.testers) runNixOSTest;
inherit inputs sources;
};
deployment-model-ssh = import ./check/data-model-ssh {
inherit (pkgs.testers) runNixOSTest;
inherit inputs sources;
};
deployment-model-nixops4 = import ./check/data-model-nixops4 {
inherit (pkgs.testers) runNixOSTest;
inherit inputs sources;
};
deployment-model-tf = import ./check/data-model-tf {
inherit (pkgs.testers) runNixOSTest;
inherit inputs sources;
};
deployment-model-tf-proxmox = import ./check/data-model-tf-proxmox {
inherit (pkgs.testers) runNixOSTest;
inherit inputs sources system;
};
};
};
}

deployment/nixos.nix (new file, 25 lines)

@ -0,0 +1,25 @@
{
configuration,
system,
sources ? import ../npins,
}:
let
eval = import "${sources.nixpkgs}/nixos/lib/eval-config.nix" {
inherit system;
specialArgs = {
inherit sources;
};
modules = [ configuration ];
};
toplevel =
{
inherit (eval) pkgs config options;
system = eval.config.system.build.toplevel;
inherit (eval.config.system.build) vm vmWithBootLoader;
}
.config.system.build.toplevel;
in
{
drv_path = toplevel.drvPath;
out_path = toplevel;
}
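
The deploy script in the next file evaluates this expression twice: with `-A out_path` to instantiate the configuration, and with `--eval --json` to read `drv_path`, which it then realizes and copies to the target. A sketch of the call it constructs (arguments illustrative):

import ./deployment/nixos.nix {
  system = "x86_64-linux";
  configuration =
    (import ./deployment/check/common/data-model.nix { /* serializable args */ })
      ."tf-deployment".tf-host.nixos-configuration;
}
# => { drv_path = "/nix/store/<hash>-nixos-system-….drv"; out_path = <toplevel derivation>; }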


@ -0,0 +1,49 @@
#! /usr/bin/env bash
set -xeuo pipefail
declare username host system module args deployment_name deployment_type key_file root_path ssh_opts
IFS=" " read -r -a ssh_opts <<< "$( (echo "$ssh_opts" | jq -r '@sh') | tr -d \'\")"
# DEPLOY
sshOpts=(
-o BatchMode=yes
-o StrictHostKeyChecking=no
)
for ssh_opt in "${ssh_opts[@]}"; do
sshOpts+=(
-o "$ssh_opt"
)
done
if [[ -n "$key_file" ]]; then
sshOpts+=(
-i "$key_file"
)
fi
destination="$username@$host"
command=(nix-instantiate --show-trace --expr "
import $root_path/deployment/nixos.nix {
system = \"$system\";
configuration = (import \"$root_path/$module\" (builtins.fromJSON ''$args'')).$deployment_name.$deployment_type.nixos-configuration;
}
")
# INSTANTIATE
# instantiate the config in /nix/store
"${command[@]}" -A out_path
# get the realized derivation to deploy
outPath=$(nix-store --realize "$("${command[@]}" --show-trace --eval --strict --json | jq -r '.drv_path')")
# deploy the config by nix-copy-closure
NIX_SSHOPTS="${sshOpts[*]}" nix-copy-closure --to "$destination" "$outPath" --gzip --use-substitutes
# switch the remote host to the config
# shellcheck disable=SC2029
ssh "${sshOpts[@]}" "$destination" "nix-env --profile /nix/var/nix/profiles/system --set $outPath"
# shellcheck disable=SC2029
output=$(ssh "${sshOpts[@]}" "$destination" "nohup $outPath/bin/switch-to-configuration switch &" 2>&1) || echo "status code: $?"
echo "output: $output"
if [[ $output != *"Timeout, server $host not responding"* ]]; then
echo "non-timeout error: $output"
exit 1
else
exit 0
fi


@ -0,0 +1,172 @@
terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "= 0.76.1"
}
}
}
locals {
dump_name = "vzdump-qemu-nixos-fediversity-${var.category}.vma.zst"
}
provider "proxmox" {
endpoint = "https://${var.host}:8006/"
insecure = true
ssh {
agent = true
}
# # Choose one authentication method:
# api_token = var.virtual_environment_api_token
# # OR
username = "root@pam" # var.virtual_environment_username # "username@realm"
password = "mytestpw" # var.virtual_environment_password
# # OR
# auth_ticket = var.virtual_environment_auth_ticket
# csrf_prevention_token = var.virtual_environment_csrf_prevention_token
}
# # FIXME move to host
# # FIXME add proxmox
# data "external" "base-hash" {
# program = ["sh", "-c", "echo \"{\\\"hash\\\":\\\"$(nix-hash ${path.module}/../common/nixos/base.nix)\\\"}\""]
# }
# # hash of our code directory, used to trigger re-deploy
# # FIXME calculate separately to reduce false positives
# data "external" "hash" {
# program = ["sh", "-c", "echo \"{\\\"hash\\\":\\\"$(nix-hash ..)\\\"}\""]
# }
# FIXME move to host
resource "terraform_data" "template" {
# triggers_replace = [
# data.external.base-hash.result,
# ]
provisioner "local-exec" {
working_dir = path.root
# FIXME configure to use actual base image
command = <<-EOF
set -euo pipefail
nixos-generate -f proxmox -o /tmp/nixos-image
ln -s /tmp/nixos-image/vzdump-qemu-nixos-*.vma.zst /tmp/nixos-image/${local.dump_name}
EOF
}
}
# FIXME move to host
resource "proxmox_virtual_environment_file" "upload" {
lifecycle {
replace_triggered_by = [
terraform_data.template,
]
}
content_type = "images"
datastore_id = "local"
node_name = var.host
overwrite = true
source_file {
path = "/tmp/nixos-image/${local.dump_name}"
file_name = local.dump_name
}
}
# FIXME distinguish var.category
data "proxmox_virtual_environment_vms" "nixos_base" {
node_name = var.host
filter {
name = "template"
values = [true]
}
# filter {
# name = "node_name"
# values = ["nixos-base"]
# }
}
# resource "proxmox_virtual_environment_vm" "nix_vm" {
# lifecycle {
# replace_triggered_by = [
# proxmox_virtual_environment_file.upload,
# ]
# }
# node_name = var.host
# pool_id = var.pool_id
# description = var.description
# started = true
# agent {
# enabled = true
# }
# cpu {
# type = "x86-64-v2-AES"
# cores = var.cores
# sockets = var.sockets
# numa = true
# }
# memory {
# dedicated = var.memory
# }
# efi_disk {
# datastore_id = "linstor_storage"
# type = "4m"
# }
# disk {
# datastore_id = "linstor_storage"
# interface = "scsi0"
# discard = "on"
# iothread = true
# size = var.disk_size
# ssd = true
# }
# clone {
# datastore_id = "local"
# node_name = data.proxmox_virtual_environment_vms.nixos_base.vms[0].node_name # invalid index: empty list
# vm_id = data.proxmox_virtual_environment_vms.nixos_base.vms[0].vm_id
# full = true
# }
# network_device {
# model = "virtio"
# bridge = "vnet1306"
# }
# operating_system {
# type = "l26"
# }
# scsi_hardware = "virtio-scsi-single"
# bios = "ovmf"
# }
# module "nixos-rebuild" {
# depends_on = [
# proxmox_virtual_environment_vm.nix_vm
# ]
# source = "../tf-single-host"
# system = var.system
# username = var.username
# host = proxmox_virtual_environment_vm.nix_vm.ipv4_addresses[0] # needs guest agent installed
# module = var.module
# args = var.args
# key_file = var.key_file
# deployment_name = var.deployment_name
# root_path = var.root_path
# ssh_opts = var.ssh_opts
# deployment_type = var.deployment_type
# }


@ -0,0 +1,10 @@
#! /usr/bin/env bash
set -xeuo pipefail
declare tf_env
export TF_LOG=info
# export TF_LOG=debug
cd "${tf_env}/deployment/run/tf-proxmox"
# parallelism=1: limit OOM risk
tofu apply --auto-approve -lock=false -input=false -parallelism=1


@ -0,0 +1,16 @@
{
pkgs,
lib,
sources,
}:
pkgs.writeScriptBin "setup" ''
set -xe
# calculated pins
echo '${lib.strings.toJSON sources}' > ./.npins.json
# generate TF lock for nix's TF providers
rm -rf .terraform/
rm -f .terraform.lock.hcl
# suppress warning on architecture-specific generated lock file:
# `Warning: Incomplete lock file information for providers`.
tofu init -input=false 1>/dev/null
''


@ -0,0 +1,33 @@
{
lib,
pkgs,
sources ? import ../../../npins,
}:
pkgs.stdenv.mkDerivation {
name = "tf-repo";
src =
with lib.fileset;
toSource {
root = ../../../.;
# don't copy ignored files
fileset = intersection (gitTracked ../../../.) ../../../.;
};
buildInputs = [
(pkgs.callPackage ./tf.nix { inherit sources; })
(pkgs.callPackage ./setup.nix { inherit sources; })
];
buildPhase = ''
runHook preBuild
for category in deployment/run/tf-single-host deployment/run/tf-proxmox; do
pushd "$category"
source setup
popd
done
runHook postBuild
'';
installPhase = ''
runHook preInstall
cp -r . $out
runHook postInstall
'';
}


@ -0,0 +1,26 @@
# FIXME: use overlays so this gets imported just once?
{
pkgs,
sources,
...
}:
let
mkProvider =
args:
pkgs.terraform-providers.mkProvider (
{ mkProviderFetcher = { repo, ... }: sources.${repo}; } // args
);
in
pkgs.opentofu.withPlugins (p: [
p.external
(mkProvider {
owner = "bpg";
repo = "terraform-provider-proxmox";
rev = "v0.76.1";
spdx = "MPL-2.0";
hash = null;
vendorHash = "sha256-3KJ7gi3UEZu31LhEtcRssRUlfsi4mIx6FGTKi1TDRdg=";
homepage = "https://registry.terraform.io/providers/bpg/proxmox";
provider-source-address = "registry.opentofu.org/bpg/proxmox";
})
])


@ -0,0 +1,97 @@
variable "system" {
description = "The architecture of the system to deploy to."
type = string
default = "x86_64-linux"
}
variable "username" {
description = "the SSH user to use"
type = string
default = "root"
}
variable "host" {
description = "the host of the ProxmoX Virtual Environment."
type = string
}
variable "module" {
description = "The module to call to obtain the NixOS configuration from."
type = string
}
variable "args" {
description = "The arguments with which to call the module to obtain the NixOS configuration."
type = string
default = "{}"
}
variable "key_file" {
description = "path to the user's SSH private key"
type = string
}
variable "deployment_name" {
description = "The name of the deployment for which to obtain the NixOS configuration."
type = string
}
variable "root_path" {
description = "The path to the root of the repository."
type = string
}
variable "ssh_opts" {
description = "Extra SSH options (`-o`) to use."
type = string
default = "[]"
}
variable "deployment_type" {
description = "A `deployment-type` from the Fediversity data model, for grabbing the desired NixOS configuration."
type = string
default = "tf-proxmox-host"
}
#########################################
variable "category" {
type = string
description = "Category to be used in naming the base image."
default = "test"
}
variable "description" {
type = string
default = ""
}
variable "sockets" {
type = number
description = "The number of sockets of the VM."
default = 1
}
variable "cores" {
type = number
description = "The number of cores of the VM."
default = 1
}
variable "memory" {
type = number
description = "The amount of memory of the VM in MiB."
default = 2048
}
variable "disk_size" {
type = number
description = "The amount of disk of the VM in GiB."
default = 32
}
variable "pool_id" {
type = string
description = "The identifier for a pool to assign the virtual machine to."
default = "Fediversity"
}


@ -0,0 +1,52 @@
# hash of our code directory, used to trigger re-deploy
# FIXME calculate separately to reduce false positives
data "external" "hash" {
program = ["sh", "-c", "echo \"{\\\"hash\\\":\\\"$(nix-hash ../../..)\\\"}\""]
}
# TF resource to build and deploy NixOS instances.
resource "terraform_data" "nixos" {
# trigger rebuild/deploy if (FIXME?) any potentially used config/code changed,
# preventing these steps (20+ s, with the build being the bottleneck) when nothing changed.
# terraform-nixos separates the two, deploying only if the instantiation changed,
# yet still building even then - which may be less of an issue when deploying on the remote.
# having build and deploy in one resource reflects preferring to prevent no-op rebuilds
# over preventing (with fewer false positives) no-op deployments,
# as I could not find a way to prevent no-op rebuilds without merging them:
# - generic resources cannot have outputs, while we want info from the instantiation (unless built on host?).
# - `data` always runs, which is slow for deploy and especially build.
triggers_replace = [
data.external.hash.result,
var.host,
var.module,
var.args,
var.root_path,
var.deployment_type,
]
provisioner "local-exec" {
# directory to run the script from. we use the TF project root dir,
# here as a path relative from where TF is run from,
# matching calling modules' expectations on config_nix locations.
# note that absolute paths can cause false positives in triggers,
# so are generally discouraged in TF.
working_dir = path.root
environment = {
system = var.system
username = var.username
host = var.host
module = var.module
args = var.args
key_file = var.key_file
deployment_name = var.deployment_name
root_path = var.root_path
ssh_opts = var.ssh_opts
deployment_type = var.deployment_type
}
# TODO: refactor back to command="ignoreme" interpreter=concat([]) to protect sensitive data from error logs?
# TODO: build on target?
command = "sh ../ssh-single-host/run.sh"
}
}


@ -0,0 +1,9 @@
#! /usr/bin/env bash
set -xeuo pipefail
declare tf_env
export TF_LOG=info
cd "${tf_env}/deployment/run/tf-single-host"
# parallelism=1: limit OOM risk
tofu apply --auto-approve -lock=false -parallelism=1


@ -0,0 +1,16 @@
{
pkgs,
lib,
sources,
}:
pkgs.writeScriptBin "setup" ''
set -xe
# calculated pins
echo '${lib.strings.toJSON sources}' > ./.npins.json
# generate TF lock for nix's TF providers
rm -rf .terraform/
rm -f .terraform.lock.hcl
# suppress warning on architecture-specific generated lock file:
# `Warning: Incomplete lock file information for providers`.
tofu init -input=false 1>/dev/null
''


@ -0,0 +1,31 @@
{
lib,
pkgs,
sources ? import ../../../npins,
}:
pkgs.stdenv.mkDerivation {
name = "tf-repo";
src =
with lib.fileset;
toSource {
root = ../../../.;
# don't copy ignored files
fileset = intersection (gitTracked ../../../.) ../../../.;
};
buildInputs = [
(pkgs.callPackage ./tf.nix { })
(pkgs.callPackage ./setup.nix { inherit sources; })
];
buildPhase = ''
runHook preBuild
pushd deployment/run/tf-single-host
source setup
popd
runHook postBuild
'';
installPhase = ''
runHook preInstall
cp -r . $out
runHook postInstall
'';
}
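
In the data-model.nix hunk earlier in this diff, this environment is built once and handed to the run script through the `tf_env` variable:

tf-env = pkgs.callPackage ./run/tf-single-host/tf-env.nix { };
# …the generated wrapper then runs:
#   env TF_VAR_<name>="<value>" … tf_env=${tf-env} bash ./deployment/run/tf-single-host/run.sh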


@ -0,0 +1,11 @@
# FIXME: use overlays so this gets imported just once?
{
pkgs,
...
}:
let
tf = pkgs.opentofu;
in
tf.withPlugins (p: [
p.external
])


@ -0,0 +1,54 @@
variable "system" {
description = "The architecture of the system to deploy to."
type = string
default = "x86_64-linux"
}
variable "username" {
description = "the SSH user to use"
type = string
default = "root"
}
variable "host" {
description = "the host to access by SSH"
type = string
}
variable "module" {
description = "The module to call to obtain the NixOS configuration from."
type = string
}
variable "args" {
description = "The arguments with which to call the module to obtain the NixOS configuration."
type = string
default = "{}"
}
variable "key_file" {
description = "path to the user's SSH private key"
type = string
}
variable "deployment_name" {
description = "The name of the deployment for which to obtain the NixOS configuration."
type = string
}
variable "root_path" {
description = "The path to the root of the repository."
type = string
}
variable "ssh_opts" {
description = "Extra SSH options (`-o`) to use."
type = string
default = "[]"
}
variable "deployment_type" {
description = "A `deployment-type` from the Fediversity data model, for grabbing the desired NixOS configuration."
type = string
default = "tf-host"
}


@ -192,6 +192,19 @@
"revision": "48f39fbe2e8f90f9ac160dd4b6929f3ac06d8223",
"url": "https://github.com/SaumonNet/proxmox-nixos/archive/48f39fbe2e8f90f9ac160dd4b6929f3ac06d8223.tar.gz",
"hash": "0606qcs8x1jwckd1ivf52rqdmi3lkn66iiqh6ghd4kqx0g2bw3nv"
},
"terraform-provider-proxmox": {
"type": "Git",
"repository": {
"type": "GitHub",
"owner": "kiaragrouwstra",
"repo": "terraform-provider-proxmox"
},
"branch": "content-type-images",
"submodules": false,
"revision": "d465b71e2c112903b9cf235e3a2b4f7997272ab9",
"url": "https://github.com/kiaragrouwstra/terraform-provider-proxmox/archive/d465b71e2c112903b9cf235e3a2b4f7997272ab9.tar.gz",
"hash": "05l45w40708sx1hyli10ncr0hsjsf0djkc7x9xkdl4gw96m1578n"
}
},
"version": 5