switch out infra to terraform, remove flakes

account for #285
This commit is contained in:
Kiara Grouwstra 2025-05-01 14:20:37 +02:00
parent bf4df5500a
commit d86086d53a
Signed by: kiara
SSH key fingerprint: SHA256:COspvLoLJ5WC5rFb9ZDe5urVCkK4LJZOsjfF4duRJFU
43 changed files with 476 additions and 386 deletions

View file

@@ -27,7 +27,7 @@ jobs:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.test-mastodon-service -L
check-peertube:
check-services:
runs-on: native
steps:
- uses: actions/checkout@v4
@@ -88,3 +88,9 @@ jobs:
steps:
- uses: actions/checkout@v4
- run: cd launch && nix-build -A tests
check-infra:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: cd infra && nix-build -A tests

View file

@@ -1,8 +1,7 @@
# The Fediversity project
This repository contains all the code and code-related files having to do with
[the Fediversity project](https://fediversity.eu/), with the notable exception
of [NixOps4 that is hosted on GitHub](https://github.com/nixops4/nixops4).
[the Fediversity project](https://fediversity.eu/).
## Goals
@@ -87,27 +86,15 @@ To reach these goals, we aim to implement the following interactions between [ac
The software includes technical configuration that links software components.
Most user-facing configuration remains untouched by the deployment process.
> Example: NixOps4 is used to deploy [Pixelfed](https://pixelfed.org).
> Example: OpenTofu is used to deploy [Pixelfed](https://pixelfed.org).
- Migrate
Move service configurations and deployment state, including user data, from one hosting provider to another.
- [NixOps4](https://github.com/nixops4/nixops4)
- [OpenTofu](https://opentofu.org/)
A tool for deploying and managing resources through the Nix language.
NixOps4 development is supported by the Fediversity project
- Resource
A [resource for NixOps4](https://nixops.dev/manual/development/concept/resource.html) is any external entity that can be declared with NixOps4 expressions and manipulated with NixOps4, such as a virtual machine, an active NixOS configuration, a DNS entry, or customer database.
- Resource provider
A resource provider for NixOps4 is an executable that communicates between a resource and NixOps4 using a standardised protocol, allowing [CRUD operations](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) on the resources to be performed by NixOps4.
Refer to the [NixOps4 manual](https://nixops.dev/manual/development/resource-provider/index.html) for details.
> Example: We need a resource provider for obtaining deployment secrets from a database.
An infrastructure-as-code tool and open-source (MPL 2.0) fork of Terraform.
- Runtime backend
@@ -136,9 +123,6 @@ Contact the project team if you have questions or suggestions, or if you're inte
Most of the directories in this repository have their own README going into more
details as to what they are for. As an overview:
- [`deployment/`](./deployment) contains work to generate a full Fediversity
deployment from a minimal configuration.
- [`infra/`](./infra) contains the configurations for the various VMs that are
in production for the project, for instance the Git instances or the Wiki, as
well as means to provision and set up new ones.
@@ -146,9 +130,6 @@ details as to what they are for. As an overview:
- [`keys/`](./keys) contains the public keys of the contributors to this project
as well as the systems that we administrate.
- [`matrix/`](./matrix) contains everything having to do with setting up a
fully-featured Matrix server.
- [`secrets/`](./secrets) contains the secrets that need to get injected into
machine configurations.

View file

@@ -3,8 +3,8 @@
# shellcheck shell=bash
if type -P lorri &>/dev/null; then
eval "$(lorri direnv --flake .)"
eval "$(lorri direnv)"
else
echo 'while direnv evaluated .envrc, could not find the command "lorri" [https://github.com/nix-community/lorri]'
use flake
use_nix
fi

View file

@@ -22,6 +22,8 @@ then to initialize, or after updating pins or TF providers:
setup
```
then, one can use the `tofu` CLI in the sub-folders.
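For example (a sketch, assuming the `dev`, `operator`, and `sync-nix` sub-folders that `setup` initializes):

```sh
# one-time initialization, also needed after updating pins or TF providers
setup
# then run OpenTofu from one of the initialized sub-folders
cd dev
tofu validate   # check the configuration
tofu plan       # preview changes without applying them
```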
## Implementing
Proper documentation is still TODO.

View file

@@ -103,7 +103,7 @@ in
description = ''
The IP address of the machine, version 4. It will be injected as a
value in `networking.interfaces.eth0`, but it will also be used to
communicate with the machine via NixOps4.
communicate with the machine.
'';
};
@@ -139,7 +139,7 @@ in
description = ''
The IP address of the machine, version 6. It will be injected as a
value in `networking.interfaces.eth0`, but it will also be used to
communicate with the machine via NixOps4.
communicate with the machine.
'';
};
@@ -162,7 +162,7 @@ in
hostPublicKey = mkOption {
description = ''
The ed25519 host public key of the machine. It is used to filter Age
secrets and only keep the relevant ones, and to feed to NixOps4.
secrets and only keep the relevant ones, and to feed to TF.
'';
};

View file

@@ -1,5 +1,4 @@
{
inputs,
lib,
config,
keys,
@@ -14,51 +13,33 @@ let
in
{
_class = "nixops4Resource";
imports = [ ./options.nix ];
fediversityVm.hostPublicKey = mkDefault keys.systems.${config.fediversityVm.name};
ssh = {
host = config.fediversityVm.ipv4.address;
hostPublicKey = config.fediversityVm.hostPublicKey;
};
inherit (inputs) nixpkgs;
## The configuration of the machine. We strive to keep in this file only the
## options that really need to be injected from the resource. Everything else
## should go into the `./nixos` subdirectory.
nixos.module = {
imports = [
./options.nix
./nixos
./proxmox-qemu-vm.nix
];
imports = [
./options.nix
./nixos
./proxmox-qemu-vm.nix
];
## Inject the shared options from the resource's `config` into the NixOS
## configuration.
fediversityVm = config.fediversityVm;
## Read all the secrets, filter the ones that are supposed to be readable
## with this host's public key, and add them correctly to the configuration
## as `age.secrets.<name>.file`.
age.secrets = concatMapAttrs (
name: secret:
optionalAttrs (elem config.fediversityVm.hostPublicKey secret.publicKeys) {
${removeSuffix ".age" name}.file = secrets.rootPath + "/${name}";
}
) secrets;
## Read all the secrets, filter the ones that are supposed to be readable with
## this host's public key, and create a mapping from `<name>.file` to the
## absolute path of the secret's file.
age.secrets = concatMapAttrs (
name: secret:
optionalAttrs (elem config.fediversityVm.hostPublicKey secret.publicKeys) {
${removeSuffix ".age" name}.file = secrets.rootPath + "/${name}";
}
) secrets.mapping;
## FIXME: Remove direct root authentication once the NixOps4 NixOS provider
## supports users with password-less sudo.
users.users.root.openssh.authorizedKeys.keys = attrValues keys.contributors ++ [
# allow our panel vm access to the test machines
keys.panel
# allow continuous deployment access
keys.cd
];
};
## FIXME: Remove direct root authentication once the NixOps4 NixOS provider
## supports users with password-less sudo.
users.users.root.openssh.authorizedKeys.keys = attrValues keys.contributors ++ [
# allow our panel vm access to the test machines
keys.panel
# allow continuous deployment access
keys.cd
];
}

View file

@@ -10,8 +10,9 @@ in
imports = [
<disko/module.nix>
<agenix/modules/age.nix>
../services/fediversity
./node.nix
../../services/fediversity
./resource.nix
# ./node.nix
];
fediversityVm.name = hostname;
fediversity = {

infra/dev/main.tf (new file, 44 lines added)
View file

@@ -0,0 +1,44 @@
locals {
vm_domain = "abundos.eu"
}
module "nixos" {
source = "../sync-nix"
vm_domain = local.vm_domain
hostname = each.value.hostname
config_nix = each.value.config_nix
config_tf = each.value.config_tf
for_each = { for name, inst in {
# wiki = "vm02187" # does not resolve
# forgejo = "vm02116" # does not resolve
# TODO: move these to a separate `host` dir
dns = "fedi200"
fedipanel = "fedi201"
} : name => {
hostname = inst
config_tf = {
terraform = {
domain = local.vm_domain
hostname = inst
}
}
config_nix = <<-EOF
{
# note: interpolations here are TF ones
imports = [
# shared NixOS config
${path.root}/../common/shared.nix
# FIXME: separate template options by service
${path.root}/options.nix
# for service `forgejo` import `forgejo.nix`
${path.root}/../../machines/dev/${inst}/${name}.nix
# FIXME: get VM details from TF
${path.root}/../../machines/dev/${inst}
];
}
EOF
}
}
}
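For illustration, a single host from the `for_each` map above can be (re)deployed on its own with OpenTofu's `-target` flag; a sketch, assuming the TF environment was initialized with `setup`:

```sh
cd infra/dev
# deploy only the DNS host; the key matches the for_each map above
tofu apply -target='module.nixos["dns"]'
```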

infra/dev/options.nix (new file, 28 lines added)
View file

@@ -0,0 +1,28 @@
# TODO: could (part of) this be generated somehow? c.f #275
{
lib,
...
}:
let
inherit (lib) types mkOption;
inherit (types) str enum;
in
{
options.terraform = {
domain = mkOption {
type = enum [
"fediversity.net"
];
description = ''
Apex domain under which the services will be deployed.
'';
default = "fediversity.net";
};
hostname = mkOption {
type = str;
description = ''
Internal name of the host, e.g. test01
'';
};
};
}

infra/dev/variables.tf (new empty file)
View file

View file

@@ -1,9 +1,5 @@
locals {
system = "x86_64-linux"
# dependency paths pre-calculated from npins
pins = jsondecode(file("${path.root}/.npins.json"))
# nix path: expose pins, use nixpkgs in flake commands (`nix run`)
nix_path = "${join(":", [for name, path in local.pins : "${name}=${path}"])}:flake=${local.pins["nixpkgs"]}:flake"
vm_domain = "abundos.eu"
# user-facing applications
application_configs = {
# FIXME: wrap applications at the interface to grab them in one go?
@@ -33,127 +29,46 @@ locals {
}
}
# hash of our code directory, used in dev to trigger re-deploy
# FIXME settle for pwd when in /nix/store?
# FIXME calculate separately to reduce false positives
data "external" "hash" {
program = ["sh", "-c", "echo \"{\\\"hash\\\":\\\"$(nix-hash ..)\\\"}\""]
}
module "nixos" {
source = "../sync-nix"
# TF resource to build and deploy NixOS instances.
resource "terraform_data" "nixos" {
vm_domain = local.vm_domain
hostname = each.value.hostname
config_nix = each.value.config_nix
config_tf = each.value.config_tf
for_each = {for name, inst in merge(
local.peripherals,
local.application_configs,
) : name => inst if try(inst.cfg.enable, false)}
# trigger rebuild/deploy if (FIXME?) any potentially used config/code changed,
# preventing these steps (20+ seconds, the build being the bottleneck) when nothing changed.
# terraform-nixos separates these so as to deploy only if the instantiation changed,
# yet it still builds even then, which may be less costly when deploying on the remote.
# having build/deploy in one resource reflects preferring to prevent no-op rebuilds
# over preventing (with fewer false positives) no-op deployments,
# as I could not find a way to prevent no-op rebuilds without merging them:
# - generic resources cannot have outputs, while we want info from the instantiation (unless built on the host?).
# - `data` always runs, which is slow for deploy and especially build.
triggers_replace = [
data.external.hash.result,
var.domain,
var.initialUser,
local.system,
each.key,
each.value,
]
provisioner "local-exec" {
# directory to run the script from. we use the TF project root dir,
# here as a path relative from where TF is run from.
# note that absolute paths can cause false positives in triggers,
# so are generally discouraged in TF.
working_dir = path.root
environment = {
# nix path used on build, lets us refer to e.g. nixpkgs like `<nixpkgs>`
NIX_PATH = local.nix_path
) : name => merge(inst, {
config_tf = {
terraform = {
domain = var.domain
hostname = inst.hostname
initialUser = var.initialUser
}
}
# TODO: refactor back to command="ignoreme" interpreter=concat([]) to protect sensitive data from error logs?
# TODO: build on target?
command = <<-EOF
set -euo pipefail
# INSTANTIATE
command=(
nix-instantiate
--expr
'let
os = import <nixpkgs/nixos> {
system = "${local.system}";
configuration = {
# note: interpolations here are TF ones
imports = [
# shared NixOS config
${path.root}/shared.nix
# FIXME: separate template options by service
${path.root}/options.nix
# for service `mastodon` import `mastodon.nix`
${path.root}/../../machines/operator/${inst.hostname}/${each.key}.nix
# FIXME: get VM details from TF
${path.root}/../../machines/operator/${inst.hostname}
];
# nix path for debugging
nix.nixPath = [ "${local.nix_path}" ];
## FIXME: switch root authentication to users with password-less sudo, see #24
users.users.root.openssh.authorizedKeys.keys = let
keys = import ../keys;
in attrValues keys.contributors ++ [
# allow our panel vm access to the test machines
keys.panel
];
} //
# template parameters passed in from TF through JSON
builtins.fromJSON "${replace(jsonencode({
terraform = {
domain = var.domain
hostname = each.value.hostname
initialUser = var.initialUser
}
}), "\"", "\\\"")}";
};
in
# info we want to get back out
{
substituters = builtins.concatStringsSep " " os.config.nix.settings.substituters;
trusted_public_keys = builtins.concatStringsSep " " os.config.nix.settings.trusted-public-keys;
drv_path = os.config.system.build.toplevel.drvPath;
out_path = os.config.system.build.toplevel;
}'
)
# instantiate the config in /nix/store
"$${command[@]}" -A out_path
# get the other info
json="$("$${command[@]}" --eval --strict --json)"
# DEPLOY
declare substituters trusted_public_keys drv_path
# set our variables using the json object
eval "export $(echo $json | jaq -r 'to_entries | map("\(.key)=\(.value)") | @sh')"
# FIXME: de-hardcode domain
host="root@${each.value.hostname}.abundos.eu" # FIXME: #24
buildArgs=(
--option extra-binary-caches https://cache.nixos.org/
--option substituters $substituters
--option trusted-public-keys $trusted_public_keys
)
sshOpts=(
-o BatchMode=yes
-o StrictHostKeyChecking=no
)
# get the realized derivation to deploy
outPath=$(nix-store --realize "$drv_path" "$${buildArgs[@]}")
# deploy the config by nix-copy-closure
NIX_SSHOPTS="$${sshOpts[*]}" nix-copy-closure --to "$host" "$outPath" --gzip --use-substitutes
# switch the remote host to the config
ssh "$${sshOpts[@]}" "$host" "nix-env --profile /nix/var/nix/profiles/system --set $outPath; $outPath/bin/switch-to-configuration switch"
config_nix = <<-EOF
{
# note: interpolations here are TF ones
imports = [
# shared NixOS config
${path.root}/../common/shared.nix
# FIXME: separate template options by service
${path.root}/options.nix
# for service `mastodon` import `mastodon.nix`
${path.root}/../../machines/operator/${inst.hostname}/${name}.nix
# FIXME: get VM details from TF
${path.root}/../../machines/operator/${inst.hostname}
];
## FIXME: switch root authentication to users with password-less sudo, see #24
users.users.root.openssh.authorizedKeys.keys = let
keys = import ../../keys;
in [
# allow our panel vm access to the test machines
keys.panel
];
}
EOF
}
}) if try(inst.cfg.enable, false)}
}

infra/pass-ssh-key.sh (new executable file, 15 lines added)
View file

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -eu
export host="$host" # with `set -u`, fail early if `host` is unset
mkdir -p etc/ssh
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
for keyname in ssh_host_ed25519_key ssh_host_ed25519_key.pub; do
if [[ $keyname == *.pub ]]; then
umask 0133
else
umask 0177
fi
cp "$SCRIPT_DIR/../infra/test-machines/${host}/${keyname}" ./etc/ssh/${keyname}
done
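A usage sketch for this script; it reads the target machine from the `host` environment variable, and the machine name below is an assumption:

```sh
# copy the host keys for `test01` into ./etc/ssh with strict permissions
host=test01 ./infra/pass-ssh-key.sh
```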

infra/setup.nix (new file, 20 lines added)
View file

@@ -0,0 +1,20 @@
{
pkgs,
lib,
sources,
...
}:
pkgs.writeScriptBin "setup" ''
# calculated pins
echo '${lib.strings.toJSON sources}' > sync-nix/.npins.json
# generate TF lock for nix's TF providers
for category in dev operator sync-nix; do
pushd "$category"
rm -rf .terraform/
rm -f .terraform.lock.hcl
# suppress warning on architecture-specific generated lock file:
# `Warning: Incomplete lock file information for providers`.
tofu init -input=false 1>/dev/null
popd
done
''

View file

@@ -0,0 +1 @@
{"agenix":"/nix/store/glsqq1xn5al7d528hvlbm4hl3ladxmka-source","disko":"/nix/store/7wf9q0mb1i43x9dr1qlyfaraq15n6sii-source","flake-inputs":"/nix/store/fqln0bcp6mp75k4sl0cav2f0np60lwhj-source","git-hooks":"/nix/store/8bh3jgq1riy3jxm07vy4xxzvk9xd74pc-source","gitignore":"/nix/store/g5v3sgqy6a0fsmas7mnapc196flrplix-source","home-manager":"/nix/store/cq3b3cx5rv9d0zj57kch9wmxzc2rm8dc-source","htmx":"/nix/store/mwqqk0qmldzvv4xj9kq2lbah2flhc44z-source","nix-unit":"/nix/store/4g1vvy7bhwh16cyd2r8ibq7n6ygk1wvk-source","nixpkgs":"/nix/store/w0y3wsqi67rlg544myycqxcg2hh17j1v-source"}

infra/sync-nix/main.tf (new file, 102 lines added)
View file

@@ -0,0 +1,102 @@
locals {
system = "x86_64-linux"
# dependency paths pre-calculated from npins
pins = jsondecode(file("${path.module}/.npins.json"))
# nix path: expose pins, use nixpkgs in flake commands (`nix run`)
nix_path = "${join(":", [for name, dir in local.pins : "${name}=${dir}"])}:flake=${local.pins["nixpkgs"]}:flake"
}
# hash of our code directory, used to trigger re-deploy
# FIXME calculate separately to reduce false positives
data "external" "hash" {
program = ["sh", "-c", "echo \"{\\\"hash\\\":\\\"$(nix-hash ..)\\\"}\""]
}
# TF resource to build and deploy NixOS instances.
resource "terraform_data" "nixos" {
# trigger rebuild/deploy if (FIXME?) any potentially used config/code changed,
# preventing these steps (20+ seconds, the build being the bottleneck) when nothing changed.
# terraform-nixos separates these so as to deploy only if the instantiation changed,
# yet it still builds even then, which may be less costly when deploying on the remote.
# having build/deploy in one resource reflects preferring to prevent no-op rebuilds
# over preventing (with fewer false positives) no-op deployments,
# as I could not find a way to prevent no-op rebuilds without merging them:
# - generic resources cannot have outputs, while we want info from the instantiation (unless built on the host?).
# - `data` always runs, which is slow for deploy and especially build.
triggers_replace = [
data.external.hash.result,
var.hostname,
var.config_nix,
var.config_tf,
]
provisioner "local-exec" {
# directory to run the script from. we use the TF project root dir,
# here as a path relative from where TF is run from,
# matching calling modules' expectations on config_nix locations.
# note that absolute paths can cause false positives in triggers,
# so are generally discouraged in TF.
working_dir = path.root
environment = {
# nix path used on build, lets us refer to e.g. nixpkgs like `<nixpkgs>`
NIX_PATH = local.nix_path
}
# TODO: refactor back to command="ignoreme" interpreter=concat([]) to protect sensitive data from error logs?
# TODO: build on target?
command = <<-EOF
set -euo pipefail
# INSTANTIATE
command=(
nix-instantiate
--expr
'let
os = import <nixpkgs/nixos> {
system = "${local.system}";
configuration =
${var.config_nix} //
# template parameters passed in from TF through JSON
builtins.fromJSON "${replace(jsonencode(var.config_tf), "\"", "\\\"")}" //
{
# nix path for debugging
nix.nixPath = [ "${local.nix_path}" ];
};
};
in
# info we want to get back out
{
substituters = builtins.concatStringsSep " " os.config.nix.settings.substituters;
trusted_public_keys = builtins.concatStringsSep " " os.config.nix.settings.trusted-public-keys;
drv_path = os.config.system.build.toplevel.drvPath;
out_path = os.config.system.build.toplevel;
}'
)
# instantiate the config in /nix/store
"$${command[@]}" -A out_path
# get the other info
json="$("$${command[@]}" --eval --strict --json)"
# DEPLOY
declare substituters trusted_public_keys drv_path
# set our variables using the json object
eval "export $(echo $json | jaq -r 'to_entries | map("\(.key)=\(.value)") | @sh')"
host="root@${var.hostname}.${var.vm_domain}" # FIXME: #24
buildArgs=(
--option extra-binary-caches https://cache.nixos.org/
--option substituters $substituters
--option trusted-public-keys $trusted_public_keys
)
sshOpts=(
-o BatchMode=yes
-o StrictHostKeyChecking=no
)
# get the realized derivation to deploy
outPath=$(nix-store --realize "$drv_path" "$${buildArgs[@]}")
# deploy the config by nix-copy-closure
NIX_SSHOPTS="$${sshOpts[*]}" nix-copy-closure --to "$host" "$outPath" --gzip --use-substitutes
# switch the remote host to the config
ssh "$${sshOpts[@]}" "$host" "nix-env --profile /nix/var/nix/profiles/system --set $outPath; $outPath/bin/switch-to-configuration switch"
EOF
}
}
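As context for the `data "external" "hash"` block above: the `external` data source runs its program and expects a single JSON object on stdout, so the heavily escaped one-liner is equivalent to this sketch:

```sh
#!/usr/bin/env sh
# emit the hash of the parent directory as the JSON object the `external` data source expects
printf '{"hash":"%s"}\n' "$(nix-hash ..)"
```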

View file

@@ -0,0 +1,17 @@
variable "vm_domain" {
type = string
}
variable "hostname" {
type = string
}
variable "config_nix" {
type = string
default = "{}"
}
variable "config_tf" {
type = map(any)
default = {}
}

View file

@@ -6,21 +6,30 @@ let
cores = 2;
};
};
tf = pkgs.callPackage ./tf.nix { };
tf = pkgs.callPackage ./tf.nix {
inherit lib pkgs;
};
tfEnv = pkgs.callPackage ./tf-env.nix { };
nodes.server = {
environment.systemPackages = [
tf
tfEnv
];
};
in
lib.mapAttrs (name: test: pkgs.testers.runNixOSTest (test // { inherit name; })) {
tf-validate = {
inherit defaults;
nodes.server = {
environment.systemPackages = [
tf
tfEnv
];
};
tf-validate-dev = {
inherit defaults nodes;
testScript = ''
server.wait_for_unit("multi-user.target")
server.succeed("${lib.getExe tf} -chdir='${tfEnv}/launch' validate")
server.succeed("${lib.getExe tf} -chdir='${tfEnv}/infra/dev' validate")
'';
};
tf-validate-operator = {
inherit defaults nodes;
testScript = ''
server.wait_for_unit("multi-user.target")
server.succeed("${lib.getExe tf} -chdir='${tfEnv}/infra/operator' validate")
'';
};
}
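These checks can be run the same way the CI job does, assuming `tests` is exposed from `infra/default.nix`:

```sh
cd infra
nix-build -A tests                       # run all validation tests
nix-build -A tests.tf-validate-operator  # or just a single one
```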

View file

@@ -15,14 +15,12 @@ pkgs.stdenv.mkDerivation {
};
buildInputs = [
(import ./tf.nix { inherit lib pkgs; })
(import ./setup.nix { inherit lib pkgs sources; })
];
buildPhase = ''
runHook preBuild
pushd infra
# calculated pins
echo '${lib.strings.toJSON sources}' > .npins.json
# generate TF lock for nix's TF providers
tofu init -input=false
setup
popd
runHook postBuild
'';

View file

@@ -14,7 +14,7 @@ overwrite a secret without knowing its contents.)
In infra management, the systems' keys are used for security reasons; they
identify the machine that we are talking to. The contributor keys are used to
give access to the `root` user on these machines, which allows, among other
things, to deploy their configurations with NixOps4.
things, deploying their configurations.
## Adding a contributor

View file

@@ -0,0 +1,2 @@
_: {
}

View file

@@ -17,10 +17,4 @@
gateway = "2a00:51c0:13:1305::1";
};
};
nixos.module = {
imports = [
./fedipanel.nix
];
};
}

View file

@@ -9,12 +9,12 @@ in
_class = "nixos";
imports = [
<home-manager/nixos>
(import ../../../panel { }).module
];
security.acme = {
acceptTerms = true;
defaults.email = "beheer@procolix.com";
};
age.secrets.panel-ssh-key = {

View file

@@ -1,3 +1,4 @@
{ lib, ... }:
{
_class = "nixops4Resource";
@@ -11,31 +12,27 @@
ipv6.address = "2a00:51c0:12:1201::20";
};
nixos.module =
{ lib, ... }:
{
imports = [
./forgejo.nix
];
imports = [
./forgejo.nix
];
## vm02116 is running on old hardware based on a Xen VM environment, so it
## needs these extra options. Once the VM gets moved to a newer node, these
## two options can safely be removed.
boot.initrd.availableKernelModules = [ "xen_blkfront" ];
services.xe-guest-utilities.enable = true;
## vm02116 is running on old hardware based on a Xen VM environment, so it
## needs these extra options. Once the VM gets moved to a newer node, these
## two options can safely be removed.
boot.initrd.availableKernelModules = [ "xen_blkfront" ];
services.xe-guest-utilities.enable = true;
## NOTE: This VM was created manually, which requires us to override the
## default disko-based `fileSystems` definition.
fileSystems = lib.mkForce {
"/" = {
device = "/dev/disk/by-uuid/3802a66d-e31a-4650-86f3-b51b11918853";
fsType = "ext4";
};
"/boot" = {
device = "/dev/disk/by-uuid/2CE2-1173";
fsType = "vfat";
};
};
## NOTE: This VM was created manually, which requires us to override the
## default disko-based `fileSystems` definition.
fileSystems = lib.mkForce {
"/" = {
device = "/dev/disk/by-uuid/3802a66d-e31a-4650-86f3-b51b11918853";
fsType = "ext4";
};
"/boot" = {
device = "/dev/disk/by-uuid/2CE2-1173";
fsType = "vfat";
};
};
}

View file

@ -1,3 +1,4 @@
{ lib, ... }:
{
_class = "nixops4Resource";
@@ -11,29 +12,25 @@
ipv6.address = "2a00:51c0:12:1201::187";
};
nixos.module =
{ lib, ... }:
{
imports = [
./wiki.nix
];
imports = [
./wiki.nix
];
## NOTE: This VM was created manually, which requires us to override the
## default disko-based `fileSystems` definition.
fileSystems = lib.mkForce {
"/" = {
device = "/dev/disk/by-uuid/a46a9c46-e32b-4216-a4aa-8819b2cd0d49";
fsType = "ext4";
};
"/boot" = {
device = "/dev/disk/by-uuid/6AB5-4FA8";
fsType = "vfat";
options = [
"fmask=0022"
"dmask=0022"
];
};
};
## NOTE: This VM was created manually, which requires us to override the
## default disko-based `fileSystems` definition.
fileSystems = lib.mkForce {
"/" = {
device = "/dev/disk/by-uuid/a46a9c46-e32b-4216-a4aa-8819b2cd0d49";
fsType = "ext4";
};
"/boot" = {
device = "/dev/disk/by-uuid/6AB5-4FA8";
fsType = "vfat";
options = [
"fmask=0022"
"dmask=0022"
];
};
};
}

View file

@@ -1,16 +0,0 @@
<!-- This file is auto-generated by `machines.md.sh` from the machines'
configuration. -->
# Machines
Currently, this repository keeps track of the following VMs:
Machine | Proxmox | Description
--------|---------|-------------
[`fedi200`](./dev/fedi200) | fediversity | Testing machine for Hans
[`fedi201`](./dev/fedi201) | fediversity | FediPanel
[`vm02116`](./dev/vm02116) | procolix | Forgejo
[`vm02187`](./dev/vm02187) | procolix | Wiki
| `forgejo-ci` | n/a (physical) | Forgejo actions runner |
This table excludes all machines with names starting with `test`.

View file

@@ -1,44 +0,0 @@
#!/usr/bin/env sh
set -euC
cd "$(dirname "$0")"
{
cat <<\EOF
<!-- This file is auto-generated by `machines.md.sh` from the machines'
configuration. -->
# Machines
Currently, this repository keeps track of the following VMs:
Machine | Proxmox | Description
--------|---------|-------------
EOF
vmOptions=$(
cd ..
nix eval \
--impure --raw --expr "
builtins.toJSON (builtins.getFlake (builtins.toString ../.)).vmOptions
" \
--log-format raw --quiet
)
## NOTE: `jq`'s `keys` is alphabetically sorted, just what we want here.
for machine in $(echo "$vmOptions" | jq -r 'keys[]'); do
if [ "${machine#test}" = "$machine" ]; then
proxmox=$(echo "$vmOptions" | jq -r ".$machine.proxmox")
description=$(echo "$vmOptions" | jq -r ".$machine.description" | head -n 1)
# shellcheck disable=SC2016
printf '[`%s`](./dev/%s) | %s | %s\n' "$machine" "$machine" "$proxmox" "$description"
fi
done
cat <<\EOF
| `forgejo-ci` | n/a (physical) | Forgejo actions runner |
This table excludes all machines with names starting with `test`.
EOF
} >| machines.md

View file

@@ -0,0 +1,12 @@
{
"domain": "fediversity.net",
"mastodon": { "enable": false },
"peertube": { "enable": false },
"pixelfed": { "enable": false },
"initialUser": {
"displayName": "Testy McTestface",
"username": "test",
"password": "testtest",
"email": "test@test.com"
}
}

View file

@@ -4,7 +4,7 @@ let
## and will end up in the Nix store. We don't care as they are only ever
## used for testing anyway.
##
## FIXME: Generate and store in NixOps4's state.
## FIXME: Generate and store in the deployment state.
mastodonS3KeyConfig =
{ pkgs, ... }:
{

View file

@@ -13,7 +13,7 @@ in
enable = true;
## NOTE: Only ever used for testing anyway.
##
## FIXME: Generate and store in NixOps4's state.
## FIXME: Generate and store in the deployment state.
secretsFile = pkgs.writeText "secret" "574e093907d1157ac0f8e760a6deb1035402003af5763135bae9cbd6abe32b24";
};
};

View file

@@ -12,6 +12,6 @@ in
mastodon = mastodonS3KeyConfig { inherit pkgs; } // {
enable = true;
};
temp.cores = 1; # FIXME: should come from NixOps4 eventually
temp.cores = 1; # FIXME: should come from TF eventually
};
}

View file

@@ -26,15 +26,18 @@ in
pkgs.nix
pkgs.openssh
];
env = {
DEPLOYMENT_FLAKE = toString ../.;
DEPLOYMENT_NAME = "test";
NPINS_DIRECTORY = toString ../npins;
CREDENTIALS_DIRECTORY = toString ./.credentials;
DATABASE_URL = "sqlite:///${toString ./src}/db.sqlite3";
# locally: use a fixed relative reference, so we can use our newest files without copying to the store
REPO_DIR = toString ../.;
};
env =
let
inherit (builtins) toString;
in
import ./env.nix { inherit lib pkgs; }
// {
NPINS_DIRECTORY = toString ../npins;
CREDENTIALS_DIRECTORY = toString ./.credentials;
DATABASE_URL = "sqlite:///${toString ./src}/db.sqlite3";
# locally: use a fixed relative reference, so we can use our newest files without copying to the store
REPO_DIR = toString ../.;
};
shellHook = ''
${lib.concatStringsSep "\n" (
map (file: "ln -sf ${file.from} ${toString ./src/${file.to}}") package.generated

View file

@@ -13,4 +13,5 @@
pkgs.jaq # tf
(import ../infra/tf.nix { inherit lib pkgs; })
];
SSH_PRIVATE_KEY_FILE = "";
}

View file

@@ -10,6 +10,7 @@ For the full list of settings and their values, see
https://docs.djangoproject.com/en/4.2/ref/settings/
"""
import re
import sys
import os
import importlib.util
@@ -18,6 +19,8 @@ import dj_database_url
from os import environ as env
from pathlib import Path
STORE_PATTERN = re.compile("^/nix/store/[^/]+$")
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
@@ -240,3 +243,9 @@ if user_settings_file is not None:
# TODO(@fricklerhandwerk):
# The correct thing to do here would be using a helper function such as with `get_secret()` that will catch the exception and explain what's wrong and where to put the right values.
# Replacing the `USER_SETTINGS_FILE` mechanism following the comment there would probably be a good thing.
# PATH to expose to the launch button
bin_path = env["BIN_PATH"]
# path of the repository root to deploy from;
# for actual deployments this should be set explicitly, for dev just use a relative path.
repo_dir = env["REPO_DIR"]

View file

@@ -1,5 +1,6 @@
from enum import Enum
import json
from os.path import expanduser
import subprocess
import logging
import os
@@ -92,7 +93,7 @@ class DeploymentStatus(ConfigurationForm):
def deployment(self, config: BaseModel):
env = {
"PATH": os.environ.get("PATH"),
"PATH": settings.bin_path,
# "TF_LOG": "info",
} | {
# pass in form info to our deployment
@@ -100,21 +101,16 @@ class DeploymentStatus(ConfigurationForm):
f"TF_VAR_{k}": v if isinstance(v, str) else json.dumps(v) for k, v in json.loads(config.model_dump_json()).items()
}
logger.info("env: %s", env)
cwd = f"{settings.repo_dir}/launch"
cwd = f"{settings.repo_dir}/infra/operator"
cmd = [
"tofu",
# f"-chdir={cwd}",
"apply",
f"-state={cwd}/terraform.tfstate", # FIXME: separate users' state
f"-state={cwd}/terraform.tfstate", # FIXME: separate users' state, see #313
"--auto-approve",
"-lock=false",
"-parallelism=1" # limit OOM risk
]
deployment_result = subprocess.run(
cmd,
cwd = cwd,
env = env,
stderr = subprocess.STDOUT,
)
deployment_result = subprocess.run(cmd, cwd=cwd, env=env)
logger.debug("deployment_result: %s", deployment_result)
return deployment_result, config
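For reference, the subprocess above amounts to roughly the following shell invocation; a sketch, where the `TF_VAR_*` values come from the submitted form and `domain` is one example field:

```sh
cd "$REPO_DIR/infra/operator"
TF_VAR_domain='fediversity.net' \
tofu apply \
  -state="$PWD/terraform.tfstate" \
  --auto-approve \
  -lock=false \
  -parallelism=1  # limit OOM risk
```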

View file

@@ -1,13 +1,10 @@
# Infra
This directory contains the definition of [the VMs](machines.md) that host our
infrastructure.
## Provisioning VMs with an initial configuration
NOTE[Niols]: This is very manual and clunky. Two things will happen. In the near
future, I will improve the provisioning script to make this a bit less clunky.
In the far future, NixOps4 will be able to communicate with Proxmox directly and
In the future, orchestration will be able to communicate with Proxmox directly and
everything will become much cleaner.
1. Choose names for your VMs. It is recommended to choose `fediXXX`, with `XXX`
@@ -15,8 +12,7 @@ everything will become much cleaner.
2. Add a basic configuration for the machine. These typically go in
`infra/machines/<name>/default.nix`. You can look at other `fediXXX` VMs to
find inspiration. You probably do not need a `nixos.module` option at this
point.
find inspiration.
2. Add a file for each of those VMs' public keys, e.g.
```
@@ -59,40 +55,6 @@ everything will become much cleaner.
FIXME: Figure out why the full configuration isn't on the machine at this
point and fix it.
## Updating existing VM configurations
Their configuration can be updated via NixOps4. Run
```sh
nixops4 deployments list
```
to see the available deployments.
This should be done from the root of the repository,
otherwise NixOps4 will fail with something like:
```
nixops4 error: evaluation: error:
… while calling the 'getFlake' builtin
error: path '/nix/store/05nn7krhvi8wkcyl6bsysznlv60g5rrf-source/flake.nix' does not exist, evaluation: error:
… while calling the 'getFlake' builtin
error: path '/nix/store/05nn7krhvi8wkcyl6bsysznlv60g5rrf-source/flake.nix' does not exist
```
Then, given a deployment (eg. `fedi200`), run
```sh
nixops4 apply <deployment>
```
Alternatively, to run the `default` deployment, which contains all the VMs, run
```sh
nixops4 apply
```
## Removing an existing VM
See `infra/proxmox-remove.sh --help`.

secrets/.envrc (new file, 10 lines added)
View file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
# the shebang is ignored, but nice for editors
# shellcheck shell=bash
if type -P lorri &>/dev/null; then
eval "$(lorri direnv)"
else
echo 'while direnv evaluated .envrc, could not find the command "lorri" [https://github.com/nix-community/lorri]'
use_nix
fi

View file

@@ -22,7 +22,7 @@ As an example, let us add a secret in a file “cheeses” whose content should
extension); this will open your `$EDITOR`; enter “best ones come
unpasteurised”, save and close.
3. If you are doing something flake-related such as NixOps4, remember to commit
3. If you are doing something flake-related, remember to commit
or at least stage the secret.
4. In the machine's configuration, load our `ageSecrets` NixOS module, declare the machine's host key and start using your secrets, eg.:

View file

@@ -1,4 +1,26 @@
{
system ? builtins.currentSystem,
sources ? import ../npins,
pkgs ? import sources.nixpkgs { inherit system; },
}:
let
inherit (sources) agenix;
in
{
mapping = import ./secrets.nix;
rootPath = ./.;
# shell for working with secrets directly (provides `agenix`)
shell = pkgs.mkShellNoCC {
packages = [
(pkgs.callPackage "${agenix}/pkgs/agenix.nix" { })
];
};
# re-export inputs so they can be overridden granularly
# (they can't be accessed from the outside any other way)
inherit
sources
system
pkgs
;
}

secrets/shell.nix (new file, 1 line added)
View file

@@ -0,0 +1 @@
(import ./. { }).shell
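With this in place, a secrets-editing session could look like the following sketch (assuming the upstream `agenix` CLI):

```sh
cd secrets
nix-shell              # provides `agenix` via shell.nix
agenix -e cheeses.age  # edit (or create) a secret in $EDITOR
```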

services/.envrc (new file, 10 lines added)
View file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
# the shebang is ignored, but nice for editors
# shellcheck shell=bash
if type -P lorri &>/dev/null; then
eval "$(lorri direnv)"
else
echo 'while direnv evaluated .envrc, could not find the command "lorri" [https://github.com/nix-community/lorri]'
use_nix
fi

services/default.nix (new file, 14 lines added)
View file

@@ -0,0 +1,14 @@
{
system ? builtins.currentSystem,
sources ? import ../npins,
pkgs ? import sources.nixpkgs {
inherit system;
},
}:
{
tests = {
mastodon = import ./tests/mastodon.nix { inherit pkgs; };
pixelfed-garage = import ./tests/pixelfed-garage.nix { inherit pkgs; };
peertube = import ./tests/peertube.nix { inherit pkgs; };
};
}
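The exposed tests can then be built individually or all at once, e.g.:

```sh
cd services
nix-build -A tests.mastodon  # a single service test
nix-build -A tests           # all of them
```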

View file

@@ -33,7 +33,7 @@ in
type = types.submodule {
options = {
cores = mkOption {
description = "number of cores; should be obtained from NixOps4";
description = "number of cores; should be obtained from TF";
type = types.int;
};

View file

@@ -22,7 +22,7 @@ in
description = ''
Internal option; change at your own risk.
FIXME: should it be provided by NixOps4?
FIXME: should it be provided by TF?
or maybe we should just ask for a main secret from which to derive all the others?
'';
};