Compare commits


17 commits

Author SHA1 Message Date
24e66476aa
add some comments 2025-05-12 08:38:17 +02:00
ca03480e35
add todo to move template to host 2025-05-12 08:12:31 +02:00
89d9a8eef6
remove hard-coded VM info 2025-05-11 19:25:52 +02:00
53b7edcbdf
fix for 285 2025-05-11 19:22:49 +02:00
eeb3970fda
in TF distinguish base from regular config 2025-05-11 19:22:49 +02:00
1cb5296ecb
temp settle on hard-coded host 2025-05-11 19:22:49 +02:00
c0ea144712
temp settle on testing just garage 2025-05-11 19:22:49 +02:00
7147108d6a
add package nixos-generators 2025-05-11 19:22:49 +02:00
937bd82e67
fork terraform proxmox provider to support content type images 2025-05-11 19:22:49 +02:00
dd5a6335b1
proxmox
pass in description

fix syntax

configure proxmox provider

typo

add doc comment in existing modules

add comment

allow insecure proxmox connection for use in dev

wip proxmox progress

use service configurations moved to machine-independent location

wire settings directly without option block terraform

adjust cwd

try tf on null input

update .envrc.sample with sample proxmox credentials
2025-05-11 19:22:49 +02:00
4af36e4f65
run direnv allow in panel to ensure it can get proxmox credentials 2025-05-11 18:23:05 +02:00
a9b0e88315
iso defaults 2025-05-11 18:23:05 +02:00
edfbc7d03a
factor out settings for use in base install 2025-05-11 18:23:05 +02:00
682b533b49
switch imports from lookup paths to explicit npins to keep things pure for tests 2025-05-11 18:23:05 +02:00
3834d92762
drop nixops-specific fediversityVm properties set only in static machines directories 2025-05-11 18:23:05 +02:00
84e5b67d25
account for 285 2025-05-11 18:23:05 +02:00
ec47484186
gitignore pydantic schema 2025-05-11 18:23:05 +02:00
102 changed files with 1231 additions and 1139 deletions

@@ -15,7 +15,7 @@ jobs:
- uses: actions/checkout@v4
- run: nix-build -A tests
check-peertube:
check-services:
runs-on: native
steps:
- uses: actions/checkout@v4
@@ -32,3 +32,9 @@ jobs:
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-basic -L
check-infra:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: cd infra && nix-build -A tests

.gitignore

@@ -1,3 +1,9 @@
.npins.json
.terraform/
.terraform.lock.hcl
.terraform.tfstate.lock.info
terraform.tfstate*
.auto.tfvars.json
.DS_Store
.idea
*.log

README.md

@@ -1,8 +1,7 @@
# The Fediversity project
This repository contains all the code and code-related files having to do with
[the Fediversity project](https://fediversity.eu/), with the notable exception
of [NixOps4 that is hosted on GitHub](https://github.com/nixops4/nixops4).
[the Fediversity project](https://fediversity.eu/).
## Goals
@@ -81,27 +80,15 @@ Not everyone has the expertise and time to run their own server.
The software includes technical configuration that links software components.
Most user-facing configuration remains untouched by the deployment process.
> Example: NixOps4 is used to deploy [Pixelfed](https://pixelfed.org).
> Example: OpenTofu is used to deploy [Pixelfed](https://pixelfed.org).
- Migrate
Move service configurations and user data to a different hosting provider.
- [NixOps4](https://github.com/nixops4/nixops4)
- [OpenTofu](https://opentofu.org/)
A tool for deploying and managing resources through the Nix language.
NixOps4 development is supported by the Fediversity project
- Resource
A [resource for NixOps4](https://nixops.dev/manual/development/concept/resource.html) is any external entity that can be declared with NixOps4 expressions and manipulated with NixOps4, such as a virtual machine, an active NixOS configuration, a DNS entry, or a customer database.
- Resource provider
A resource provider for NixOps4 is an executable that communicates between a resource and NixOps4 using a standardised protocol, allowing [CRUD operations](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) on the resources to be performed by NixOps4.
Refer to the [NixOps4 manual](https://nixops.dev/manual/development/resource-provider/index.html) for details.
> Example: We need a resource provider for obtaining deployment secrets from a database.
An infrastructure-as-code tool, and open-source (MPL 2.0) fork of Terraform.
## Development
@@ -118,9 +105,6 @@ Contact the project team if you have questions or suggestions, or if you're inte
Most of the directories in this repository have their own README going into more
details as to what they are for. As an overview:
- [`deployment/`](./deployment) contains work to generate a full Fediversity
deployment from a minimal configuration.
- [`infra/`](./infra) contains the configurations for the various VMs that are
in production for the project, for instance the Git instances or the Wiki, as
well as means to provision and set up new ones.
@@ -128,14 +112,8 @@ details as to what they are for. As an overview:
- [`keys/`](./keys) contains the public keys of the contributors to this project
as well as the systems that we administrate.
- [`matrix/`](./matrix) contains everything having to do with setting up a
fully-featured Matrix server.
- [`secrets/`](./secrets) contains the secrets that need to get injected into
machine configurations.
- [`services/`](./services) contains our effort to make Fediverse applications
work seamlessly together in our specific setting.
- [`website/`](./website) contains the framework and the content of [the
Fediversity website](https://fediversity.eu/)

@@ -24,6 +24,7 @@ let
## Add a directory here if pre-commit hooks shouldn't apply to it.
optout = [
"npins"
".terraform"
];
excludes = map (dir: "^${dir}/") optout;
addExcludes = lib.mapAttrs (_: c: c // { inherit excludes; });
@@ -41,6 +42,9 @@ in
shell = pkgs.mkShellNoCC {
inherit (pre-commit-check) shellHook;
buildInputs = pre-commit-check.enabledPackages;
packages = [
pkgs.nixfmt-rfc-style
];
};
tests = {

deployment/README.md

@@ -1,6 +0,0 @@
# Deployment
This repository contains work to generate a full Fediversity deployment from a
minimal configuration. This is different from [`../services/`](../services) that
focuses on one machine, providing a polished and unified interface to different
Fediverse services.

infra/.envrc.sample (new file)

@@ -0,0 +1 @@
export PROXMOX_VE_API_TOKEN="myuser@ProcoliX!TOKEN_NAME=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

infra/.gitignore (new file)

@@ -0,0 +1 @@
**/.envrc

infra/README.md

@@ -1,98 +1,37 @@
# Infra
# service deployment
This directory contains the definition of [the VMs](machines.md) that host our
infrastructure.
deploys [NixOS](https://nixos.org/) templates using [OpenTofu](https://opentofu.org/).
## Provisioning VMs with an initial configuration
## requirements
NOTE[Niols]: This is very manual and clunky. Two things will happen. In the near
future, I will improve the provisioning script to make this a bit less clunky.
In the far future, NixOps4 will be able to communicate with Proxmox directly and
everything will become much cleaner.
- [nix](https://nix.dev/)
1. Choose names for your VMs. It is recommended to choose `fediXXX`, with `XXX`
above 100. For instance, `fedi117`.
## usage
2. Add a basic configuration for the machine. These typically go in
`infra/machines/<name>/default.nix`. You can look at other `fediXXX` VMs to
find inspiration. You probably do not need a `nixos.module` option at this
point.
### development
3. Add a file for each of those VMs' public keys, e.g.
```
touch keys/systems/fedi117.pub
```
Those files need to exist during provisioning, but their content matters only
when updating the machines' configuration.
FIXME: Remove this step by making the provisioning script not fail when the
public key does not exist yet.
4. Run the provisioning script:
```
sh infra/proxmox-provision.sh fedi117
```
The script can take several ids at the same time. It requires some
authentication options and provides several more. See `--help`.
5. (Optional) Add a DNS entry for the machine; for instance `fedi117.abundos.eu
A 95.215.187.117`.
6. Grab the public host key of each machine in question and add it to the
repository. For instance:
```
ssh fedi117.abundos.eu 'sudo cat /etc/ssh/ssh_host_ed25519_key.pub' > keys/systems/fedi117.pub
```
FIXME: Make the provisioning script do that for us.
7. Regenerate the list of machines:
```
sh infra/machines.md.sh
```
Commit it with the machine's configuration, public key, etc.
8. At this point, the machine has only a very basic configuration, with just
enough for it to boot and be reachable. Go on to the next section to update
the machine and put an actual configuration in place.
FIXME: Figure out why the full configuration isn't on the machine at this
point and fix it.
## Updating existing VM configurations
Their configuration can be updated via NixOps4. Run
before using other commands, if not using direnv:
```sh
nixops4 deployments list
nix-shell
```
to see the available deployments.
This should be done from the root of the repository,
otherwise NixOps4 will fail with something like:
```
nixops4 error: evaluation: error:
… while calling the 'getFlake' builtin
error: path '/nix/store/05nn7krhvi8wkcyl6bsysznlv60g5rrf-source/flake.nix' does not exist, evaluation: error:
… while calling the 'getFlake' builtin
error: path '/nix/store/05nn7krhvi8wkcyl6bsysznlv60g5rrf-source/flake.nix' does not exist
```
Then, given a deployment (eg. `fedi200`), run
then to initialize, or after updating pins or TF providers:
```sh
nixops4 apply <deployment>
setup
```
Alternatively, to run the `default` deployment, which contains all the VMs, run
then, one can use the `tofu` CLI in the sub-folders.
```sh
nixops4 apply
```
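The intended OpenTofu workflow can be sketched end to end as follows. This is a non-authoritative sketch: the `setup` helper comes from the dev shell introduced in this diff, and `infra/dev` is one of the sub-folders it adds; adapt the folder to your target.

```shell
# from the root of the repository
nix-shell        # enter the dev shell, if not using direnv

setup            # initialize, or re-run after updating pins or TF providers

# then use the tofu CLI in one of the sub-folders, e.g.:
cd infra/dev
tofu plan        # preview the deployment
tofu apply       # apply it
```

These commands require network access to the Proxmox endpoint and the credentials described below, so they only make sense against a live environment.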
## credentials
## Removing an existing VM
Credentials may be placed in each sub-folder's `.envrc`; see:
See `infra/proxmox-remove.sh --help`.
- `.envrc.sample`
- TF [proxmox provider](https://registry.terraform.io/providers/bpg/proxmox/latest/docs#environment-variables-summary)
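As a sketch, a sub-folder's `.envrc` could mirror the `.envrc.sample` added in this diff (the token value is a placeholder, and the exact path is up to you):

```shell
# infra/<sub-folder>/.envrc -- picked up by direnv; git-ignored via infra/.gitignore
export PROXMOX_VE_API_TOKEN="myuser@ProcoliX!TOKEN_NAME=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```

This is a config fragment only; run `direnv allow` in the sub-folder for it to take effect.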
## implementing
Proper documentation is still TODO.
Until then, a reference implementation can be found in [`panel/`](https://git.fediversity.eu/Fediversity/Fediversity/src/branch/main/panel).

@@ -0,0 +1,37 @@
# base configuration, also used in the initial NixOS install,
# enabling further configs to be pushed afterwards.
{ lib, modulesPath, ... }:
let
inherit (lib) attrValues;
keys = import ../../../keys;
in
{
imports = [
"${modulesPath}/virtualisation/qemu-guest-agent.nix"
"${modulesPath}/virtualisation/qemu-vm.nix"
"${modulesPath}/profiles/qemu-guest.nix"
./hardware.nix
./users.nix
];
time.timeZone = "Europe/Amsterdam";
i18n.defaultLocale = "en_US.UTF-8";
system.stateVersion = "24.05"; # do not change
services.qemuGuest.enable = true;
networking.firewall.enable = true;
services.openssh = {
enable = true;
settings.PasswordAuthentication = false;
};
## TODO Remove direct root authentication, see #24
users.users.root.openssh.authorizedKeys.keys = attrValues keys.contributors;
# FIXME un-hardcode
networking.nameservers = [
"95.215.185.6"
"95.215.185.7"
"2a00:51c0::5fd7:b906"
"2a00:51c0::5fd7:b907"
];
}

@@ -1,21 +1,9 @@
{ lib, ... }:
let
inherit (lib) mkDefault;
in
{
imports = [
./hardware.nix
./base.nix
./networking.nix
./users.nix
];
time.timeZone = "Europe/Amsterdam";
i18n.defaultLocale = "en_US.UTF-8";
system.stateVersion = "24.05"; # do not change
nixpkgs.hostPlatform = mkDefault "x86_64-linux";
## This is just nice to have, but it is also particularly important for the
## Forgejo CI runners because the Nix configuration in the actions is directly
## taken from here.

@@ -1,7 +1,12 @@
{ modulesPath, ... }:
let
sources = import ../../../npins;
in
{
imports = [ (modulesPath + "/profiles/qemu-guest.nix") ];
imports = [
"${modulesPath}/profiles/qemu-guest.nix"
"${sources.disko}/module.nix"
];
boot = {
loader = {

@@ -6,11 +6,6 @@ let
in
{
config = {
services.openssh = {
enable = true;
settings.PasswordAuthentication = false;
};
networking = {
hostName = config.fediversityVm.name;
domain = config.fediversityVm.domain;
@@ -46,13 +41,6 @@ in
interface = "eth0";
};
nameservers = [
"95.215.185.6"
"95.215.185.7"
"2a00:51c0::5fd7:b906"
"2a00:51c0::5fd7:b907"
];
firewall.enable = false;
nftables = {
enable = true;

@@ -2,81 +2,11 @@
let
inherit (lib) mkOption;
inherit (lib.types) types;
in
{
options.fediversityVm = {
##########################################################################
## Meta
name = mkOption {
description = ''
The name of the machine. Most of the time, this will look like `vm02XXX`
or `fediYYY`.
'';
};
proxmox = mkOption {
type = types.nullOr (
types.enum [
"procolix"
"fediversity"
]
);
description = ''
The Proxmox instance. This is used for provisioning only and should be
set to `null` if the machine is not a VM.
'';
};
vmId = mkOption {
# REVIEW: There is `types.ints.between` but maybe not `types.ints.above`?
type = types.nullOr (types.addCheck types.int (x: x >= 100));
description = ''
The id of the machine in the corresponding Proxmox. This is used for
provisioning only and should be set to `null` if the machine is not a
VM.
'';
};
description = mkOption {
description = ''
A human-readable description of the machine's purpose. It should consist
of a first line giving a very short description, followed by a blank
line, then more details if necessary.
'';
default = "";
};
##########################################################################
## Virtualised hardware
sockets = mkOption {
type = types.int;
description = "The number of sockets of the VM.";
default = 1;
};
cores = mkOption {
type = types.int;
description = "The number of cores of the VM.";
default = 1;
};
memory = mkOption {
type = types.int;
description = "The amount of memory of the VM in MiB.";
default = 2048;
};
diskSize = mkOption {
type = types.int;
description = "The amount of disk of the VM in GiB.";
default = 32;
};
##########################################################################
## Networking
@@ -93,7 +23,7 @@ in
description = ''
The IP address of the machine, version 4. It will be injected as a
value in `networking.interfaces.eth0`, but it will also be used to
communicate with the machine via NixOps4.
communicate with the machine.
'';
};
@@ -118,7 +48,7 @@ in
description = ''
The IP address of the machine, version 6. It will be injected as a
value in `networking.interfaces.eth0`, but it will also be used to
communicate with the machine via NixOps4.
communicate with the machine.
'';
};
@@ -137,21 +67,5 @@ in
default = "2a00:51c0:12:1201::1"; # FIXME: compute default from `address` and `prefixLength`.
};
};
hostPublicKey = mkOption {
description = ''
The ed25519 host public key of the machine. It is used to filter Age
secrets and only keep the relevant ones, and to feed to NixOps4.
'';
};
unsafeHostPrivateKey = mkOption {
default = null;
description = ''
The ed25519 host private key of the machine. It is used when
provisioning to have a predictable public key. Warning: only ever use
this for testing machines, as it is a security hole for so many reasons.
'';
};
};
}

@@ -8,8 +8,6 @@ let
inherit (lib) attrValues elem mkDefault;
inherit (lib.attrsets) concatMapAttrs optionalAttrs;
inherit (lib.strings) removeSuffix;
sources = import ../../npins;
inherit (sources) nixpkgs agenix disko;
secretsPrefix = ../../secrets;
secrets = import (secretsPrefix + "/secrets.nix");
@@ -17,48 +15,23 @@ let
in
{
imports = [ ./options.nix ];
imports = [
./options.nix
./nixos
];
fediversityVm.hostPublicKey = mkDefault keys.systems.${config.fediversityVm.name};
## Read all the secrets, filter the ones that are supposed to be readable
## with this host's public key, and add them correctly to the configuration
## as `age.secrets.<name>.file`.
age.secrets = concatMapAttrs (
name: secret:
optionalAttrs (elem config.fediversityVm.hostPublicKey secret.publicKeys) {
${removeSuffix ".age" name}.file = secretsPrefix + "/${name}";
}
) secrets;
ssh = {
host = config.fediversityVm.ipv4.address;
hostPublicKey = config.fediversityVm.hostPublicKey;
};
inherit nixpkgs;
## The configuration of the machine. We strive to keep in this file only the
## options that really need to be injected from the resource. Everything else
## should go into the `./nixos` subdirectory.
nixos.module = {
imports = [
(import "${agenix}/modules/age.nix")
(import "${disko}/module.nix")
./options.nix
./nixos
];
## Inject the shared options from the resource's `config` into the NixOS
## configuration.
fediversityVm = config.fediversityVm;
## Read all the secrets, filter the ones that are supposed to be readable
## with this host's public key, and add them correctly to the configuration
## as `age.secrets.<name>.file`.
age.secrets = concatMapAttrs (
name: secret:
optionalAttrs (elem config.fediversityVm.hostPublicKey secret.publicKeys) ({
${removeSuffix ".age" name}.file = secretsPrefix + "/${name}";
})
) secrets;
## FIXME: Remove direct root authentication once the NixOps4 NixOS provider
## supports users with password-less sudo.
users.users.root.openssh.authorizedKeys.keys = attrValues keys.contributors ++ [
# allow our panel vm access to the test machines
keys.panel
];
};
## FIXME: Remove direct root authentication once the NixOps4 NixOS provider
## supports users with password-less sudo.
users.users.root.openssh.authorizedKeys.keys = attrValues keys.contributors;
}

infra/common/shared.nix (new file)

@@ -0,0 +1,27 @@
{
pkgs,
config,
...
}:
let
inherit (config.terraform) hostname domain initialUser;
sources = import ../../npins;
in
{
imports = [
"${sources.agenix}/modules/age.nix"
"${sources.disko}/module.nix"
../../services/fediversity
./resource.nix
];
fediversityVm.name = hostname;
fediversity = {
inherit domain;
temp.initialUser = {
inherit (initialUser) username email displayName;
# FIXME: disgusting, but nvm, this is going to be replaced by
# proper central authentication at some point
passwordFile = pkgs.writeText "password" initialUser.password;
};
};
}

infra/default.nix (new file)

@@ -0,0 +1,31 @@
{
system ? builtins.currentSystem,
sources ? import ../npins,
pkgs ? import sources.nixpkgs { inherit system; },
}:
let
inherit (pkgs) lib;
setup = import ./setup.nix { inherit lib pkgs sources; };
in
{
# shell for testing TF directly
shell = pkgs.mkShellNoCC {
packages = [
(import ./tf.nix { inherit lib pkgs; })
pkgs.direnv
pkgs.jaq
pkgs.nixos-generators
setup
];
};
tests = pkgs.callPackage ./tests.nix { };
# re-export inputs so they can be overridden granularly
# (they can't be accessed from the outside any other way)
inherit
sources
system
pkgs
;
}

infra/dev/main.tf (new file)

@@ -0,0 +1,44 @@
locals {
vm_domain = "abundos.eu"
}
module "nixos" {
source = "../sync-nix"
vm_domain = local.vm_domain
hostname = each.value.hostname
config_nix = each.value.config_nix
config_tf = each.value.config_tf
for_each = { for name, inst in {
# wiki = "vm02187" # does not resolve
# forgejo = "vm02116" # does not resolve
# TODO: move these to a separate `host` dir
dns = "fedi200"
fedipanel = "fedi201"
} : name => {
hostname = inst
config_tf = {
terraform = {
domain = local.vm_domain
hostname = inst
}
}
config_nix = <<-EOF
{
# note: interpolations here are TF ones
imports = [
# shared NixOS config
${path.root}/../common/shared.nix
# FIXME: separate template options by service
${path.root}/options.nix
# for service `forgejo` import `forgejo.nix`
${path.root}/../../machines/dev/${inst}/${name}.nix
# FIXME: get VM details from TF
${path.root}/../../machines/dev/${inst}
];
}
EOF
}
}
}

infra/dev/options.nix (new file)

@@ -0,0 +1,29 @@
# nix options expected to be set from TF here
# TODO: could (part of) this be generated somehow? c.f #275
{
lib,
...
}:
let
inherit (lib) types mkOption;
inherit (types) str enum;
in
{
options.terraform = {
domain = mkOption {
type = enum [
"fediversity.net"
];
description = ''
Apex domain under which the services will be deployed.
'';
default = "fediversity.net";
};
hostname = mkOption {
type = str;
description = ''
Internal name of the host, e.g. test01
'';
};
};
}

infra/dev/variables.tf (new file)

@@ -0,0 +1 @@

@@ -1,15 +0,0 @@
<!-- This file is auto-generated by `machines.md.sh` from the machines'
configuration. -->
# Machines
Currently, this repository keeps track of the following VMs:
Machine | Proxmox | Description
--------|---------|-------------
[`fedi200`](./fedi200) | fediversity | Testing machine for Hans
[`fedi201`](./fedi201) | fediversity | FediPanel
[`vm02116`](./vm02116) | procolix | Forgejo
[`vm02187`](./vm02187) | procolix | Wiki
This table excludes all machines with names starting with `test`.

@@ -1,43 +0,0 @@
#!/usr/bin/env sh
set -euC
cd "$(dirname "$0")"
{
cat <<\EOF
<!-- This file is auto-generated by `machines.md.sh` from the machines'
configuration. -->
# Machines
Currently, this repository keeps track of the following VMs:
Machine | Proxmox | Description
--------|---------|-------------
EOF
vmOptions=$(
cd ..
nix eval \
--impure --raw --expr "
builtins.toJSON (builtins.getFlake (builtins.toString ./.)).vmOptions
" \
--log-format raw --quiet
)
## NOTE: `jq`'s `keys` is alphabetically sorted, just what we want here.
for machine in $(echo "$vmOptions" | jq -r 'keys[]'); do
if [ "${machine#test}" = "$machine" ]; then
proxmox=$(echo "$vmOptions" | jq -r ".$machine.proxmox")
description=$(echo "$vmOptions" | jq -r ".$machine.description" | head -n 1)
# shellcheck disable=SC2016
printf '[`%s`](./%s) | %s | %s\n' "$machine" "$machine" "$proxmox" "$description"
fi
done
cat <<\EOF
This table excludes all machines with names starting with `test`.
EOF
} >| machines.md

@@ -1,38 +0,0 @@
{
fediversityVm = {
vmId = 2116;
proxmox = "procolix";
description = "Forgejo";
ipv4.address = "185.206.232.34";
ipv6.address = "2a00:51c0:12:1201::20";
};
nixos.module =
{ lib, ... }:
{
imports = [
./forgejo.nix
];
## vm02116 is running on old hardware based on a Xen VM environment, so it
## needs these extra options. Once the VM gets moved to a newer node, these
## two options can safely be removed.
boot.initrd.availableKernelModules = [ "xen_blkfront" ];
services.xe-guest-utilities.enable = true;
## NOTE: This VM was created manually, which requires us to override the
## default disko-based `fileSystems` definition.
fileSystems = lib.mkForce {
"/" = {
device = "/dev/disk/by-uuid/3802a66d-e31a-4650-86f3-b51b11918853";
fsType = "ext4";
};
"/boot" = {
device = "/dev/disk/by-uuid/2CE2-1173";
fsType = "vfat";
};
};
};
}

@@ -1,36 +0,0 @@
{
fediversityVm = {
vmId = 2187;
proxmox = "procolix";
description = "Wiki";
ipv4.address = "185.206.232.187";
ipv6.address = "2a00:51c0:12:1201::187";
};
nixos.module =
{ lib, ... }:
{
imports = [
./wiki.nix
];
## NOTE: This VM was created manually, which requires us to override the
## default disko-based `fileSystems` definition.
fileSystems = lib.mkForce {
"/" = {
device = "/dev/disk/by-uuid/a46a9c46-e32b-4216-a4aa-8819b2cd0d49";
fsType = "ext4";
};
"/boot" = {
device = "/dev/disk/by-uuid/6AB5-4FA8";
fsType = "vfat";
options = [
"fmask=0022"
"dmask=0022"
];
};
};
};
}

@@ -1,223 +0,0 @@
# Provisioning VMs via Proxmox
NOTE: This directory is outdated and most of the interesting code has moved to
`infra/`. There is still some information to extract from here, but treat all
that you read with a grain of salt.
## Quick links
Proxmox API doc
: <https://pve.proxmox.com/pve-docs/api-viewer>
Fediversity Proxmox
: <http://192.168.51.81:8006/>
## Basic terminology
Node
: physical host
## Fediversity Proxmox
- It is only accessible via Procolix's VPN:
- Get credentials for the VPN portal and Proxmox from
[Kevin](https://git.fediversity.eu/kevin).
- Log in to the [VPN
portal](https://vpn.fediversity.eu/vpn-user-portal/home).
- Create a **New Configuration**:
- Select **WireGuard (UDP)**
- Enter some name, e.g. `fediversity`
- Click Download
- Write the WireGuard configuration to a file
`fediversity-vpn.config` next to your NixOS configuration
- Add that file's path to `.git/info/exclude` and make sure
it doesn't otherwise leak (for example, use
[Agenix](https://github.com/ryantm/agenix) to manage
secrets)
- To your NixOS configuration, add
``` nix
networking.wg-quick.interfaces.fediversity.configFile = toString ./fediversity-vpn.config;
```
- Select "Proxmox VE authentication server".
- Ignore the "You do not have a valid subscription" message.
## Automatically
This directory contains scripts that can automatically provision or
remove a Proxmox VM. For now, they are tied to one node in the
Fediversity Proxmox, but it would not be difficult to make them more
generic. Try:
```sh
bash proxmox/provision.sh --help
bash proxmox/remove.sh --help
```
## Preparing the machine configuration
- It is nicer if the machine is a QEMU guest. On NixOS:
``` nix
services.qemuGuest.enable = true
```
- Choose a name for your machine.
- Choose static IPs for your machine. The IPv4 and IPv6 subnets
available for Fediversity testing are:
- `95.215.187.0/24`. Gateway is `95.215.187.1`.
- `2a00:51c0:13:1305::/64`. Gateway is `2a00:51c0:13:1305::1`.
- I have been using id `XXX` (starting from `001`), name `fediXXX`,
`95.215.187.XXX` and `2a00:51c0:13:1305::XXX`.
- Name servers should be `95.215.185.6` and `95.215.185.7`.
- Check [Netbox](https://netbox.protagio.org) to see which addresses
are free.
## Manually via the GUI
### Upload your ISO
- Go to Fediversity proxmox.
- In the left view, expand under the node that you want and click on
"local".
- Select "ISO Images", then click "Upload".
- Note: You can also download from URL.
- Note: You should click on "local" and not "local-zfs".
### Creating the VM
- Click "Create VM" at the top right corner.
#### General
Node
: which node will host the VM; it has to be the same node the ISO was uploaded to
VM ID
: Has to be unique, probably best to use the `xxxx` in `vm0xxxx`
(yet to be decided)
Name
: Usually `vm` + 5 digits, e.g. `vm02199`
Resource pool
: Fediversity
#### OS
Use CD/DVD disc image file (iso)
:
Storage
: local, meaning the node's local storage.
ISO image
: select the image previously uploaded
No need to touch anything else
#### System
BIOS
: OVMF (UEFI)
EFI Storage
: `linstor_storage`; this is a storage shared by all of the Proxmox
machines.
Pre-Enroll keys
: MUST be unchecked
Qemu Agent
: check
#### Disks
- Tick "advanced" at the bottom.
- Disk size (GiB): 40 (depending on requirements)
- SSD emulation: check (only visible if "Advanced" is checked)
- Discard: check, so that blocks of removed data are cleared
#### CPU
Sockets
: 1 (depending on requirements)
Cores
: 2 (depending on requirements)
Enable NUMA
: check
#### Memory
Memory (MiB)
: choose what you want
Ballooning Device
: leave checked (only visible if "Advanced" is checked)
#### Network
Bridge
: `vnet1306`. This is the provisioning bridge;
we will change it later.
Firewall
: uncheck, we will handle the firewall on the VM itself
#### Confirm
### Install and start the VM
- Start the VM a first time.
- Select the VM in the left panel. You might have to expand the
node on which it is hosted.
- Select "Console" and start the VM.
- Install the VM as you would any other machine.
- Shut down the VM (see "Shutdown the VM" below).
- After the VM has been installed:
- Select the VM again, then go to "Hardware".
- Double click on the CD/DVD Drive line. Select "Do not use any
media" and press OK.
- Double click on Network Device, and change the bridge to
`vnet1305`, the public bridge.
- Start the VM again.
### Remove the VM
- Shut down the VM (see "Shutdown the VM" below).
- On the top right corner, click "More", then "Remove".
- Enter the ID of the machine.
- Check "Purge from job configurations"
- Check "Destroy unreferenced disks owned by guest"
- Click "Remove".
### Move the VM to another node
- Make sure there is no ISO plugged in.
- Click on the VM. Click migrate. Choose target node. Go.
- Since the storage is shared, it should go pretty fast (~1 minute).
### Shutdown the VM
- Find the VM in the left panel.
- At the top right corner appears a "Shutdown" button with a submenu.
- Clicking "Shutdown" sends a signal to shutdown the machine. This
might not work if the machine is not listening for that signal.
- Brutal solution: in the submenu, select "Stop".
- The checkbox "Overrule active shutdown tasks" means that the machine
should be stopped even if a shutdown is currently ongoing. This is
particularly important if you have tried to shut the machine down
normally just before.

infra/operator/garage.nix (new file)

@@ -0,0 +1,34 @@
{ pkgs, ... }:
let
## NOTE: All of these secrets are publicly available in this source file
## and will end up in the Nix store. We don't care as they are only ever
## used for testing anyway.
##
## FIXME: Generate and store in state.
mastodonS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GK3515373e4c851ebaad366558";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34";
};
peertubeS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GK1f9feea9960f6f95ff404c9b";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "7295c4201966a02c2c3d25b5cea4a5ff782966a2415e3a196f91924631191395";
};
pixelfedS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GKb5615457d44214411e673b7b";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "5be6799a88ca9b9d813d1a806b64f15efa49482dbe15339ddfaf7f19cf434987";
};
in
{
fediversity = {
garage.enable = true;
pixelfed = pixelfedS3KeyConfig { inherit pkgs; };
mastodon = mastodonS3KeyConfig { inherit pkgs; };
peertube = peertubeS3KeyConfig { inherit pkgs; };
};
}

infra/operator/main.tf (new file)

@@ -0,0 +1,91 @@
terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "= 0.76.1"
}
}
}
provider "proxmox" {
endpoint = "https://192.168.51.81:8006/"
# because self-signed TLS certificate is in use
insecure = true
ssh {
agent = true
# TODO: uncomment and configure if using api_token instead of password
username = "root" # FIXME: #24
}
}
locals {
# user-facing applications
application_configs = {
# FIXME: wrap applications at the interface to grab them in one go?
mastodon = var.mastodon
pixelfed = var.pixelfed
peertube = var.peertube
}
# services shared between applications
peripherals = { for name in [
"garage"
] : name => {
# enable if any user applications are enabled
enable = anytrue([for _, app in local.application_configs: try(app.enable, false)])
}
}
}
module "nixos" {
source = "../sync-nix"
category = "operator"
description = each.key
config_nix = each.value.config_nix
config_tf = each.value.config_tf
# FIXME recheck what may be moved back to sync-nix
for_each = {for name, inst in merge(
local.peripherals,
# local.application_configs,
) : name => merge(inst, {
config_tf = {
fediversityVm = {
name = name # used in hostname, selecting secrets
domain = var.domain
}
fediversity = {
domain = var.domain
temp = {
initialUser = var.initialUser
}
}
}
config_nix = <<-EOF
{
# note interpolations here are TF ones
imports = [
# shared NixOS config
${path.root}/../common/shared.nix
# FIXME: separate template options by service
${path.root}/options.nix
# for service `mastodon` import `mastodon.nix`
# FIXME: get VM details from TF
${path.module}/${name}.nix
];
}
EOF
config_nix_base = <<-EOF
{
## FIXME: switch root authentication to users with password-less sudo, see #24
users.users.root.openssh.authorizedKeys.keys = let
keys = import ../../keys;
in [
# allow our panel vm access to the test machines
keys.panel
];
}
EOF
}) if try(inst.enable, false)}
}

@@ -0,0 +1,17 @@
{ pkgs, ... }:
let
mastodonS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GK3515373e4c851ebaad366558";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34";
};
in
{
fediversity = {
mastodon = mastodonS3KeyConfig { inherit pkgs; } // {
enable = true;
};
temp.cores = 1; # FIXME: should come from TF eventually
};
}


@ -0,0 +1,55 @@
# nix options expected to be set from TF here
# TODO: could (part of) this be generated somehow? cf. #275
{
lib,
...
}:
let
inherit (lib) types mkOption;
inherit (types) str enum submodule;
in
{
options.terraform = {
domain = mkOption {
type = enum [
"fediversity.net"
];
description = ''
Apex domain under which the services will be deployed.
'';
default = "fediversity.net";
};
hostname = mkOption {
type = str;
description = ''
Internal name of the host, e.g. test01
'';
};
initialUser = mkOption {
description = ''
Some services require an initial user to access them.
This option sets the credentials for such an initial user.
'';
type = submodule {
options = {
displayName = mkOption {
type = str;
description = "Display name of the user";
};
username = mkOption {
type = str;
description = "Username for login";
};
email = mkOption {
type = str;
description = "User's email address";
};
password = mkOption {
type = str;
description = "Password for login";
};
};
};
};
};
}


@ -0,0 +1,20 @@
{ pkgs, ... }:
let
peertubeS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GK1f9feea9960f6f95ff404c9b";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "7295c4201966a02c2c3d25b5cea4a5ff782966a2415e3a196f91924631191395";
};
in
{
fediversity = {
peertube = peertubeS3KeyConfig { inherit pkgs; } // {
enable = true;
## NOTE: Only ever used for testing anyway.
##
## FIXME: Generate and store in state.
secretsFile = pkgs.writeText "secret" "574e093907d1157ac0f8e760a6deb1035402003af5763135bae9cbd6abe32b24";
};
};
}


@ -0,0 +1,16 @@
{ pkgs, ... }:
let
pixelfedS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GKb5615457d44214411e673b7b";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "5be6799a88ca9b9d813d1a806b64f15efa49482dbe15339ddfaf7f19cf434987";
};
in
{
fediversity = {
pixelfed = pixelfedS3KeyConfig { inherit pkgs; } // {
enable = true;
};
};
}


@ -0,0 +1,51 @@
# TODO: (partially) generate, say from nix modules, cf. #275
variable "domain" {
type = string
default = "fediversity.net"
}
variable "mastodon" {
type = object({
enable = bool
})
default = {
enable = false
}
}
variable "pixelfed" {
type = object({
enable = bool
})
default = {
enable = false
}
}
variable "peertube" {
type = object({
enable = bool
})
default = {
enable = false
}
}
variable "initialUser" {
type = object({
displayName = string
username = string
email = string
# TODO: mark (nested) credentials as sensitive
# https://discuss.hashicorp.com/t/is-it-possible-to-mark-an-attribute-of-an-object-as-sensitive/24649/2
password = string
})
# FIXME: remove default when the form provides this value, see #285
default = {
displayName = "Testy McTestface"
username = "test"
email = "test@test.com"
password = "testtest"
}
}
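Since these are plain HCL-typed variables, they can also be supplied from the environment as `TF_VAR_*` variables — which is how the panel passes them in. A hypothetical sketch (variable names are the ones declared above; all values are illustrative, and object-typed values must be JSON-encoded):

```shell
# illustrative only: supplying the variables above via the environment.
# object-typed variables are passed as JSON; string variables verbatim.
export TF_VAR_domain="fediversity.net"
export TF_VAR_mastodon='{"enable":true}'
export TF_VAR_initialUser='{"displayName":"Testy McTestface","username":"test","email":"test@test.com","password":"testtest"}'
# tofu picks these up automatically, e.g.: tofu plan
```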

infra/pass-ssh-key.sh Executable file

@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -euo pipefail
# `host` names the target test machine and must be set by the caller
: "${host:?environment variable host must be set}"
mkdir -p etc/ssh
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
for keyname in ssh_host_ed25519_key ssh_host_ed25519_key.pub; do
  if [[ $keyname == *.pub ]]; then
    umask 0133  # public key: world-readable
  else
    umask 0177  # private key: owner-only
  fi
  cp "$SCRIPT_DIR/../infra/test-machines/${host}/${keyname}" "./etc/ssh/${keyname}"
done
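The umask switch in the loop above is what keeps private keys at mode 0600 and public keys at 0644; a small stand-alone demonstration of that behavior (file names here are throwaway):

```shell
# demo only: umask masks the default 0666 mode of newly created files
cd "$(mktemp -d)"
umask 0177
touch private_test_key   # 0666 & ~0177 = 0600
umask 0133
touch public_test_key    # 0666 & ~0133 = 0644
stat -c '%a %n' private_test_key public_test_key
```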


@ -1,258 +0,0 @@
#!/usr/bin/env bash
set -euC
################################################################################
## Constants
## FIXME: There seems to be a problem with file upload where the task is
## registered to `node051` no matter what node we are actually uploading to? For
## now, let us just use `node051` everywhere.
readonly node=node051
readonly tmpdir=/tmp/proxmox-remove-$RANDOM
mkdir $tmpdir
################################################################################
## Parse arguments
api_url=
username=
password=
vm_ids_or_names=
help () {
cat <<EOF
Usage: $0 [OPTION...] ID_OR_NAME [ID_OR_NAME...]
Options:
--api-url STR Base URL of the Proxmox API (required)
--username STR Username, with provider (eg. niols@pve)
--password STR Password
-h|-?|--help Show this help and exit
Options can also be provided by adding assignments to a '.proxmox' file in the
current working directory. For instance, it could contain:
api_url=https://192.168.51.81:8006/api2/json
username=mireille@pve
debug=true
Command line options take precedence over options found in the '.proxmox' file.
EOF
}
# shellcheck disable=SC2059
die () { printf '\033[31m'; printf "$@"; printf '\033[0m\n'; exit 2; }
# shellcheck disable=SC2059
die_with_help () { printf '\033[31m'; printf "$@"; printf '\033[0m\n'; help; exit 2; }
if [ -f .proxmox ]; then
# shellcheck disable=SC1091
. "$PWD"/.proxmox
fi
while [ $# -gt 0 ]; do
argument=$1
shift
case $argument in
--api-url|--api_url) readonly api_url="$1"; shift ;;
--username) readonly username=$1; shift ;;
--password) readonly password=$1; shift ;;
-h|-\?|--help) help; exit 0 ;;
-*) die_with_help "Unknown argument: '%s'." "$argument" ;;
*) vm_ids_or_names="$vm_ids_or_names $argument" ;;
esac
done
if [ -z "$vm_ids_or_names" ]; then
die_with_help "Required: at least one VM id or name.\n"
fi
if [ -z "$api_url" ] || [ -z "$username" ] || [ -z "$password" ]; then
die_with_help "Required: '--api-url', '--username' and '--password'."
fi
################################################################################
## Getting started
printf 'Authenticating...'
response=$(
http \
--verify no \
POST "$api_url"/access/ticket \
"username=$username" \
"password=$password"
)
ticket=$(echo "$response" | jq -r .data.ticket)
readonly ticket
csrf_token=$(echo "$response" | jq -r .data.CSRFPreventionToken)
readonly csrf_token
printf ' done.\n'
acquire_lock () {
until mkdir "$tmpdir/lock-$1" 2>/dev/null; do sleep 1; done
}
release_lock () {
rmdir "$tmpdir/lock-$1"
}
proxmox () {
acquire_lock proxmox
http \
--verify no \
--form \
"$@" \
"Cookie:PVEAuthCookie=$ticket" \
"CSRFPreventionToken:$csrf_token"
release_lock proxmox
}
## Way to inject different behaviour on unexpected status.
# shellcheck disable=SC2317
default_proxmox_sync_unexpected_status_handler () {
die "unexpected status: '%s'" "$1"
}
proxmox_sync_unexpected_status_handler=default_proxmox_sync_unexpected_status_handler
## Synchronous variant for when the `proxmox` function would just respond an
## UPID in the `data` JSON field.
proxmox_sync () {
local response upid status
response=$(proxmox "$@")
upid=$(echo "$response" | jq -r .data)
while :; do
response=$(proxmox GET "$api_url/nodes/$node/tasks/$upid/status")
status=$(echo "$response" | jq -r .data.status)
case $status in
running) sleep 1 ;;
stopped) break ;;
*) "$proxmox_sync_unexpected_status_handler" "$status" ;;
esac
done
}
################################################################################
## Grab VM options
##
## Takes the name of the VM, grabs `.#vmOptions.<name>` and gets the id from it.
is_integer () {
[ "$1" -eq "$1" ] 2>/dev/null
}
grab_vm_options () {
local options
if is_integer "$1"; then
vm_id=$1
vm_name="#$1"
else
vm_name=$1
printf 'Grabing VM options for VM %s...\n' "$vm_name"
options=$(
nix eval \
--impure --raw --expr "
builtins.toJSON (builtins.getFlake (builtins.toString ./.)).vmOptions.$vm_name
" \
--log-format raw --quiet
)
proxmox=$(echo "$options" | jq -r .proxmox)
vm_id=$(echo "$options" | jq -r .vmId)
if [ "$proxmox" != fediversity ]; then
die "I do not know how to remove things that are not Fediversity VMs,
but I got proxmox = '%s' for VM %s." "$proxmox" "$vm_name"
fi
printf 'done grabing VM options for VM %s. Found VM %d on %s Proxmox.\n' \
"$vm_name" "$vm_id" "$proxmox"
fi
}
################################################################################
## Stop VM
stop_vm () {
printf 'Stopping VM %s...\n' "$vm_name"
proxmox_sync POST "$api_url/nodes/$node/qemu/$vm_id/status/stop" \
'overrule-shutdown'==1
printf 'done stopping VM %s.\n' "$vm_name"
}
################################################################################
## Delete VM
# shellcheck disable=SC2317
proxmox_sync_unexpected_status_handler_ignore_null () {
case $1 in
null)
printf "Attempted to delete VM %s, but got 'null' status. Maybe the VM already does not exist?\n" \
"$vm_name"
exit 0
;;
*)
default_proxmox_sync_unexpected_status_handler "$1"
;;
esac
}
delete_vm () {
printf 'Deleting VM %s...\n' "$vm_name"
proxmox_sync_unexpected_status_handler=proxmox_sync_unexpected_status_handler_ignore_null
proxmox_sync DELETE "$api_url/nodes/$node/qemu/$vm_id" \
'destroy-unreferenced-disks'==1 \
'purge'==1
proxmox_sync_unexpected_status_handler=default_proxmox_sync_unexpected_status_handler
printf 'done deleting VM %s.\n' "$vm_name"
}
################################################################################
## Main loop
printf 'Removing VMs%s...\n' "$vm_ids_or_names"
remove_vm () (
grab_vm_options "$1"
stop_vm
delete_vm
)
for vm_id_or_name in $vm_ids_or_names; do
remove_vm "$vm_id_or_name" &
done
nb_errors=0
while :; do
wait -n && :
case $? in
0) ;;
127) break ;;
*) nb_errors=$((nb_errors + 1)) ;;
esac
done
if [ "$nb_errors" != 0 ]; then
die 'encountered %d errors while removing VMs%s.' "$nb_errors" "$vm_ids_or_names"
fi
printf 'done removing VMs%s.\n' "$vm_ids_or_names"
################################################################################
## Cleanup
rm -Rf $tmpdir
exit 0

infra/setup.nix Normal file

@ -0,0 +1,20 @@
{
pkgs,
lib,
sources,
...
}:
pkgs.writeScriptBin "setup" ''
# calculated pins
echo '${lib.strings.toJSON sources}' > sync-nix/.npins.json
# generate TF lock for nix's TF providers
for category in dev operator sync-nix; do
pushd "$category"
rm -rf .terraform/
rm -f .terraform.lock.hcl
# suppress warning on architecture-specific generated lock file:
# `Warning: Incomplete lock file information for providers`.
tofu init -input=false 1>/dev/null
popd
done
''

infra/shell.nix Normal file

@ -0,0 +1 @@
(import ./. { }).shell

infra/sync-nix/main.tf Normal file

@ -0,0 +1,239 @@
terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "= 0.76.1"
}
}
}
locals {
system = "x86_64-linux"
node_name = "node051"
dump_name = "vzdump-qemu-nixos-fediversity-${var.category}.vma.zst"
# dependency paths pre-calculated from npins
pins = jsondecode(file("${path.module}/.npins.json"))
# nix path: expose pins, use nixpkgs in flake commands (`nix run`)
nix_path = "${join(":", [for name, dir in local.pins : "${name}=${dir}"])}:flake=${local.pins["nixpkgs"]}:flake"
config_tf = merge(var.config_tf, {
})
# FIXME pass IP from generated VM
# vm_host = "${var.hostname}.${var.vm_domain}"
# vm_host = "${proxmox_virtual_environment_vm.nix_vm.ipv4_addresses[0]}"
vm_host = "fedi202.abundos.eu"
}
# FIXME move to host
# FIXME add proxmox
data "external" "base-hash" {
program = ["sh", "-c", "echo \"{\\\"hash\\\":\\\"$(nix-hash ${path.module}/../common/nixos/base.nix)\\\"}\""]
}
# hash of our code directory, used to trigger re-deploy
# FIXME calculate separately to reduce false positives
data "external" "hash" {
program = ["sh", "-c", "echo \"{\\\"hash\\\":\\\"$(nix-hash ..)\\\"}\""]
}
# FIXME move to host
resource "terraform_data" "template" {
triggers_replace = [
data.external.base-hash.result,
]
provisioner "local-exec" {
working_dir = path.root
environment = {
NIX_PATH = local.nix_path
}
# FIXME configure to use actual base image
command = <<-EOF
set -euo pipefail
nixos-generate -f proxmox -o /tmp/nixos-image
ln -s /tmp/nixos-image/vzdump-qemu-nixos-*.vma.zst /tmp/nixos-image/${local.dump_name}
EOF
}
}
# FIXME move to host
resource "proxmox_virtual_environment_file" "upload" {
lifecycle {
replace_triggered_by = [
terraform_data.template,
]
}
content_type = "images"
datastore_id = "local"
node_name = local.node_name
overwrite = true
source_file {
path = "/tmp/nixos-image/${local.dump_name}"
file_name = local.dump_name
}
}
# FIXME distinguish var.category
data "proxmox_virtual_environment_vms" "nixos_base" {
node_name = local.node_name
filter {
name = "template"
values = [true]
}
# filter {
# name = "node_name"
# values = ["nixos-base"]
# }
}
resource "proxmox_virtual_environment_vm" "nix_vm" {
lifecycle {
replace_triggered_by = [
proxmox_virtual_environment_file.upload,
]
}
node_name = local.node_name
pool_id = "Fediversity"
description = var.description
started = true
agent {
enabled = true
}
cpu {
type = "x86-64-v2-AES"
cores = var.cores
sockets = var.sockets
numa = true
}
memory {
dedicated = var.memory
}
efi_disk {
datastore_id = "linstor_storage"
type = "4m"
}
disk {
datastore_id = "linstor_storage"
interface = "scsi0"
discard = "on"
iothread = true
size = var.disk_size
ssd = true
}
clone {
datastore_id = "local"
node_name = data.proxmox_virtual_environment_vms.nixos_base.vms[0].node_name
vm_id = data.proxmox_virtual_environment_vms.nixos_base.vms[0].vm_id
full = true
}
network_device {
model = "virtio"
bridge = "vnet1306"
}
operating_system {
type = "l26"
}
scsi_hardware = "virtio-scsi-single"
bios = "ovmf"
}
# TF resource to build and deploy NixOS instances.
resource "terraform_data" "nixos" {
# trigger rebuild/deploy if any potentially used config/code changed (FIXME: narrow this?),
# skipping both steps (20+s, with the build as the bottleneck) when nothing changed.
# terraform-nixos separates the two so it deploys only if the instantiation changed,
# yet it still builds even then - which may matter less when deploying on the remote.
# having build and deploy in one resource reflects preferring to prevent no-op
# rebuilds over preventing (with fewer false positives) no-op deployments,
# as I could not find a way to prevent no-op rebuilds without merging them:
# - generic resources cannot have outputs, while we want info from the instantiation (unless built on host?).
# - `data` sources always run, which is slow for deploy and especially build.
triggers_replace = [
data.external.hash.result,
var.config_nix_base,
var.config_nix,
var.config_tf,
]
provisioner "local-exec" {
# directory to run the script from. we use the TF project root dir,
# here as a path relative to where TF is run from,
# matching calling modules' expectations on config_nix locations.
# note that absolute paths can cause false positives in triggers,
# so they are generally discouraged in TF.
working_dir = path.root
environment = {
# nix path used on build, lets us refer to e.g. nixpkgs like `<nixpkgs>`
NIX_PATH = local.nix_path
}
# TODO: refactor back to command="ignoreme" interpreter=concat([]) to protect sensitive data from error logs?
# TODO: build on target?
command = <<-EOF
set -euo pipefail
# INSTANTIATE
command=(
nix-instantiate
--expr
'let
os = import <nixpkgs/nixos> {
system = "${local.system}";
configuration = {
# nix path for debugging
nix.nixPath = [ "${local.nix_path}" ];
}
// ${var.config_nix_base}
// ${var.config_nix}
# template parameters passed in from TF thru json
// builtins.fromJSON "${replace(jsonencode(local.config_tf), "\"", "\\\"")}";
};
in
# info we want to get back out
{
substituters = builtins.concatStringsSep " " os.config.nix.settings.substituters;
trusted_public_keys = builtins.concatStringsSep " " os.config.nix.settings.trusted-public-keys;
drv_path = os.config.system.build.toplevel.drvPath;
out_path = os.config.system.build.toplevel;
}'
)
# instantiate the config in /nix/store
"$${command[@]}" -A out_path
# get the other info
json="$("$${command[@]}" --eval --strict --json)"
# DEPLOY
declare substituters trusted_public_keys drv_path
# set our variables using the json object
eval "export $(echo $json | jaq -r 'to_entries | map("\(.key)=\(.value)") | @sh')"
host="root@${local.vm_host}" # FIXME: #24
buildArgs=(
--option extra-binary-caches https://cache.nixos.org/
--option substituters $substituters
--option trusted-public-keys $trusted_public_keys
)
sshOpts=(
-o BatchMode=yes
-o StrictHostKeyChecking=no
)
# get the realized derivation to deploy
outPath=$(nix-store --realize "$drv_path" "$${buildArgs[@]}")
# deploy the config by nix-copy-closure
NIX_SSHOPTS="$${sshOpts[*]}" nix-copy-closure --to "$host" "$outPath" --gzip --use-substitutes
# switch the remote host to the config
ssh "$${sshOpts[@]}" "$host" "nix-env --profile /nix/var/nix/profiles/system --set $outPath; $outPath/bin/switch-to-configuration switch"
EOF
}
}
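The `data "external"` blocks near the top of this file shell out to `nix-hash`; the whole contract of that data source is that the program prints a single JSON object of string values on stdout. A minimal stand-in sketch, using `sha256sum` in place of `nix-hash` (which may not be on every PATH):

```shell
# sketch of the data.external program contract: emit one JSON object
# of string values on stdout. sha256sum stands in for nix-hash here.
hash="$(printf 'example input' | sha256sum | cut -d' ' -f1)"
printf '{"hash":"%s"}\n' "$hash"
```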


@ -0,0 +1,50 @@
variable "category" {
type = string
description = "Category to be used in naming the base image."
}
variable "description" {
type = string
default = ""
}
variable "sockets" {
type = number
description = "The number of sockets of the VM."
default = 1
}
variable "cores" {
type = number
description = "The number of cores of the VM."
default = 1
}
variable "memory" {
type = number
description = "The amount of memory of the VM in MiB."
default = 2048
}
variable "disk_size" {
type = number
description = "The amount of disk of the VM in GiB."
default = 32
}
variable "config_nix_base" {
type = string
description = "Nix configuration to be used in the deployed VM as well as the base install."
default = "{}"
}
variable "config_nix" {
type = string
description = "Nix configuration to be used in the deployed VM."
default = "{}"
}
variable "config_tf" {
type = any # map(any): all map elements must have the same type
default = {}
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7001;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.51";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::51";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7002;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.52";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::52";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7003;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.53";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::53";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7004;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.54";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::54";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7005;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.55";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::55";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7006;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.56";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::56";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7011;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.61";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::61";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7012;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.62";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::62";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7013;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.63";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::63";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,19 +0,0 @@
{
fediversityVm = {
vmId = 7014;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.64";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::64";
gateway = "2a00:51c0:13:1305::1";
};
};
}

infra/tests.nix Normal file

@ -0,0 +1,37 @@
{ lib, pkgs }:
let
defaults = {
virtualisation = {
memorySize = 2048;
cores = 2;
};
};
tf = pkgs.callPackage ./tf.nix {
inherit lib pkgs;
};
tfEnv = pkgs.callPackage ./tf-env.nix { };
nodes = {
server = {
environment.systemPackages = [
tf
tfEnv
];
};
};
in
lib.mapAttrs (name: test: pkgs.testers.runNixOSTest (test // { inherit name; })) {
tf-validate-dev = {
inherit defaults nodes;
testScript = ''
server.wait_for_unit("multi-user.target")
server.succeed("${lib.getExe tf} -chdir='${tfEnv}/infra/dev' validate")
'';
};
tf-validate-operator = {
inherit defaults nodes;
testScript = ''
server.wait_for_unit("multi-user.target")
server.succeed("${lib.getExe tf} -chdir='${tfEnv}/infra/operator' validate")
'';
};
}

infra/tf-env.nix Normal file

@ -0,0 +1,32 @@
{
lib,
pkgs,
sources ? import ../npins,
...
}:
pkgs.stdenv.mkDerivation {
name = "tf-repo";
src =
with lib.fileset;
toSource {
root = ../.;
# don't copy ignored files
fileset = intersection (gitTracked ../.) ../.;
};
buildInputs = [
(import ./tf.nix { inherit lib pkgs; })
(import ./setup.nix { inherit lib pkgs sources; })
];
buildPhase = ''
runHook preBuild
pushd infra
setup
popd
runHook postBuild
'';
installPhase = ''
runHook preInstall
cp -r . $out
runHook postInstall
'';
}

infra/tf.nix Normal file

@ -0,0 +1,43 @@
# FIXME: use overlays so this gets imported just once?
{
lib,
pkgs,
sources ? import ../npins,
...
}:
let
tofuProvider =
provider:
if provider ? override then
provider.override (oldArgs: {
provider-source-address =
lib.replaceStrings [ "https://registry.terraform.io/providers" ] [ "registry.opentofu.org" ]
oldArgs.homepage;
})
else
provider;
tf = pkgs.opentofu;
mkProvider =
args:
pkgs.terraform-providers.mkProvider (
{ mkProviderFetcher = { repo, ... }: sources.${repo}; } // args
);
tfPlugins = (
p: [
p.external
(mkProvider {
owner = "bpg";
repo = "terraform-provider-proxmox";
rev = "v0.76.1";
spdx = "MPL-2.0";
hash = null;
vendorHash = "sha256-3KJ7gi3UEZu31LhEtcRssRUlfsi4mIx6FGTKi1TDRdg=";
homepage = "https://registry.terraform.io/providers/bpg/proxmox";
provider-source-address = "registry.opentofu.org/bpg/proxmox";
})
]
);
in
# tf.withPlugins tfPlugins
# https://github.com/NixOS/nixpkgs/pull/358522
tf.withPlugins (p: pkgs.lib.lists.map tofuProvider (tfPlugins p))


@ -14,7 +14,7 @@ overwrite a secret without knowing its contents.)
In infra management, the systems' keys are used for security reasons; they
identify the machine that we are talking to. The contributor keys are used to
give access to the `root` user on these machines, which allows, among other
things, to deploy their configurations with NixOps4.
things, to deploy their configurations.
## Adding a contributor

machines/README.md Normal file

@ -0,0 +1,4 @@
# Machines
This directory contains the definition of [the VMs](machines.md) that host our
infrastructure.


@ -0,0 +1,2 @@
_: {
}


@ -14,10 +14,4 @@
gateway = "2a00:51c0:13:1305::1";
};
};
nixos.module = {
imports = [
./fedipanel.nix
];
};
}


@ -7,12 +7,12 @@ let
in
{
imports = [
<home-manager/nixos>
(import ../../../panel { }).module
];
security.acme = {
acceptTerms = true;
defaults.email = "beheer@procolix.com";
};
age.secrets.panel-ssh-key = {
@ -37,6 +37,24 @@ in
enable = true;
production = true;
domain = "demo.fediversity.eu";
# FIXME: make it work without this duplication
settings =
let
cfg = config.services.${name};
in
{
STATIC_ROOT = "/var/lib/${name}/static";
DEBUG = false;
ALLOWED_HOSTS = [
cfg.domain
cfg.host
"localhost"
"[::1]"
];
CSRF_TRUSTED_ORIGINS = [ "https://${cfg.domain}" ];
COMPRESS_OFFLINE = true;
LIBSASS_OUTPUT_STYLE = "compressed";
};
secrets = {
SECRET_KEY = config.age.secrets.panel-secret-key.path;
};


@ -0,0 +1,31 @@
{ lib, ... }:
{
fediversityVm = {
vmId = 2116;
proxmox = "procolix";
description = "Forgejo";
ipv4.address = "185.206.232.34";
ipv6.address = "2a00:51c0:12:1201::20";
};
## vm02116 is running on old hardware based on a Xen VM environment, so it
## needs these extra options. Once the VM gets moved to a newer node, these
## two options can safely be removed.
boot.initrd.availableKernelModules = [ "xen_blkfront" ];
services.xe-guest-utilities.enable = true;
## NOTE: This VM was created manually, which requires us to override the
## default disko-based `fileSystems` definition.
fileSystems = lib.mkForce {
"/" = {
device = "/dev/disk/by-uuid/3802a66d-e31a-4650-86f3-b51b11918853";
fsType = "ext4";
};
"/boot" = {
device = "/dev/disk/by-uuid/2CE2-1173";
fsType = "vfat";
};
};
}


@ -0,0 +1,29 @@
{ lib, ... }:
{
fediversityVm = {
vmId = 2187;
proxmox = "procolix";
description = "Wiki";
ipv4.address = "185.206.232.187";
ipv6.address = "2a00:51c0:12:1201::187";
};
## NOTE: This VM was created manually, which requires us to override the
## default disko-based `fileSystems` definition.
fileSystems = lib.mkForce {
"/" = {
device = "/dev/disk/by-uuid/a46a9c46-e32b-4216-a4aa-8819b2cd0d49";
fsType = "ext4";
};
"/boot" = {
device = "/dev/disk/by-uuid/6AB5-4FA8";
fsType = "vfat";
options = [
"fmask=0022"
"dmask=0022"
];
};
};
}


@ -25,18 +25,21 @@
"url": null,
"hash": "1w2gsy6qwxa5abkv8clb435237iifndcxq0s79wihqw11a5yb938"
},
"flake-parts": {
"type": "Git",
"disko": {
"type": "GitRelease",
"repository": {
"type": "GitHub",
"owner": "hercules-ci",
"repo": "flake-parts"
"owner": "nix-community",
"repo": "disko"
},
"branch": "main",
"pre_releases": false,
"version_upper_bound": null,
"release_prefix": null,
"submodules": false,
"revision": "c621e8422220273271f52058f618c94e405bb0f5",
"url": "https://github.com/hercules-ci/flake-parts/archive/c621e8422220273271f52058f618c94e405bb0f5.tar.gz",
"hash": "09j2dafd75ydlcw8v48vcpfm2mw0j6cs8286x2hha2lr08d232w4"
"version": "v1.11.0",
"revision": "cdf8deded8813edfa6e65544f69fdd3a59fa2bb4",
"url": "https://api.github.com/repos/nix-community/disko/tarball/v1.11.0",
"hash": "13brimg7z7k9y36n4jc1pssqyw94nd8qvgfjv53z66lv4xkhin92"
},
"git-hooks": {
"type": "Git",
@ -64,6 +67,19 @@
"url": "https://github.com/hercules-ci/gitignore.nix/archive/637db329424fd7e46cf4185293b9cc8c88c95394.tar.gz",
"hash": "02wxkdpbhlm3yk5mhkhsp3kwakc16xpmsf2baw57nz1dg459qv8w"
},
"home-manager": {
"type": "Git",
"repository": {
"type": "GitHub",
"owner": "nix-community",
"repo": "home-manager"
},
"branch": "master",
"submodules": false,
"revision": "22b326b42bf42973d5e4fe1044591fb459e6aeac",
"url": "https://github.com/nix-community/home-manager/archive/22b326b42bf42973d5e4fe1044591fb459e6aeac.tar.gz",
"hash": "0hwllnym5mrrxinjsq0p9zn39i110c1xixp4x64svl7jjm5zb4c4"
},
"htmx": {
"type": "GitRelease",
"repository": {
@ -105,6 +121,19 @@
"revision": "f33a4d26226c05d501b9d4d3e5e60a3a59991921",
"url": "https://github.com/nixos/nixpkgs/archive/f33a4d26226c05d501b9d4d3e5e60a3a59991921.tar.gz",
"hash": "1b6dm1sn0bdpcsmxna0zzspjaixa2dald08005fry5jrbjvwafdj"
},
"terraform-provider-proxmox": {
"type": "Git",
"repository": {
"type": "GitHub",
"owner": "kiaragrouwstra",
"repo": "terraform-provider-proxmox"
},
"branch": "content-type-images",
"submodules": false,
"revision": "fc12a93e0e00dd878f2bb3fd0e73575d0701b6fd",
"url": "https://github.com/kiaragrouwstra/terraform-provider-proxmox/archive/fc12a93e0e00dd878f2bb3fd0e73575d0701b6fd.tar.gz",
"hash": "1vbk4xig7dv7gccnfr7kaz6m8li8mggaz541cq3bvw08k4hf7465"
}
},
"version": 5

panel/.gitignore vendored

@ -1,3 +1,6 @@
# pydantic-generated schema
/src/panel/configuration/schema.py
# Nix
.direnv
result*


@ -21,11 +21,18 @@ in
pkgs.npins
manage
];
env = import ./env.nix { inherit lib pkgs; } // {
NPINS_DIRECTORY = toString ../npins;
CREDENTIALS_DIRECTORY = toString ./.credentials;
DATABASE_URL = "sqlite:///${toString ./src}/db.sqlite3";
};
env =
let
inherit (builtins) toString;
in
import ./env.nix { inherit lib pkgs; }
// {
NPINS_DIRECTORY = toString ../npins;
CREDENTIALS_DIRECTORY = toString ./.credentials;
DATABASE_URL = "sqlite:///${toString ./src}/db.sqlite3";
# locally: use a fixed relative reference, so we can use our newest files without copying to the store
REPO_DIR = toString ../.;
};
shellHook = ''
${lib.concatStringsSep "\n" (
map (file: "ln -sf ${file.from} ${toString ./src/${file.to}}") package.generated


@ -3,16 +3,17 @@
pkgs,
...
}:
let
inherit (builtins) toString;
in
{
REPO_DIR = toString ../.;
# explicitly use nix, as e.g. lix does not have configurable-impure-env
BIN_PATH = lib.makeBinPath [
# explicitly use nix, as e.g. lix does not have configurable-impure-env
pkgs.nix
# nixops error maybe due to our flake git hook: executing 'git': No such file or directory
pkgs.lix
pkgs.bash
pkgs.coreutils
pkgs.openssh
pkgs.git
pkgs.direnv
pkgs.jaq # tf
pkgs.nixos-generators
(import ../infra/tf.nix { inherit lib pkgs; })
];
SSH_PRIVATE_KEY_FILE = "";
}


@ -29,6 +29,9 @@ let
((pkgs.formats.pythonVars { }).generate "settings.py" cfg.settings)
(builtins.toFile "extra-settings.py" cfg.extra-settings)
];
REPO_DIR = import ../../infra/tf-env.nix {
inherit lib pkgs;
};
};
python-environment = pkgs.python3.withPackages (
@ -157,9 +160,7 @@ in
};
};
users.users.${name} = {
isNormalUser = true;
};
users.users.${name}.isNormalUser = true;
users.groups.${name} = { };
systemd.services.${name} = {
@ -167,6 +168,7 @@ in
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = [
pkgs.openssh
python-environment
manage-service
];


@ -1,5 +1,6 @@
{
lib,
pkgs,
sqlite,
python3,
python3Packages,
@ -14,7 +15,7 @@ let
root = ../src;
fileset = intersection (gitTracked ../../.) ../src;
};
pyproject = with lib; fromTOML pyproject-toml;
pyproject = fromTOML pyproject-toml;
# TODO: define this globally
name = "panel";
# TODO: we may want this in a file so it's easier to read statically
@ -89,7 +90,13 @@ python3.pkgs.buildPythonPackage {
mkdir -p $out/bin
cp -v ${src}/manage.py $out/bin/manage.py
chmod +x $out/bin/manage.py
wrapProgram $out/bin/manage.py --prefix PYTHONPATH : "$PYTHONPATH"
wrapProgram $out/bin/manage.py \
--set REPO_DIR "${
import ../../infra/tf-env.nix {
inherit lib pkgs;
}
}" \
--prefix PYTHONPATH : "$PYTHONPATH"
${lib.concatStringsSep "\n" (
map (file: "cp ${file.from} $out/${python3.sitePackages}/${file.to}") generated
)}


@ -1,3 +1,4 @@
# TODO upstream, see #248
{
lib,
buildPythonPackage,


@ -10,14 +10,19 @@ For the full list of settings and their values, see
https://docs.djangoproject.com/en/4.2/ref/settings/
"""
import re
import sys
import subprocess
import os
import json
import importlib.util
import dj_database_url
from os import environ as env
from pathlib import Path
STORE_PATTERN = re.compile("^/nix/store/[^/]+$")
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
@ -242,6 +247,7 @@ if user_settings_file is not None:
# PATH to expose to launch button
bin_path=env['BIN_PATH']
# path of the root flake to trigger nixops from, see #94.
# path of the root flake to deploy from
# for deployment this should be specified; for dev just use a relative path.
repo_dir = env["REPO_DIR"]
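
The `STORE_PATTERN` regex introduced at the top of this hunk matches a top-level Nix store path. A quick standalone illustration (the store hash shown is made up):

```python
import re

# Matches one hash-name entry directly under /nix/store,
# but not a file nested inside such an entry.
STORE_PATTERN = re.compile("^/nix/store/[^/]+$")

assert STORE_PATTERN.match("/nix/store/abc123-panel-env")
assert not STORE_PATTERN.match("/nix/store/abc123-panel-env/bin/manage.py")
assert not STORE_PATTERN.match("/tmp/not-a-store-path")
```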


@@ -1,6 +1,9 @@
from enum import Enum
import json
from os.path import expanduser
from pathlib import Path
import subprocess
import logging
import os
from django.urls import reverse_lazy
@@ -19,6 +22,8 @@ from pydantic import BaseModel
from panel import models, settings
from panel.configuration import schema
logger = logging.getLogger(__name__)
class Index(TemplateView):
template_name = 'index.html'
@@ -106,22 +111,37 @@ class DeploymentStatus(ConfigurationForm):
}
env = {
"PATH": settings.bin_path,
# needed by direnv
"HOME": str(Path.home()),
# "TF_LOG": "info",
} | {
# pass in form info to our deployment
"DEPLOYMENT": config.json()
# FIXME: ensure sensitive info is protected
f"TF_VAR_{k}": v if isinstance(v, str) else json.dumps(v) for k, v in json.loads(config.model_dump_json()).items()
}
# XXX should we not log this if it may show proxmox credentials from `.envrc`s?
# those could instead be passed as sensitive TF vars, but that would not address this.
logger.debug("env: %s", env)
cwd = f"{settings.repo_dir}/infra/operator"
# direnv wants this run upfront, and chaining in subprocess feels awkward
subprocess.check_call(["direnv", "allow"], cwd=cwd)
cmd = [
"nix",
"develop",
"--extra-experimental-features",
"configurable-impure-env",
"--command",
"nixops4",
# pick up env vars from any .gitignore'd `.envrc`.
# if this might fall back to `infra/.envrc`, which loads a nix environment,
# is this really an elegant approach to pass env vars for our
# different scenarios (deployed vs local panel vs local direct TF invocations)?
"direnv",
"exec",
cwd,
# run TF
"tofu",
# f"-chdir={cwd}",
"apply",
"test",
f"-state={cwd}/terraform.tfstate", # FIXME: separate users' state, see #313
"--auto-approve",
"-lock=false",
"-parallelism=1" # limit OOM risk
]
deployment_result = subprocess.run(
cmd,
cwd=settings.repo_dir,
env=env,
)
deployment_result = subprocess.run(cmd, cwd=cwd, env=env)
logger.debug("deployment_result: %s", deployment_result)
return deployment_result, config
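
The `TF_VAR_` flattening in the hunk above can be sketched on its own: OpenTofu/Terraform reads any environment variable named `TF_VAR_<name>` as the input variable `<name>`, and non-string values must be passed as JSON so they parse back into complex variable types. A minimal sketch (the sample config keys are hypothetical):

```python
import json

def to_tf_env(config: dict) -> dict:
    # Strings pass through unchanged; everything else is JSON-encoded,
    # which OpenTofu/Terraform decodes for bool, number, list, and map vars.
    return {
        f"TF_VAR_{key}": value if isinstance(value, str) else json.dumps(value)
        for key, value in config.items()
    }

env = to_tf_env({"domain": "example.org", "enable": True, "cores": 2})
# env == {"TF_VAR_domain": "example.org", "TF_VAR_enable": "true", "TF_VAR_cores": "2"}
```

This `env` dict would then be merged with `PATH` and `HOME` before being handed to `subprocess.run`, as the view does above.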


@@ -5,9 +5,14 @@
*/
{
nixpkgs,
nixpkgs ? <nixpkgs>,
hostKeys ? { },
nixosConfiguration,
system ? builtins.currentSystem, # may need build on remote
nixosConfiguration ? import ../infra/common/nixos/base.nix,
conf ? import "${nixpkgs}/nixos/lib/eval-config.nix" {
system = builtins.currentSystem;
modules = [ nixosConfiguration ];
},
}:
let
@@ -15,7 +20,6 @@ let
installer =
{
config,
pkgs,
lib,
...
@@ -25,8 +29,8 @@ let
name = "bootstrap";
runtimeInputs = with pkgs; [ nixos-install-tools ];
text = ''
${nixosConfiguration.config.system.build.diskoScript}
nixos-install --no-root-password --no-channel-copy --system ${nixosConfiguration.config.system.build.toplevel}
${conf.config.system.build.diskoScript}
nixos-install --no-root-password --no-channel-copy --system ${conf.config.system.build.toplevel}
${concatStringsSep "\n" (
attrValues (
mapAttrs (kind: keys: ''
@@ -42,10 +46,12 @@ let
};
in
{
imports = [ "${nixpkgs}/nixos/modules/installer/cd-dvd/installation-cd-minimal.nix" ];
nixpkgs.hostPlatform = "x86_64-linux";
imports = [
"${nixpkgs}/nixos/modules/installer/cd-dvd/installation-cd-minimal.nix"
];
nixpkgs.hostPlatform = system;
services.getty.autologinUser = lib.mkForce "root";
programs.bash.loginShellInit = nixpkgs.lib.getExe bootstrap;
programs.bash.loginShellInit = pkgs.lib.getExe bootstrap;
isoImage = {
compressImage = false;
@@ -56,4 +62,7 @@ let
};
};
in
(nixpkgs.lib.nixosSystem { modules = [ installer ]; }).config.system.build.isoImage
(import "${nixpkgs}/nixos/lib/eval-config.nix" {
inherit system;
modules = [ installer ];
}).config.system.build.isoImage

secrets/.envrc Normal file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
# the shebang is ignored, but nice for editors
# shellcheck shell=bash
if type -P lorri &>/dev/null; then
eval "$(lorri direnv)"
else
echo 'while direnv evaluated .envrc, could not find the command "lorri" [https://github.com/nix-community/lorri]'
use_nix
fi


@@ -22,7 +22,7 @@ As an example, let us add a secret in a file “cheeses” whose content should
extension); this will open your `$EDITOR`; enter “best ones come
unpasteurised”, save and close.
3. If you are doing something flake-related such as NixOps4, remember to commit
3. If you are doing something flake-related, remember to commit
or at least stage the secret.
4. In the machine's configuration, load our `ageSecrets` NixOS module, declare the machine's host key and start using your secrets, eg.:

secrets/default.nix Normal file

@@ -0,0 +1,24 @@
{
system ? builtins.currentSystem,
sources ? import ../npins,
pkgs ? import sources.nixpkgs { inherit system; },
}:
let
inherit (sources) agenix;
in
{
# shell for testing TF directly
shell = pkgs.mkShellNoCC {
packages = [
(pkgs.callPackage "${agenix}/pkgs/agenix.nix" { })
];
};
# re-export inputs so they can be overridden granularly
# (they can't be accessed from the outside any other way)
inherit
sources
system
pkgs
;
}

secrets/shell.nix Normal file

@@ -0,0 +1 @@
(import ./. { }).shell

services/.envrc Normal file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
# the shebang is ignored, but nice for editors
# shellcheck shell=bash
if type -P lorri &>/dev/null; then
eval "$(lorri direnv)"
else
echo 'while direnv evaluated .envrc, could not find the command "lorri" [https://github.com/nix-community/lorri]'
use_nix
fi


@@ -31,7 +31,7 @@ in
type = types.submodule {
options = {
cores = mkOption {
description = "number of cores; should be obtained from NixOps4";
description = "number of cores; should be obtained from TF";
type = types.int;
};


@@ -20,7 +20,7 @@ in
description = ''
Internal option; change at your own risk.
FIXME: should it be provided by NixOps4?
FIXME: should it be provided by TF?
or maybe we should just ask for a main secret from which to derive all the others?
'';
};


@@ -10,7 +10,7 @@ let
in
{
imports = [ (modulesPath + "/virtualisation/qemu-vm.nix") ];
imports = [ "${modulesPath}/virtualisation/qemu-vm.nix" ];
fediversity.garage.enable = true;


@@ -7,7 +7,7 @@
}:
{
imports = [ (modulesPath + "/virtualisation/qemu-vm.nix") ];
imports = [ "${modulesPath}/virtualisation/qemu-vm.nix" ];
config = lib.mkMerge [
{

Some files were not shown because too many files have changed in this diff.