Compare commits

..

23 commits

Author SHA1 Message Date
1076552f75
try a few ways to make form load/submit work, add stub on inline formsets to display nested forms
cf.:

- https://docs.djangoproject.com/en/dev/topics/forms/modelforms/#inline-formsets
- https://swapps.com/blog/working-with-nested-forms-with-django/
2025-03-04 10:36:15 +01:00
4273c7a608
add model changes to make it boot (allow null) 2025-03-04 09:09:50 +01:00
a088d25eb7
adjust obsolete naming 2025-03-03 17:00:16 +01:00
b3b525dee4
clean-up 2025-03-03 15:58:37 +01:00
8fabd7f0f1
reverse relationship 2025-03-03 15:51:19 +01:00
0f2c5390d6
strip out admin stuff for now 2025-03-03 15:45:39 +01:00
fb5e8ae453
rename checkboxes to fix nix model 2025-03-03 15:31:27 +01:00
014c3efc70
expand to multiple lines to allow commenting parts 2025-03-03 15:29:25 +01:00
1ebea118cc
migration: rm replaces 2025-03-03 14:26:11 +01:00
99b8fe8870
Squash migrations 2025-03-03 12:28:31 +01:00
d32e11389c
Remove trailing whitespace 2025-03-03 12:28:31 +01:00
624e354da5
fix saving for separated models 2025-03-03 12:28:31 +01:00
7c8c6c7eb9
Separation of configuration form 2025-03-03 12:28:31 +01:00
da2e7d93a9
add enable toggle, let operators have many configurations
this commit is a bit of a jumble, but it allows us to disable
a configuration so the associated deployment can in principle be
garbage-collected, and allows operators to have multiple configurations.
it also (as a temporary hack) ties the deployment subdomain to the username so it's clear to
operators where we're deploying to.
2025-03-03 12:28:31 +01:00
a97418c30e
rename form and add navigation element 2025-03-03 12:28:31 +01:00
af1c051498
don't use hungarian notation for model names 2025-03-03 12:28:31 +01:00
c8062a87d4
fix wrong rebase of base.html 2025-03-03 12:28:31 +01:00
2fb0ccf0aa
add redirection to deploy page after save 2025-03-03 12:28:31 +01:00
46107cb21d
Add domain dropdown menu 2025-03-03 12:28:31 +01:00
17570c6434
add first part of saveable login 2025-03-03 12:28:31 +01:00
bb12139f5a
Add saveable Deploy services form 2025-03-03 12:28:31 +01:00
66e88325ec
add first part of saveable login 2025-03-03 12:28:31 +01:00
Kiara Grouwstra
4c1aa34d97
add simple form, closes #106, #107
note that I haven't done much with validation or conditional
enabling/hiding - if we start using front-end frameworks those may have
opinions on this
2025-03-03 12:28:31 +01:00
344 changed files with 11062 additions and 3815 deletions

.envrc

@@ -3,8 +3,8 @@
# shellcheck shell=bash
if type -P lorri &>/dev/null; then
eval "$(lorri direnv)"
eval "$(lorri direnv --flake .)"
else
echo 'while direnv evaluated .envrc, could not find the command "lorri" [https://github.com/nix-community/lorri]'
use_nix
use flake
fi

@@ -1,24 +0,0 @@
name: deploy-infra
on:
workflow_dispatch: # allows manual triggering
push:
branches:
- main
jobs:
deploy:
runs-on: native
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up SSH key for age secrets and SSH
run: |
env
mkdir -p ~/.ssh
echo "${{ secrets.CD_SSH_KEY }}" > ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
- name: Deploy
run: nix-shell --run 'eval "$(ssh-agent -s)" && ssh-add ~/.ssh/id_ed25519 && SHELL=$(which bash) nixops4 apply -v default'

@@ -13,46 +13,17 @@ jobs:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix-build -A tests
- run: nix build .#checks.x86_64-linux.pre-commit -L
check-data-model:
check-website:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix-shell --run 'nix-unit ./deployment/data-model-test.nix'
check-mastodon:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.test-mastodon-service -L
- run: cd website && nix-build -A tests
- run: cd website && nix-build -A build
check-peertube:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.test-peertube-service -L
check-panel:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix-build -A tests.panel
check-deployment-basic:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-basic -L
check-deployment-cli:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-cli -L
check-deployment-panel:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-panel -L
- run: nix build .#checks.x86_64-linux.peertube -L

@@ -1,24 +0,0 @@
name: update-dependencies
on:
workflow_dispatch: # allows manual triggering
# FIXME: re-enable when manual run works
# schedule:
# - cron: '0 0 1 * *' # monthly
jobs:
lockfile:
runs-on: native
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Update pins
run: nix-shell --run "npins update"
- name: Create PR
uses: https://github.com/KiaraGrouwstra/gitea-create-pull-request@f9f80aa5134bc5c03c38f5aaa95053492885b397
with:
remote-instance-api-version: v1
token: "${{ secrets.DEPLOY_KEY }}"
branch: npins-update
commit-message: "npins: update sources"
title: "npins: update sources"

@@ -14,12 +14,6 @@ There already exist solutions for self-hosting, but they're not suitable for wha
The ones we're aware of require substantial technical knowledge and time commitment by operators, especially for scaling to thousands of users.
Not everyone has the expertise and time to run their own server.
## Interactions
To reach these goals, we aim to implement the following interactions between [actors](#actors) (depicted with rounded corners) and system components (see the [glossary](#glossary), depicted with rectangles).
![](https://git.fediversity.eu/Fediversity/meta/raw/branch/main/architecture-docs/interactions.svg)
## Actors
- Fediversity project team
@@ -63,11 +57,11 @@ To reach these goals, we aim to implement the following interactions between [ac
- [Fediverse](https://en.wikipedia.org/wiki/Fediverse)
A collection of social networking applications that can communicate with each other using a common protocol.
A collection of social networking services that can communicate with each other using a common protocol.
- Application
- Service
User-facing software (e.g. from Fediverse) run by the hosting provider for an operator.
A Fediverse application run by the hosting provider for an operator.
- Configuration
@@ -79,11 +73,11 @@ To reach these goals, we aim to implement the following interactions between [ac
Make a resource, such as a virtual machine, available for use.
> Example: We use [Proxmox](https://www.proxmox.com) to provision VMs for applications run by operators.
> Example: We use [Proxmox](https://www.proxmox.com) to provision VMs for services run by operators.
- Deploy
Put software, such as applications, onto computers.
Put software, such as services, onto computers.
The software includes technical configuration that links software components.
Most user-facing configuration remains untouched by the deployment process.
@@ -91,7 +85,7 @@ To reach these goals, we aim to implement the following interactions between [ac
- Migrate
Move service configurations and deployment state, including user data, from one hosting provider to another.
Move service configurations and user data to a different hosting provider.
- [NixOps4](https://github.com/nixops4/nixops4)
@@ -109,18 +103,6 @@ To reach these goals, we aim to implement the following interactions between [ac
> Example: We need a resource provider for obtaining deployment secrets from a database.
- Runtime backend
A type of digital environment one can run operating systems such as NixOS on, e.g. bare-metal, a hypervisor, or a container runtime.
- Runtime environment
The thing a deployment runs on, an interface against which the deployment is working. See runtime backend.
- Runtime config
Configuration logic specific to a runtime backend, e.g. how to deploy, how to access object storage.
## Development
All the code made for this project is freely licenced under [EUPL](https://en.m.wikipedia.org/wiki/European_Union_Public_Licence).
@@ -154,3 +136,6 @@ details as to what they are for. As an overview:
- [`services/`](./services) contains our effort to make Fediverse applications
work seamlessly together in our specific setting.
- [`website/`](./website) contains the framework and the content of [the
Fediversity website](https://fediversity.eu/)

@@ -1,85 +0,0 @@
{
system ? builtins.currentSystem,
sources ? import ./npins,
pkgs ? import sources.nixpkgs { inherit system; },
}:
let
inherit (sources)
nixpkgs
git-hooks
gitignore
;
inherit (pkgs) lib;
inherit (import sources.flake-inputs) import-flake;
inherit ((import-flake { src = ./.; }).inputs) nixops4;
panel = import ./panel { inherit sources system; };
pre-commit-check =
(import "${git-hooks}/nix" {
inherit nixpkgs system;
gitignore-nix-src = {
lib = import gitignore { inherit lib; };
};
}).run
{
src = ./.;
hooks =
let
## Add a directory here if pre-commit hooks shouldn't apply to it.
optout = [
"npins"
];
excludes = map (dir: "^${dir}/") optout;
addExcludes = lib.mapAttrs (_: c: c // { inherit excludes; });
in
addExcludes {
nixfmt-rfc-style.enable = true;
deadnix.enable = true;
trim-trailing-whitespace.enable = true;
shellcheck.enable = true;
};
};
in
{
# shell for testing TF directly
shell = pkgs.mkShellNoCC {
inherit (pre-commit-check) shellHook;
buildInputs = pre-commit-check.enabledPackages;
packages =
let
test-loop = pkgs.writeShellApplication {
name = "test-loop";
runtimeInputs = [
pkgs.watchexec
pkgs.nix-unit
];
text = ''
watchexec -w ${builtins.toString ./.} -- nix-unit ${builtins.toString ./deployment/data-model-test.nix} "$@"
'';
};
in
[
pkgs.npins
pkgs.nil
(pkgs.callPackage "${sources.agenix}/pkgs/agenix.nix" { })
pkgs.openssh
pkgs.httpie
pkgs.jq
pkgs.nix-unit
test-loop
nixops4.packages.${system}.default
];
};
tests = {
inherit pre-commit-check;
panel = panel.tests;
};
# re-export inputs so they can be overridden granularly
# (they can't be accessed from the outside any other way)
inherit
sources
system
pkgs
;
}

@@ -1,123 +1,6 @@
# Deployment
This directory contains work to generate a full Fediversity deployment from a minimal configuration.
This is different from [`../services/`](../services) that focuses on one machine, providing a polished and unified interface to different Fediverse services.
## Data model
The core piece of the project is the [Fediversity data model](./data-model.nix), which describes all entities and their interactions.
What can be done with it is exemplified in the [evaluation tests](./data-model-test.nix).
Run `test-loop` in the development environment when hacking on the data model or adding tests.
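nix-unit evaluation tests are plain attribute sets mapping `test`-prefixed names to `expr`/`expected` pairs; a minimal hypothetical case in the style of `data-model-test.nix` (the test name and values here are illustrative, not taken from the actual file):

```nix
{
  ## Each attribute whose name starts with `test` is one test case:
  ## nix-unit evaluates `expr` and compares it against `expected`.
  testTrivialArithmetic = {
    expr = 1 + 1;
    expected = 2;
  };
}
```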
## Checks
There are three levels of deployment checks: `basic`, `cli`, `panel`.
They can be found in subdirectories of [`check/`](./check).
They can be run as part of `nix flake check` or individually as:
``` console
$ nix build .#checks.<system>.deployment-<name> -vL
```
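For instance, instantiating the template above to run the CLI check on an `x86_64-linux` machine:

``` console
$ nix build .#checks.x86_64-linux.deployment-cli -vL
```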
Since `nixops4 apply` operates on a flake, the tests take this repository's flake as a template.
This is also why there are some dummy files that will be overwritten inside the test.
### Basic deployment check
The basic deployment check is here as a building block and sanity check.
It does not actually use any of the code in this directory but checks that our test strategy is sound and that basic NixOps4 functionalities are here.
It is a NixOS test featuring one deployer machine and two target machines.
The deployment simply adds `pkgs.hello` to one and `pkgs.cowsay` to the other.
It is heavily inspired by [a similar test in `nixops4-nixos`].
[a similar test in nixops4-nixos]: https://github.com/nixops4/nixops4-nixos/blob/main/test/default/nixosTest.nix
This test involves three nodes:
- `deployer` is the node that will perform the deployment using `nixops4 apply`.
Because the test runs in a sandboxed environment, `deployer` will not have access to the internet, and therefore it must already have all store paths needed for the target nodes.
- “target machines” are two eponymous nodes on which the packages `hello` and `cowsay` will be deployed.
They start with a minimal configuration.
``` mermaid
flowchart LR
deployer["deployer<br><font size='1'>has store paths<br>runs nixops4</font>"]
subgraph target_machines["target machines"]
direction TB
hello
cowsay
end
deployer -->|deploys| target_machines
```
### Service deployment check using `nixops4 apply`
This check omits the panel by running a direct invocation of NixOps4.
It deploys some services and checks that they are indeed on the target machines, then cleans them up and checks whether that works, too.
It builds upon the basic deployment check.
This test involves seven nodes:
- `deployer` is the node that will perform the deployment using `nixops4 apply`.
Because the test runs in a sandboxed environment, `deployer` will not have access to the internet, and therefore it must already have all store paths needed for the target nodes.
- “target machines” are four nodes — `garage`, `mastodon`, `peertube`, and `pixelfed` — on which the services will be deployed.
They start with a minimal configuration.
- `acme` is a node that runs [Pebble], a miniature ACME server to deliver the certificates that the services expect.
- [WIP] `client` is a node that runs a browser controlled by some Selenium scripts in order to check that the services are indeed running and are accessible.
[Pebble]: https://github.com/letsencrypt/pebble
``` mermaid
flowchart LR
classDef invisible fill:none,stroke:none
subgraph left [" "]
direction TB
deployer["deployer<br><font size='1'>has store paths<br>runs nixops4</font>"]
client["client<br><font size='1'>Selenium scripts</font>"]
end
subgraph middle [" "]
subgraph target_machines["target machines"]
direction TB
garage
mastodon
peertube
pixelfed
end
end
subgraph right [" "]
direction TB
acme["acme<br><font size='1'>runs Pebble</font>"]
end
left ~~~ middle ~~~ right
class left,middle,right invisible
deployer -->|deploys| target_machines
client -->|tests| mastodon
client -->|tests| peertube
client -->|tests| pixelfed
target_machines -->|get certs| acme
```
### Service deployment check from the FediPanel
This is a full deployment check running the [FediPanel](../panel) on the deployer machine, deploying some services through it and checking that they are indeed on the target machines, then cleaning them up and checking whether that works, too.
It builds upon the basic and CLI deployment checks, the only difference being that `deployer` runs NixOps4 only indirectly via the panel, and the `client` node is the one that triggers the deployment via a browser, the way a human would.
This repository contains work to generate a full Fediversity deployment from a
minimal configuration. This is different from [`../services/`](../services) that
focuses on one machine, providing a polished and unified interface to different
Fediverse services.

@@ -1,8 +0,0 @@
{
targetMachines = [
"hello"
"cowsay"
];
pathToRoot = ../../..;
pathFromRoot = ./.;
}

@@ -1,14 +0,0 @@
{
runNixOSTest,
inputs,
sources,
}:
runNixOSTest {
imports = [
../common/nixosTest.nix
./nixosTest.nix
];
_module.args = { inherit inputs sources; };
inherit (import ./constants.nix) targetMachines pathToRoot pathFromRoot;
}

@@ -1,36 +0,0 @@
{
inputs,
sources,
lib,
providers,
...
}:
let
inherit (import ./constants.nix) targetMachines pathToRoot pathFromRoot;
in
{
providers = {
inherit (inputs.nixops4.modules.nixops4Provider) local;
};
resources = lib.genAttrs targetMachines (nodeName: {
type = providers.local.exec;
imports = [
inputs.nixops4-nixos.modules.nixops4Resource.nixos
../common/targetResource.nix
];
_module.args = { inherit inputs sources; };
inherit nodeName pathToRoot pathFromRoot;
nixos.module =
{ pkgs, ... }:
{
environment.systemPackages = [ pkgs.${nodeName} ];
};
});
}

@@ -1,22 +0,0 @@
{
inputs = {
nixops4.follows = "nixops4-nixos/nixops4";
nixops4-nixos.url = "github:nixops4/nixops4-nixos";
};
outputs =
inputs:
import ./mkFlake.nix inputs (
{ inputs, sources, ... }:
{
imports = [
inputs.nixops4.modules.flake.default
];
nixops4Deployments.check-deployment-basic = {
imports = [ ./deployment/check/basic/deployment.nix ];
_module.args = { inherit inputs sources; };
};
}
);
}

@@ -1,48 +0,0 @@
{ inputs, lib, ... }:
{
_class = "nixosTest";
name = "deployment-basic";
sourceFileset = lib.fileset.unions [
./constants.nix
./deployment.nix
];
nodes.deployer =
{ pkgs, ... }:
{
environment.systemPackages = [
inputs.nixops4.packages.${pkgs.system}.default
];
# FIXME: sad times
system.extraDependencies = with pkgs; [
jq
jq.inputDerivation
];
system.extraDependenciesFromModule =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
hello
cowsay
];
};
};
extraTestScript = ''
with subtest("Check the status before deployment"):
hello.fail("hello 1>&2")
cowsay.fail("cowsay 1>&2")
with subtest("Run the deployment"):
deployer.succeed("nixops4 apply check-deployment-basic --show-trace --no-interactive 1>&2")
with subtest("Check the deployment"):
hello.succeed("hello 1>&2")
cowsay.succeed("cowsay hi 1>&2")
'';
}

@@ -1,11 +0,0 @@
{
targetMachines = [
"garage"
"mastodon"
"peertube"
"pixelfed"
];
pathToRoot = ../../..;
pathFromRoot = ./.;
enableAcme = true;
}

@@ -1,19 +0,0 @@
{
runNixOSTest,
inputs,
sources,
}:
runNixOSTest {
imports = [
../common/nixosTest.nix
./nixosTest.nix
];
_module.args = { inherit inputs sources; };
inherit (import ./constants.nix)
targetMachines
pathToRoot
pathFromRoot
enableAcme
;
}

@@ -1,59 +0,0 @@
{
inputs,
sources,
lib,
}:
let
inherit (builtins) fromJSON readFile listToAttrs;
inherit (import ./constants.nix)
targetMachines
pathToRoot
pathFromRoot
enableAcme
;
makeTargetResource = nodeName: {
imports = [ ../common/targetResource.nix ];
_module.args = { inherit inputs sources; };
inherit
nodeName
pathToRoot
pathFromRoot
enableAcme
;
};
## The deployment function - what we are here to test!
##
## TODO: Modularise `deployment/default.nix` to get rid of the nested
## function calls.
makeTestDeployment =
args:
(import ../..)
{
inherit lib;
inherit (inputs) nixops4 nixops4-nixos;
fediversity = import ../../../services/fediversity;
}
(listToAttrs (
map (nodeName: {
name = "${nodeName}ConfigurationResource";
value = makeTargetResource nodeName;
}) targetMachines
))
(fromJSON (readFile ../../configuration.sample.json) // args);
in
{
check-deployment-cli-nothing = makeTestDeployment { };
check-deployment-cli-mastodon-pixelfed = makeTestDeployment {
mastodon.enable = true;
pixelfed.enable = true;
};
check-deployment-cli-peertube = makeTestDeployment {
peertube.enable = true;
};
}

@@ -1,26 +0,0 @@
{
inputs = {
nixops4.follows = "nixops4-nixos/nixops4";
nixops4-nixos.url = "github:nixops4/nixops4-nixos";
};
outputs =
inputs:
import ./mkFlake.nix inputs (
{
inputs,
sources,
lib,
...
}:
{
imports = [
inputs.nixops4.modules.flake.default
];
nixops4Deployments = import ./deployment/check/cli/deployments.nix {
inherit inputs sources lib;
};
}
);
}

@@ -1,137 +0,0 @@
{
inputs,
hostPkgs,
lib,
...
}:
let
## Some places need a dummy file that will in fact never be used. We create
## it here.
dummyFile = hostPkgs.writeText "dummy" "";
in
{
_class = "nixosTest";
name = "deployment-cli";
sourceFileset = lib.fileset.unions [
./constants.nix
./deployments.nix
# REVIEW: I would like to be able to grab all of `/deployment` minus
# `/deployment/check`, but I can't because there is a bunch of other files
# in `/deployment`. Maybe we can think of a reorg making things more robust
# here? (comment also in panel test)
../../default.nix
../../options.nix
../../configuration.sample.json
../../../services/fediversity
];
nodes.deployer =
{ pkgs, ... }:
{
environment.systemPackages = [
inputs.nixops4.packages.${pkgs.system}.default
];
## FIXME: The following dependencies are necessary but I do not
## understand why they are not covered by the fake node.
system.extraDependencies = with pkgs; [
peertube
peertube.inputDerivation
gixy
gixy.inputDerivation
];
system.extraDependenciesFromModule = {
imports = [ ../../../services/fediversity ];
fediversity = {
domain = "fediversity.net"; # would write `dummy` but that would not type-check
garage.enable = true;
mastodon = {
enable = true;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
peertube = {
enable = true;
secretsFile = dummyFile;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
pixelfed = {
enable = true;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
temp.cores = 1;
temp.initialUser = {
username = "dummy";
displayName = "dummy";
email = "dummy";
passwordFile = dummyFile;
};
};
};
};
## NOTE: The target machines may need more RAM than the default to handle
## being deployed to, otherwise we get something like:
##
## pixelfed # [ 616.785499 ] sshd-session[1167]: Connection closed by 2001:db8:1::2 port 45004
## deployer # error: writing to file: No space left on device
## pixelfed # [ 616.788538 ] sshd-session[1151]: pam_unix(sshd:session): session closed for user port
## pixelfed # [ 616.793929 ] systemd-logind[719]: Session 4 logged out. Waiting for processes to exit.
## deployer # Error: Could not create resource
##
## These values have been trimmed down to the gigabyte.
nodes.mastodon.virtualisation.memorySize = 4 * 1024;
nodes.pixelfed.virtualisation.memorySize = 4 * 1024;
nodes.peertube.virtualisation.memorySize = 5 * 1024;
## FIXME: The tests for the presence of the services are very simple: we only
## check that there is a systemd service of the expected name on the
## machine. This proves at least that NixOps4 did something, and we cannot
## really do more for now because the services aren't actually working
## properly, in particular because of DNS issues. We should fix the services
## and check that they are working properly.
extraTestScript = ''
with subtest("Check the status of the services - there should be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with no services enabled"):
deployer.succeed("nixops4 apply check-deployment-cli-nothing --show-trace --no-interactive 1>&2")
with subtest("Check the status of the services - there should still be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with Mastodon and Pixelfed enabled"):
deployer.succeed("nixops4 apply check-deployment-cli-mastodon-pixelfed --show-trace --no-interactive 1>&2")
with subtest("Check the status of the services - expecting Garage, Mastodon and Pixelfed"):
garage.succeed("systemctl status garage.service")
mastodon.succeed("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.succeed("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with only Peertube enabled"):
deployer.succeed("nixops4 apply check-deployment-cli-peertube --show-trace --no-interactive 1>&2")
with subtest("Check the status of the services - expecting Garage and Peertube"):
garage.succeed("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.succeed("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
'';
}

@@ -1,106 +0,0 @@
{
inputs,
lib,
pkgs,
config,
sources,
...
}:
let
inherit (lib)
mkOption
mkForce
concatLists
types
;
in
{
_class = "nixos";
imports = [ ./sharedOptions.nix ];
options.system.extraDependenciesFromModule = mkOption {
type = types.deferredModule;
description = ''
Grab the derivations needed to build the given module and dump them in
system.extraDependencies. You want to put in this module a superset of
all the things that you will need on your target machines.
NOTE: This will work as long as the union of all these configurations does
not have conflicts that would prevent evaluation.
'';
default = { };
};
config = {
virtualisation = {
## NOTE: The deployer machine needs more RAM and disk than the
## default. These values have been trimmed down to the gigabyte.
## Memory use is expected to be dominated by the NixOS evaluation,
## which happens on the deployer.
memorySize = 4 * 1024;
diskSize = 4 * 1024;
cores = 2;
};
nix.settings = {
substituters = mkForce [ ];
hashed-mirrors = null;
connect-timeout = 1;
extra-experimental-features = "flakes";
};
system.extraDependencies =
[
inputs.nixops4
inputs.nixops4-nixos
inputs.nixpkgs
sources.flake-parts
sources.flake-inputs
sources.git-hooks
pkgs.stdenv
pkgs.stdenvNoCC
]
++ (
let
## We build a whole NixOS system that contains the module
## `system.extraDependenciesFromModule`, only to grab its
## configuration and the store paths needed to build it and
## dump them in `system.extraDependencies`.
machine =
(pkgs.nixos [
./targetNode.nix
config.system.extraDependenciesFromModule
{
nixpkgs.hostPlatform = "x86_64-linux";
_module.args = { inherit inputs sources; };
enableAcme = config.enableAcme;
acmeNodeIP = config.acmeNodeIP;
}
]).config;
in
[
machine.system.build.toplevel.inputDerivation
machine.system.build.etc.inputDerivation
machine.system.build.etcBasedir.inputDerivation
machine.system.build.etcMetadataImage.inputDerivation
machine.system.build.extraUtils.inputDerivation
machine.system.path.inputDerivation
machine.system.build.setEnvironment.inputDerivation
machine.system.build.vm.inputDerivation
machine.system.build.bootStage1.inputDerivation
machine.system.build.bootStage2.inputDerivation
]
++ concatLists (
lib.mapAttrsToList (
_k: v: if v ? source.inputDerivation then [ v.source.inputDerivation ] else [ ]
) machine.environment.etc
)
);
};
}

@@ -1,200 +0,0 @@
{
inputs,
lib,
config,
hostPkgs,
sources,
...
}:
let
inherit (builtins)
concatStringsSep
toJSON
;
inherit (lib)
types
fileset
mkOption
genAttrs
attrNames
optionalString
;
inherit (hostPkgs)
runCommandNoCC
writeText
system
;
forConcat = xs: f: concatStringsSep "\n" (map f xs);
## We will need to override some inputs with the empty flake, so we make one.
emptyFlake = runCommandNoCC "empty-flake" { } ''
mkdir $out
echo "{ outputs = { self }: {}; }" > $out/flake.nix
'';
in
{
_class = "nixosTest";
imports = [
./sharedOptions.nix
];
options = {
## FIXME: I wish I could just use `testScript` but with something like
## `mkOrder` to put this module's string before something else.
extraTestScript = mkOption { };
sourceFileset = mkOption {
## REVIEW: Upstream to nixpkgs?
type = types.mkOptionType {
name = "fileset";
description = "fileset";
descriptionClass = "noun";
check = (x: (builtins.tryEval (fileset.unions [ x ])).success);
merge = (_: defs: fileset.unions (map (x: x.value) defs));
};
description = ''
A fileset that will be copied to the deployer node in the current
working directory. This should contain all the files that are
necessary to run that particular test, such as the NixOS
modules necessary to evaluate a deployment.
'';
};
};
config = {
sourceFileset = fileset.unions [
# NOTE: not the flake itself; it will be overridden.
../../../mkFlake.nix
../../../flake.lock
../../../npins
./sharedOptions.nix
./targetNode.nix
./targetResource.nix
(config.pathToCwd + "/flake-under-test.nix")
];
acmeNodeIP = config.nodes.acme.networking.primaryIPAddress;
nodes =
{
deployer = {
imports = [ ./deployerNode.nix ];
_module.args = { inherit inputs sources; };
enableAcme = config.enableAcme;
acmeNodeIP = config.nodes.acme.networking.primaryIPAddress;
};
}
//
(
if config.enableAcme then
{
acme = {
## FIXME: This makes `nodes.acme` into a local resolver. Maybe this will
## break things once we play with DNS?
imports = [ "${inputs.nixpkgs}/nixos/tests/common/acme/server" ];
## We aren't testing ACME - we just want certificates.
systemd.services.pebble.environment.PEBBLE_VA_ALWAYS_VALID = "1";
};
}
else
{ }
)
//
genAttrs config.targetMachines (_: {
imports = [ ./targetNode.nix ];
_module.args = { inherit inputs sources; };
enableAcme = config.enableAcme;
acmeNodeIP = if config.enableAcme then config.nodes.acme.networking.primaryIPAddress else null;
});
testScript = ''
${forConcat (attrNames config.nodes) (n: ''
${n}.start()
'')}
${forConcat (attrNames config.nodes) (n: ''
${n}.wait_for_unit("multi-user.target")
'')}
## A subset of the repository that is necessary for this test. It will be
## copied inside the test. The smaller this set, the faster our CI, because we
## won't need to re-run when things change outside of it.
with subtest("Unpacking"):
deployer.succeed("cp -r --no-preserve=mode ${
fileset.toSource {
root = ../../..;
fileset = config.sourceFileset;
}
}/* .")
with subtest("Configure the network"):
${forConcat config.targetMachines (
tm:
let
targetNetworkJSON = writeText "target-network.json" (
toJSON config.nodes.${tm}.system.build.networkConfig
);
in
''
deployer.copy_from_host("${targetNetworkJSON}", "${config.pathFromRoot}/${tm}-network.json")
''
)}
with subtest("Configure the deployer key"):
deployer.succeed("""mkdir -p ~/.ssh && ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa""")
deployer_key = deployer.succeed("cat ~/.ssh/id_rsa.pub").strip()
${forConcat config.targetMachines (tm: ''
${tm}.succeed(f"mkdir -p /root/.ssh && echo '{deployer_key}' >> /root/.ssh/authorized_keys")
'')}
with subtest("Configure the target host key"):
${forConcat config.targetMachines (tm: ''
host_key = ${tm}.succeed("ssh-keyscan ${tm} | grep -v '^#' | cut -f 2- -d ' ' | head -n 1")
deployer.succeed(f"echo '{host_key}' > ${config.pathFromRoot}/${tm}_host_key.pub")
'')}
## NOTE: This is super slow. It could probably be optimised in Nix, for
## instance by allowing things to be grabbed directly from the host's store.
##
## NOTE: We use the repository as-is (cf `src` above), overriding only
## `flake.nix` by our `flake-under-test.nix`. We also override the flake
## lock file to use locally available inputs, as we cannot download them.
##
with subtest("Override the flake and its lock"):
deployer.succeed("cp ${config.pathFromRoot}/flake-under-test.nix flake.nix")
deployer.succeed("""
nix flake lock --extra-experimental-features 'flakes nix-command' \
--offline -v \
--override-input nixops4 ${inputs.nixops4.packages.${system}.flake-in-a-bottle} \
\
--override-input nixops4-nixos ${inputs.nixops4-nixos} \
--override-input nixops4-nixos/flake-parts ${inputs.nixops4-nixos.inputs.flake-parts} \
--override-input nixops4-nixos/flake-parts/nixpkgs-lib ${inputs.nixops4-nixos.inputs.flake-parts.inputs.nixpkgs-lib} \
--override-input nixops4-nixos/nixops4-nixos ${emptyFlake} \
--override-input nixops4-nixos/nixpkgs ${inputs.nixops4-nixos.inputs.nixpkgs} \
--override-input nixops4-nixos/nixops4 ${
inputs.nixops4-nixos.inputs.nixops4.packages.${system}.flake-in-a-bottle
} \
--override-input nixops4-nixos/git-hooks-nix ${emptyFlake} \
;
""")
${optionalString config.enableAcme ''
with subtest("Set up handmade DNS"):
deployer.succeed("echo '${config.nodes.acme.networking.primaryIPAddress}' > ${config.pathFromRoot}/acme_server_ip")
''}
${config.extraTestScript}
'';
};
}

@@ -1,68 +0,0 @@
/**
This file contains options shared by various components of the integration test, i.e. deployment resources, test nodes, target configurations, etc.
All these components are declared as modules, but are part of different evaluations, which is why the options in this file can't be shared "directly".
Instead, each component imports this module and the same values are set for each of them from a common call site.
Not all components will use all the options, which allows not setting all the values.
*/
{ config, lib, ... }:
let
inherit (lib) mkOption types;
in
# `config` not set and imported from multiple places: no fixed module class
{
options = {
targetMachines = mkOption {
type = with types; listOf str;
description = ''
Names of the nodes in the NixOS test that are target machines. This is
used by the infrastructure to extract their network configuration, among
other things, and re-import it in the deployment.
'';
};
pathToRoot = mkOption {
type = types.path;
description = ''
Path from the location of the working directory to the root of the
repository.
'';
};
pathFromRoot = mkOption {
type = types.path;
description = ''
Path from the root of the repository to the working directory.
'';
apply = x: lib.path.removePrefix config.pathToRoot x;
};
pathToCwd = mkOption {
type = types.path;
description = ''
Path to the current working directory. This is a shortcut for
pathToRoot/pathFromRoot.
'';
default = config.pathToRoot + "/${config.pathFromRoot}";
};
enableAcme = mkOption {
type = types.bool;
description = ''
Whether to enable ACME in the NixOS test. This will add an ACME server
to the node and connect all the target machines to it.
'';
default = false;
};
acmeNodeIP = mkOption {
type = types.str;
description = ''
The IP of the ACME node in the NixOS test. This option will be set
during the test to the correct value.
'';
};
};
}
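
The `apply` on `pathFromRoot` normalises whatever path the caller passes into a subpath relative to `pathToRoot`. A rough sketch of the semantics, assuming nixpkgs' `lib.path.removePrefix` and purely illustrative values:

```nix
# lib.path.removePrefix : Path -> Path -> String (a "./..." subpath)
let
  pathToRoot = /home/user/repo;            # hypothetical
  x = /home/user/repo/deployment/check;    # hypothetical
in
lib.path.removePrefix pathToRoot x
# => "./deployment/check"
```

Note that `apply` runs after type checking, so even though the option is declared as `types.path`, the value consumers see is the resulting subpath string.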


@ -1,67 +0,0 @@
{
inputs,
config,
lib,
modulesPath,
...
}:
let
testCerts = import "${inputs.nixpkgs}/nixos/tests/common/acme/server/snakeoil-certs.nix";
inherit (lib) mkIf mkMerge;
in
{
_class = "nixos";
imports = [
(modulesPath + "/profiles/qemu-guest.nix")
(modulesPath + "/../lib/testing/nixos-test-base.nix")
./sharedOptions.nix
];
config = mkMerge [
{
## Test framework disables switching by default. That might be OK by itself,
## but we also use this config for getting the dependencies in
## `deployer.system.extraDependencies`.
system.switch.enable = true;
nix = {
## Not used; save a large copy operation
channel.enable = false;
registry = lib.mkForce { };
};
services.openssh = {
enable = true;
settings.PermitRootLogin = "yes";
};
networking.firewall.allowedTCPPorts = [ 22 ];
## Test VMs don't have a bootloader by default.
boot.loader.grub.enable = false;
}
(mkIf config.enableAcme {
security.acme = {
acceptTerms = true;
defaults.email = "test@test.com";
defaults.server = "https://acme.test/dir";
};
security.pki.certificateFiles = [
## NOTE: This certificate is the one used by the Pebble HTTPS server.
## This is NOT the root CA of the Pebble server. We do add it here so
## that Pebble clients can talk to its API, but this will not allow
## those machines to verify generated certificates.
testCerts.ca.cert
];
## FIXME: It is a bit sad that all this logistics is necessary; look into a
## better DNS setup.
networking.extraHosts = "${config.acmeNodeIP} acme.test";
})
];
}


@ -1,51 +0,0 @@
{
inputs,
lib,
config,
sources,
...
}:
let
inherit (builtins) readFile;
inherit (lib) trim mkOption types;
in
{
_class = "nixops4Resource";
imports = [ ./sharedOptions.nix ];
options = {
nodeName = mkOption {
type = types.str;
description = ''
The name of the node in the NixOS test;
needed for recovering the node configuration to prepare its deployment.
'';
};
};
config = {
ssh = {
host = config.nodeName;
hostPublicKey = readFile (config.pathToCwd + "/${config.nodeName}_host_key.pub");
};
nixpkgs = inputs.nixpkgs;
nixos.module = {
imports = [
./targetNode.nix
(lib.modules.importJSON (config.pathToCwd + "/${config.nodeName}-network.json"))
];
_module.args = { inherit inputs sources; };
enableAcme = config.enableAcme;
acmeNodeIP = trim (readFile (config.pathToCwd + "/acme_server_ip"));
nixpkgs.hostPlatform = "x86_64-linux";
};
};
}


@ -1,11 +0,0 @@
{
targetMachines = [
"garage"
"mastodon"
"peertube"
"pixelfed"
];
pathToRoot = ../../..;
pathFromRoot = ./.;
enableAcme = true;
}


@ -1,19 +0,0 @@
{
runNixOSTest,
inputs,
sources,
}:
runNixOSTest {
imports = [
../common/nixosTest.nix
./nixosTest.nix
];
_module.args = { inherit inputs sources; };
inherit (import ./constants.nix)
targetMachines
pathToRoot
pathFromRoot
enableAcme
;
}


@ -1,58 +0,0 @@
{
inputs,
sources,
lib,
}:
let
inherit (builtins) fromJSON listToAttrs;
inherit (import ./constants.nix)
targetMachines
pathToRoot
pathFromRoot
enableAcme
;
makeTargetResource = nodeName: {
imports = [ ../common/targetResource.nix ];
_module.args = { inherit inputs sources; };
inherit
nodeName
pathToRoot
pathFromRoot
enableAcme
;
};
## The deployment function - what we are here to test!
##
## TODO: Modularise `deployment/default.nix` to get rid of the nested
## function calls.
makeTestDeployment =
args:
(import ../..)
{
inherit lib;
inherit (inputs) nixops4 nixops4-nixos;
fediversity = import ../../../services/fediversity;
}
(listToAttrs (
map (nodeName: {
name = "${nodeName}ConfigurationResource";
value = makeTargetResource nodeName;
}) targetMachines
))
args;
in
makeTestDeployment (
fromJSON (
let
env = builtins.getEnv "DEPLOYMENT";
in
if env == "" then
throw "The DEPLOYMENT environment variable needs to be set. You do not want to use this deployment unless in the `deployment-panel` NixOS test."
else
env
)
)
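
`builtins.getEnv` combined with `fromJSON` means the `DEPLOYMENT` variable must carry a JSON document matching the panel's deployment options (note that `builtins.getEnv` only sees the variable in impure evaluation). A hypothetical minimal payload, written as the Nix value whose `builtins.toJSON` would be exported:

```nix
{
  enable = true;
  domain = "fediversity.net";
  initialUser = {
    displayName = "Testy McTestface";
    username = "test";
    email = "test@test.com";
    password = "testtest";
  };
  mastodon = { enable = true; };
  peertube = null;                # nullable: falls back to a default downstream
  pixelfed = { enable = true; };
}
```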


@ -1,26 +0,0 @@
{
inputs = {
nixops4.follows = "nixops4-nixos/nixops4";
nixops4-nixos.url = "github:nixops4/nixops4-nixos";
};
outputs =
inputs:
import ./mkFlake.nix inputs (
{
inputs,
sources,
lib,
...
}:
{
imports = [
inputs.nixops4.modules.flake.default
];
nixops4Deployments.check-deployment-panel = import ./deployment/check/panel/deployment.nix {
inherit inputs sources lib;
};
}
);
}


@ -1,377 +0,0 @@
{
inputs,
lib,
hostPkgs,
config,
...
}:
let
inherit (lib)
getExe
;
## Some places need a dummy file that will in fact never be used. We create
## it here.
dummyFile = hostPkgs.writeText "dummy" "dummy";
panelPort = 8000;
panelUser = "test";
panelEmail = "test@test.com";
panelPassword = "ouiprdaaa43"; # panel's manager complains if too close to username or email
fediUser = "test";
fediEmail = "test@test.com";
fediPassword = "testtest";
fediName = "Testy McTestface";
toPythonBool = b: if b then "True" else "False";
interactWithPanel =
{
baseUri,
enableMastodon,
enablePeertube,
enablePixelfed,
}:
hostPkgs.writers.writePython3Bin "interact-with-panel"
{
libraries = with hostPkgs.python3Packages; [ selenium ];
flakeIgnore = [
"E302" # expected 2 blank lines, found 0
"E303" # too many blank lines
"E305" # expected 2 blank lines after end of function or class
"E501" # line too long (> 79 characters)
"E731" # do not assign lambda expression, use a def
];
}
''
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
print("Create and configure driver...")
options = Options()
options.add_argument("--headless")
options.binary_location = "${getExe hostPkgs.firefox-unwrapped}"
service = webdriver.FirefoxService(executable_path="${getExe hostPkgs.geckodriver}")
driver = webdriver.Firefox(options=options, service=service)
driver.set_window_size(1280, 960)
driver.implicitly_wait(360)
driver.command_executor.set_timeout(3600)
print("Open login page...")
driver.get("${baseUri}/login/")
print("Enter username...")
driver.find_element(By.XPATH, "//input[@name = 'username']").send_keys("${panelUser}")
print("Enter password...")
driver.find_element(By.XPATH, "//input[@name = 'password']").send_keys("${panelPassword}")
print("Click Login button...")
driver.find_element(By.XPATH, "//button[normalize-space() = 'Login']").click()
print("Open configuration page...")
driver.get("${baseUri}/configuration/")
# Helpers to actually set and not add or switch input values.
def input_set(elt, keys):
elt.clear()
elt.send_keys(keys)
def checkbox_set(elt, new_value):
if new_value != elt.is_selected():
elt.click()
print("Enable Fediversity...")
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'enable']"), True)
print("Fill in initialUser info...")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.username']"), "${fediUser}")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.password']"), "${fediPassword}")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.email']"), "${fediEmail}")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.displayName']"), "${fediName}")
print("Enable services...")
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'mastodon.enable']"), ${toPythonBool enableMastodon})
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'peertube.enable']"), ${toPythonBool enablePeertube})
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'pixelfed.enable']"), ${toPythonBool enablePixelfed})
print("Start deployment...")
driver.find_element(By.XPATH, "//button[@id = 'deploy-button']").click()
print("Wait for deployment status to show up...")
get_deployment_result = lambda d: d.find_element(By.XPATH, "//div[@id = 'deployment-result']//p")
WebDriverWait(driver, timeout=3660, poll_frequency=10).until(get_deployment_result)
deployment_result = get_deployment_result(driver).get_attribute('innerHTML')
print("Quit...")
driver.quit()
match deployment_result:
case 'Deployment Succeeded':
print("Deployment has succeeded; exiting normally")
exit(0)
case 'Deployment Failed':
print("Deployment has failed; exiting with return code `1`")
exit(1)
case _:
print(f"Unexpected deployment result: {deployment_result}; exiting with return code `2`")
exit(2)
'';
in
{
_class = "nixosTest";
name = "deployment-panel";
sourceFileset = lib.fileset.unions [
./constants.nix
./deployment.nix
# REVIEW: I would like to be able to grab all of `/deployment` minus
# `/deployment/check`, but I can't because there is a bunch of other files
# in `/deployment`. Maybe we can think of a reorg making things more robust
# here? (comment also in CLI test)
../../default.nix
../../options.nix
../../../services/fediversity
];
## The panel's module sets `nixpkgs.overlays` which clashes with
## `pkgsReadOnly`. We disable it here.
node.pkgsReadOnly = false;
nodes.deployer =
{ pkgs, ... }:
{
imports = [
(import ../../../panel { }).module
];
## FIXME: This should be in the common stuff.
security.acme = {
acceptTerms = true;
defaults.email = "test@test.com";
defaults.server = "https://acme.test/dir";
};
security.pki.certificateFiles = [
(import "${inputs.nixpkgs}/nixos/tests/common/acme/server/snakeoil-certs.nix").ca.cert
];
networking.extraHosts = "${config.acmeNodeIP} acme.test";
services.panel = {
enable = true;
production = true;
domain = "deployer";
secrets = {
SECRET_KEY = dummyFile;
};
port = panelPort;
deployment = {
flake = "/run/fedipanel/flake";
name = "check-deployment-panel";
};
};
environment.systemPackages = [ pkgs.expect ];
## FIXME: The following dependencies are necessary but I do not
## understand why they are not covered by the fake node.
system.extraDependencies = with pkgs; [
peertube
peertube.inputDerivation
gixy # a configuration checker for nginx
gixy.inputDerivation
];
system.extraDependenciesFromModule = {
imports = [ ../../../services/fediversity ];
fediversity = {
domain = "fediversity.net"; # would write `dummy` but that would not typecheck
garage.enable = true;
mastodon = {
enable = true;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
peertube = {
enable = true;
secretsFile = dummyFile;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
pixelfed = {
enable = true;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
temp.cores = 1;
temp.initialUser = {
username = "dummy";
displayName = "dummy";
email = "dummy";
passwordFile = dummyFile;
};
};
};
};
nodes.client =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
httpie
dnsutils # for `dig`
openssl
cacert
wget
python3
python3Packages.selenium
firefox-unwrapped
geckodriver
];
security.pki.certificateFiles = [
config.nodes.acme.test-support.acme.caCert
];
networking.extraHosts = "${config.acmeNodeIP} acme.test";
};
## NOTE: The target machines may need more RAM than the default to handle
## being deployed to, otherwise we get something like:
##
## pixelfed # [ 616.785499 ] sshd-session[1167]: Connection closed by 2001:db8:1::2 port 45004
## deployer # error: writing to file: No space left on device
## pixelfed # [ 616.788538 ] sshd-session[1151]: pam_unix(sshd:session): session closed for user port
## pixelfed # [ 616.793929 ] systemd-logind[719]: Session 4 logged out. Waiting for processes to exit.
## deployer # Error: Could not create resource
##
## These values have been trimmed down to the gigabyte.
nodes.mastodon.virtualisation.memorySize = 4 * 1024;
nodes.pixelfed.virtualisation.memorySize = 4 * 1024;
nodes.peertube.virtualisation.memorySize = 5 * 1024;
## FIXME: The tests for the presence of the services are very simple: we only
## check that there is a systemd service with the expected name on the
## machine. This proves at least that NixOps4 did something, and we cannot
## really do more for now because the services aren't actually working
## properly, in particular because of DNS issues. We should fix the services
## and check that they are working properly.
extraTestScript = ''
## TODO: We want a nicer way to control where the FediPanel consumes its
## flake, which can default to the store but could also be somewhere else if
## someone wanted to change the code of the flake.
##
with subtest("Give the panel access to the flake"):
deployer.succeed("mkdir /run/fedipanel /run/fedipanel/flake >&2")
deployer.succeed("cp -R . /run/fedipanel/flake >&2")
deployer.succeed("chown -R panel:panel /run/fedipanel >&2")
## TODO: I want a programmatic way to provide an SSH key to the panel (and
## therefore NixOps4). This could happen in the Python code, though maybe it
## is fair that that code picks up on the user's key? It could also be done
## in the Nix packaging.
##
with subtest("Set up the panel's SSH keys"):
deployer.succeed("mkdir /home/panel/.ssh >&2")
deployer.succeed("cp -R /root/.ssh/* /home/panel/.ssh >&2")
deployer.succeed("chown -R panel:panel /home/panel/.ssh >&2")
deployer.succeed("chmod 600 /home/panel/.ssh/* >&2")
## TODO: This is a hack to accept the root CA used by Pebble on the client
## machine. Pebble randomizes everything, so the only way to get it is to
## call the /roots/0 endpoint at runtime, leaving not much margin for a nice
## Nixy way of adding the certificate. There is no way around it, as this is
## by design in Pebble; this shows that Pebble was not the appropriate tool
## for our use, and that nixpkgs does not in fact provide an easy way to
## generate _usable_ certificates in NixOS tests. I suggest we merge this,
## and track the task to set it up in a cleaner way. I would tackle this in
## a subsequent PR, and hopefully even contribute this BetterWay(tm) to
## nixpkgs. — Niols
##
with subtest("Set up ACME root CA on client"):
client.succeed("""
cd /etc/ssl/certs
curl -o pebble-root-ca.pem https://acme.test:15000/roots/0
curl -o pebble-intermediate-ca.pem https://acme.test:15000/intermediates/0
{ cat ca-bundle.crt
cat pebble-root-ca.pem
cat pebble-intermediate-ca.pem
} > new-ca-bundle.crt
rm ca-bundle.crt ca-certificates.crt
mv new-ca-bundle.crt ca-bundle.crt
ln -s ca-bundle.crt ca-certificates.crt
""")
## TODO: I would hope for a more declarative way to add users. This should
## be handled by the Nix packaging of the FediPanel. — Niols
##
with subtest("Create panel user"):
deployer.succeed("""
expect -c '
spawn manage createsuperuser --username ${panelUser} --email ${panelEmail}
expect "Password: "; send "${panelPassword}\\n";
expect "Password (again): "; send "${panelPassword}\\n"
interact
' >&2
""")
with subtest("Check the status of the services - there should be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with no services enabled"):
client.succeed("${
interactWithPanel {
baseUri = "https://deployer";
enableMastodon = false;
enablePeertube = false;
enablePixelfed = false;
}
}/bin/interact-with-panel >&2")
with subtest("Check the status of the services - there should still be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with Mastodon and Pixelfed enabled"):
client.succeed("${
interactWithPanel {
baseUri = "https://deployer";
enableMastodon = true;
enablePeertube = false;
enablePixelfed = true;
}
}/bin/interact-with-panel >&2")
with subtest("Check the status of the services - expecting Garage, Mastodon and Pixelfed"):
garage.succeed("systemctl status garage.service")
mastodon.succeed("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.succeed("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with only Peertube enabled"):
client.succeed("${
interactWithPanel {
baseUri = "https://deployer";
enableMastodon = false;
enablePeertube = true;
enablePixelfed = false;
}
}/bin/interact-with-panel >&2")
with subtest("Check the status of the services - expecting Garage and Peertube"):
garage.succeed("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.succeed("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
'';
}


@ -1,70 +0,0 @@
let
inherit (import ../default.nix { }) pkgs inputs;
inherit (pkgs) lib;
inherit (lib) mkOption;
eval =
module:
(lib.evalModules {
specialArgs = {
inherit inputs;
};
modules = [
module
./data-model.nix
];
}).config;
in
{
_class = "nix-unit";
test-eval = {
expr =
let
fediversity = eval (
{ config, ... }:
{
config = {
applications.hello =
{ ... }:
{
description = ''Command-line tool that will print "Hello, world!" on the terminal'';
module =
{ ... }:
{
options = {
enable = lib.mkEnableOption "Hello in the shell";
};
};
implementation =
cfg:
lib.optionalAttrs cfg.enable {
dummy.login-shell.packages.hello = pkgs.hello;
};
};
};
options = {
example-configuration = mkOption {
type = config.configuration;
readOnly = true;
default = {
enable = true;
applications.hello.enable = true;
};
};
};
}
);
in
{
inherit (fediversity)
example-configuration
;
};
expected = {
example-configuration = {
enable = true;
applications.hello.enable = true;
};
};
};
}
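
The `test-eval` case above follows nix-unit's convention: each attribute carrying an `expr`/`expected` pair is one test, and the runner compares the two values after evaluation. A minimal standalone case, reduced to the bare shape:

```nix
{
  _class = "nix-unit";
  test-addition = {
    expr = 1 + 1;
    expected = 2;
  };
}
```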


@ -1,89 +0,0 @@
{
lib,
config,
...
}:
let
inherit (lib) mkOption types;
inherit (lib.types)
attrsOf
attrTag
deferredModuleWith
submodule
optionType
functionTo
;
functionType = import ./function.nix;
application-resources = {
options.resources = mkOption {
# TODO: maybe transpose, and group the resources by type instead
type = attrsOf (
attrTag (lib.mapAttrs (_name: resource: mkOption { type = resource.request; }) config.resources)
);
};
};
in
{
_class = "nixops4Deployment";
options = {
applications = mkOption {
description = "Collection of Fediversity applications";
type = attrsOf (
submodule (application: {
_class = "fediversity-application";
options = {
description = mkOption {
description = "Description to be shown in the application overview";
type = types.str;
};
module = mkOption {
description = "Operator-facing configuration options for the application";
type = deferredModuleWith { staticModules = [ { _class = "fediversity-application-config"; } ]; };
};
implementation = mkOption {
description = "Mapping of application configuration to deployment resources, a description of what an application needs to run";
type = application.config.config-mapping.function-type;
};
resources = mkOption {
description = "Compute resources required by an application";
type = functionTo application.config.config-mapping.output-type;
readOnly = true;
default = input: (application.config.implementation input).output;
};
config-mapping = mkOption {
description = "Function type for the mapping from application configuration to required resources";
type = submodule functionType;
readOnly = true;
default = {
input-type = application.config.module;
output-type = application-resources;
};
};
};
})
);
};
configuration = mkOption {
description = "Configuration type declaring options to be set by operators";
type = optionType;
readOnly = true;
default = submodule {
options = {
enable = lib.mkEnableOption {
description = "your Fediversity configuration";
};
applications = lib.mapAttrs (
_name: application:
mkOption {
description = application.description;
type = submodule application.module;
default = { };
}
) config.applications;
};
};
};
};
}


@ -33,116 +33,82 @@
## information coming from the FediPanel.
##
## FIXME: keep the interface in lock step with the definitions in the FediPanel
panelConfigNullable:
panelConfig:
let
inherit (lib) mkIf;
inherit (lib) mkMerge mkIf;
## The convertor from module options to JSON schema does not generate proper
## JSON schema types, forcing us to use nullable fields for default values.
## However, working with those fields in the deployment code is annoying (and
## unusual for Nix programmers), so we sanitize the input here and add back
## the default value by hand.
nonNull = x: v: if x == null then v else x;
panelConfig = {
domain = nonNull panelConfigNullable.domain "fediversity.net";
initialUser = nonNull panelConfigNullable.initialUser {
displayName = "Testy McTestface";
username = "test";
password = "testtest";
email = "test@test.com";
};
mastodon = nonNull panelConfigNullable.mastodon { enable = false; };
peertube = nonNull panelConfigNullable.peertube { enable = false; };
pixelfed = nonNull panelConfigNullable.pixelfed { enable = false; };
};
in
## Regular arguments of a NixOps4 deployment module.
{ config, providers, ... }:
{ providers, ... }:
let
cfg = config.deployment;
in
{
_class = "nixops4Deployment";
providers = { inherit (nixops4.modules.nixops4Provider) local; };
options = {
deployment = lib.mkOption {
description = ''
Configuration to be deployed
'';
# XXX(@fricklerhandwerk):
# misusing this will produce obscure errors that will be truncated by NixOps4
type = lib.types.submodule ./options.nix;
default = panelConfig;
};
};
config = {
providers = { inherit (nixops4.modules.nixops4Provider) local; };
resources =
let
## NOTE: All of these secrets are publicly available in this source file
## and will end up in the Nix store. We don't care as they are only ever
## used for testing anyway.
##
## FIXME: Generate and store in NixOps4's state.
mastodonS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GK3515373e4c851ebaad366558";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34";
};
peertubeS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GK1f9feea9960f6f95ff404c9b";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "7295c4201966a02c2c3d25b5cea4a5ff782966a2415e3a196f91924631191395";
};
pixelfedS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GKb5615457d44214411e673b7b";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "5be6799a88ca9b9d813d1a806b64f15efa49482dbe15339ddfaf7f19cf434987";
};
makeConfigurationResource = resourceModule: config: {
type = providers.local.exec;
imports = [
nixops4-nixos.modules.nixops4Resource.nixos
resourceModule
{
## NOTE: With NixOps4, there are several levels and all of them live
## in the NixOS module system:
##
## 1. Each NixOps4 deployment is a module.
## 2. Each NixOps4 resource is a module. This very comment is
## inside an attrset imported as a module in a resource.
## 3. Each NixOps4 'configuration' resource contains an attribute
## 'nixos.module', itself a NixOS configuration module.
nixos.module =
{ ... }:
{
imports = [
config
fediversity
];
};
}
];
resources =
let
## NOTE: All of these secrets are publicly available in this source file
## and will end up in the Nix store. We don't care as they are only ever
## used for testing anyway.
##
## FIXME: Generate and store in NixOps4's state.
mastodonS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GK3515373e4c851ebaad366558";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34";
};
peertubeS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GK1f9feea9960f6f95ff404c9b";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "7295c4201966a02c2c3d25b5cea4a5ff782966a2415e3a196f91924631191395";
};
pixelfedS3KeyConfig =
{ pkgs, ... }:
{
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GKb5615457d44214411e673b7b";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "5be6799a88ca9b9d813d1a806b64f15efa49482dbe15339ddfaf7f19cf434987";
};
in
makeConfigurationResource = resourceModule: config: {
type = providers.local.exec;
imports = [
nixops4-nixos.modules.nixops4Resource.nixos
resourceModule
{
{
## NOTE: With NixOps4, there are several levels and all of them live
## in the NixOS module system:
##
## 1. Each NixOps4 deployment is a module.
## 2. Each NixOps4 resource is a module. This very comment is
## inside an attrset imported as a module in a resource.
## 3. Each NixOps4 'configuration' resource contains an attribute
## 'nixos.module', itself a NixOS configuration module.
nixos.module =
{ ... }:
{
imports = [
config
fediversity
];
};
}
];
};
in
mkMerge [
(mkIf (panelConfig.mastodon.enable || panelConfig.peertube.enable || panelConfig.pixelfed.enable) {
garage-configuration = makeConfigurationResource garageConfigurationResource (
{ pkgs, ... }:
mkIf (cfg.mastodon.enable || cfg.peertube.enable || cfg.pixelfed.enable) {
{
fediversity = {
inherit (cfg) domain;
inherit (panelConfig) domain;
garage.enable = true;
pixelfed = pixelfedS3KeyConfig { inherit pkgs; };
mastodon = mastodonS3KeyConfig { inherit pkgs; };
@ -150,17 +116,19 @@ in
};
}
);
})
(mkIf panelConfig.mastodon.enable {
mastodon-configuration = makeConfigurationResource mastodonConfigurationResource (
{ pkgs, ... }:
mkIf cfg.mastodon.enable {
{
fediversity = {
inherit (cfg) domain;
inherit (panelConfig) domain;
temp.initialUser = {
inherit (cfg.initialUser) username email displayName;
inherit (panelConfig.initialUser) username email displayName;
# FIXME: disgusting, but nvm, this is going to be replaced by
# proper central authentication at some point
passwordFile = pkgs.writeText "password" cfg.initialUser.password;
passwordFile = pkgs.writeText "password" panelConfig.initialUser.password;
};
mastodon = mastodonS3KeyConfig { inherit pkgs; } // {
@ -171,17 +139,19 @@ in
};
}
);
})
(mkIf panelConfig.peertube.enable {
peertube-configuration = makeConfigurationResource peertubeConfigurationResource (
{ pkgs, ... }:
mkIf cfg.peertube.enable {
{
fediversity = {
inherit (cfg) domain;
inherit (panelConfig) domain;
temp.initialUser = {
inherit (cfg.initialUser) username email displayName;
inherit (panelConfig.initialUser) username email displayName;
# FIXME: disgusting, but nvm, this is going to be replaced by
# proper central authentication at some point
passwordFile = pkgs.writeText "password" cfg.initialUser.password;
passwordFile = pkgs.writeText "password" panelConfig.initialUser.password;
};
peertube = peertubeS3KeyConfig { inherit pkgs; } // {
@ -194,17 +164,19 @@ in
};
}
);
})
(mkIf panelConfig.pixelfed.enable {
pixelfed-configuration = makeConfigurationResource pixelfedConfigurationResource (
{ pkgs, ... }:
mkIf cfg.pixelfed.enable {
{
fediversity = {
inherit (cfg) domain;
inherit (panelConfig) domain;
temp.initialUser = {
inherit (cfg.initialUser) username email displayName;
inherit (panelConfig.initialUser) username email displayName;
# FIXME: disgusting, but nvm, this is going to be replaced by
# proper central authentication at some point
passwordFile = pkgs.writeText "password" cfg.initialUser.password;
passwordFile = pkgs.writeText "password" panelConfig.initialUser.password;
};
pixelfed = pixelfedS3KeyConfig { inherit pkgs; } // {
@ -213,6 +185,6 @@ in
};
}
);
};
};
})
];
}
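
The refactoring in this hunk wraps each resource in `mkMerge`/`mkIf`, so a resource only exists when the corresponding service is enabled, rather than being defined unconditionally with a `mkIf` buried inside its configuration. Reduced to a schematic sketch (`cfg` and the resource helpers are placeholders for the names used above):

```nix
resources = lib.mkMerge [
  # Garage is shared storage: present as soon as any consumer is enabled.
  (lib.mkIf (cfg.mastodon.enable || cfg.peertube.enable || cfg.pixelfed.enable) {
    garage-configuration = makeConfigurationResource garageConfigurationResource { };
  })
  # Per-service resources appear only when that service is enabled.
  (lib.mkIf cfg.mastodon.enable {
    mastodon-configuration = makeConfigurationResource mastodonConfigurationResource { };
  })
];
```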


@ -1,26 +0,0 @@
{ inputs, sources, ... }:
{
_class = "flake";
perSystem =
{ pkgs, ... }:
{
checks = {
deployment-basic = import ./check/basic {
inherit (pkgs.testers) runNixOSTest;
inherit inputs sources;
};
deployment-cli = import ./check/cli {
inherit (pkgs.testers) runNixOSTest;
inherit inputs sources;
};
deployment-panel = import ./check/panel {
inherit (pkgs.testers) runNixOSTest;
inherit inputs sources;
};
};
};
}


@ -1,37 +0,0 @@
/**
Modular function type
*/
{ config, lib, ... }:
let
inherit (lib) mkOption types;
inherit (types)
deferredModule
submodule
functionTo
optionType
;
in
{
options = {
input-type = mkOption {
type = deferredModule;
};
output-type = mkOption {
type = deferredModule;
};
function-type = mkOption {
type = optionType;
readOnly = true;
default = functionTo (submodule {
options = {
input = mkOption {
type = submodule config.input-type;
};
output = mkOption {
type = submodule config.output-type;
};
};
});
};
};
}
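
`function-type` packages a plain Nix function as an option type whose result is checked as a submodule with `input` and `output` slots. A sketch of how a consumer might wire it up (the option names on the input and output sides are hypothetical):

```nix
# Declaring the two sides of the function:
config-mapping = {
  input-type = {
    options.enable = lib.mkEnableOption "the application";
  };
  output-type = {
    options.resources = lib.mkOption {
      type = lib.types.attrsOf lib.types.raw;
    };
  };
};
# A value of config-mapping.function-type is then a function whose result
# is type-checked against output-type when its `output` attribute is read:
implementation = cfg: {
  output.resources = lib.optionalAttrs cfg.enable { greeter = "hello"; };
};
```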


@ -1,104 +0,0 @@
/**
Deployment options as to be presented in the front end.
These are converted to JSON schema in order to generate front-end forms etc.
For this to work, options must not have types `functionTo` or `package`, and must not access `config` for their default values.
The options are written in a cumbersome way because the JSON schema converter can't evaluate a submodule option's default value; those defaults must therefore all be set to `null`.
This could be fixed by making the converter aware of [`$defs`], but that would likely amount to half a rewrite.
[`$defs`]: https://json-schema.org/understanding-json-schema/structuring#defs
*/
{
lib,
...
}:
let
inherit (lib) types mkOption;
in
{
_class = "nixops4Deployment";
options = {
enable = lib.mkEnableOption "Fediversity configuration";
domain = mkOption {
type =
with types;
enum [
"fediversity.net"
];
description = ''
Apex domain under which the services will be deployed.
'';
default = "fediversity.net";
};
pixelfed = mkOption {
description = ''
Configuration for the Pixelfed service
'';
type =
with types;
nullOr (submodule {
options = {
enable = lib.mkEnableOption "Pixelfed";
};
});
default = null;
};
peertube = mkOption {
description = ''
Configuration for the PeerTube service
'';
type =
with types;
nullOr (submodule {
options = {
enable = lib.mkEnableOption "PeerTube";
};
});
default = null;
};
mastodon = mkOption {
description = ''
Configuration for the Mastodon service
'';
type =
with types;
nullOr (submodule {
options = {
enable = lib.mkEnableOption "Mastodon";
};
});
default = null;
};
initialUser = mkOption {
description = ''
Some services require an initial user to access them.
This option sets the credentials for such an initial user.
'';
type =
with types;
nullOr (submodule {
options = {
displayName = mkOption {
type = types.str;
description = "Display name of the user";
};
username = mkOption {
type = types.str;
description = "Username for login";
};
email = mkOption {
type = types.str;
description = "User's email address";
};
password = mkOption {
type = types.str;
description = "Password for login";
};
};
});
default = null;
};
};
}

flake.lock generated

@ -1,5 +1,26 @@
{
"nodes": {
"agenix": {
"inputs": {
"darwin": "darwin",
"home-manager": "home-manager",
"nixpkgs": "nixpkgs",
"systems": "systems"
},
"locked": {
"lastModified": 1736955230,
"narHash": "sha256-uenf8fv2eG5bKM8C/UvFaiJMZ4IpUFaQxk9OH5t/1gA=",
"owner": "ryantm",
"repo": "agenix",
"rev": "e600439ec4c273cf11e06fe4d9d906fb98fa097c",
"type": "github"
},
"original": {
"owner": "ryantm",
"repo": "agenix",
"type": "github"
}
},
"crane": {
"flake": false,
"locked": {
@ -17,7 +38,88 @@
"type": "github"
}
},
"crane_2": {
"flake": false,
"locked": {
"lastModified": 1727316705,
"narHash": "sha256-/mumx8AQ5xFuCJqxCIOFCHTVlxHkMT21idpbgbm/TIE=",
"owner": "ipetkov",
"repo": "crane",
"rev": "5b03654ce046b5167e7b0bccbd8244cb56c16f0e",
"type": "github"
},
"original": {
"owner": "ipetkov",
"ref": "v0.19.0",
"repo": "crane",
"type": "github"
}
},
"darwin": {
"inputs": {
"nixpkgs": [
"agenix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1700795494,
"narHash": "sha256-gzGLZSiOhf155FW7262kdHo2YDeugp3VuIFb4/GGng0=",
"owner": "lnl7",
"repo": "nix-darwin",
"rev": "4b9b83d5a92e8c1fbfd8eb27eda375908c11ec4d",
"type": "github"
},
"original": {
"owner": "lnl7",
"ref": "master",
"repo": "nix-darwin",
"type": "github"
}
},
"disko": {
"inputs": {
"nixpkgs": "nixpkgs_2"
},
"locked": {
"lastModified": 1740485968,
"narHash": "sha256-WK+PZHbfDjLyveXAxpnrfagiFgZWaTJglewBWniTn2Y=",
"owner": "nix-community",
"repo": "disko",
"rev": "19c1140419c4f1cdf88ad4c1cfb6605597628940",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "disko",
"type": "github"
}
},
"dream2nix": {
"inputs": {
"nixpkgs": [
"nixops4",
"nix-cargo-integration",
"nixpkgs"
],
"purescript-overlay": "purescript-overlay",
"pyproject-nix": "pyproject-nix"
},
"locked": {
"lastModified": 1735160684,
"narHash": "sha256-n5CwhmqKxifuD4Sq4WuRP/h5LO6f23cGnSAuJemnd/4=",
"owner": "nix-community",
"repo": "dream2nix",
"rev": "8ce6284ff58208ed8961681276f82c2f8f978ef4",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "dream2nix",
"type": "github"
}
},
"dream2nix_2": {
"inputs": {
"nixpkgs": [
"nixops4-nixos",
@@ -25,8 +127,8 @@
"nix-cargo-integration",
"nixpkgs"
],
"purescript-overlay": "purescript-overlay",
"pyproject-nix": "pyproject-nix"
"purescript-overlay": "purescript-overlay_2",
"pyproject-nix": "pyproject-nix_2"
},
"locked": {
"lastModified": 1735160684,
@@ -90,6 +192,54 @@
"type": "github"
}
},
"flake-compat_4": {
"flake": false,
"locked": {
"lastModified": 1696426674,
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-compat_5": {
"flake": false,
"locked": {
"lastModified": 1733328505,
"narHash": "sha256-NeCCThCEP3eCl2l/+27kNNK7QrwZB1IJCrXfrbv5oqU=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "ff81ac966bb2cae68946d5ed5fc4994f96d0ffec",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-compat_6": {
"flake": false,
"locked": {
"lastModified": 1696426674,
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-parts": {
"inputs": {
"nixpkgs-lib": "nixpkgs-lib"
@@ -127,6 +277,64 @@
}
},
"flake-parts_3": {
"inputs": {
"nixpkgs-lib": [
"nixops4",
"nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1733312601,
"narHash": "sha256-4pDvzqnegAfRkPwO3wmwBhVi/Sye1mzps0zHWYnP88c=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "205b12d8b7cd4802fbcb8e8ef6a0f1408781a4f9",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-parts_4": {
"inputs": {
"nixpkgs-lib": "nixpkgs-lib_3"
},
"locked": {
"lastModified": 1738453229,
"narHash": "sha256-7H9XgNiGLKN1G1CgRh0vUL4AheZSYzPm+zmZ7vxbJdo=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "32ea77a06711b758da0ad9bd6a844c5740a87abd",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-parts_5": {
"inputs": {
"nixpkgs-lib": "nixpkgs-lib_4"
},
"locked": {
"lastModified": 1738453229,
"narHash": "sha256-7H9XgNiGLKN1G1CgRh0vUL4AheZSYzPm+zmZ7vxbJdo=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "32ea77a06711b758da0ad9bd6a844c5740a87abd",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-parts_6": {
"inputs": {
"nixpkgs-lib": [
"nixops4-nixos",
@@ -151,7 +359,7 @@
},
"flake-utils": {
"inputs": {
"systems": "systems"
"systems": "systems_2"
},
"locked": {
"lastModified": 1710146030,
@@ -167,11 +375,29 @@
"type": "github"
}
},
"git-hooks-nix": {
"flake-utils_2": {
"inputs": {
"systems": "systems_3"
},
"locked": {
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"git-hooks": {
"inputs": {
"flake-compat": "flake-compat",
"gitignore": "gitignore",
"nixpkgs": "nixpkgs"
"nixpkgs": "nixpkgs_3"
},
"locked": {
"lastModified": 1737465171,
@@ -187,7 +413,62 @@
"type": "github"
}
},
"git-hooks-nix": {
"inputs": {
"flake-compat": [
"nixops4",
"nix"
],
"gitignore": [
"nixops4",
"nix"
],
"nixpkgs": [
"nixops4",
"nix",
"nixpkgs"
],
"nixpkgs-stable": [
"nixops4",
"nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1734279981,
"narHash": "sha256-NdaCraHPp8iYMWzdXAt5Nv6sA3MUzlCiGiR586TCwo0=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "aa9f40c906904ebd83da78e7f328cd8aeaeae785",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"type": "github"
}
},
"git-hooks-nix_2": {
"inputs": {
"flake-compat": "flake-compat_4",
"gitignore": "gitignore_2",
"nixpkgs": "nixpkgs_5"
},
"locked": {
"lastModified": 1737465171,
"narHash": "sha256-R10v2hoJRLq8jcL4syVFag7nIGE7m13qO48wRIukWNg=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "9364dc02281ce2d37a1f55b6e51f7c0f65a75f17",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"type": "github"
}
},
"git-hooks-nix_3": {
"inputs": {
"flake-compat": [
"nixops4-nixos",
@@ -227,6 +508,27 @@
}
},
"gitignore": {
"inputs": {
"nixpkgs": [
"git-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"gitignore_2": {
"inputs": {
"nixpkgs": [
"nixops4-nixos",
@@ -248,6 +550,27 @@
"type": "github"
}
},
"home-manager": {
"inputs": {
"nixpkgs": [
"agenix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1703113217,
"narHash": "sha256-7ulcXOk63TIT2lVDSExj7XzFx09LpdSAPtvgtM7yQPE=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "3bfaacf46133c037bb356193bd2f1765d9dc82c1",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "home-manager",
"type": "github"
}
},
"mk-naked-shell": {
"flake": false,
"locked": {
@@ -264,14 +587,29 @@
"type": "github"
}
},
"mk-naked-shell_2": {
"flake": false,
"locked": {
"lastModified": 1681286841,
"narHash": "sha256-3XlJrwlR0nBiREnuogoa5i1b4+w/XPe0z8bbrJASw0g=",
"owner": "yusdacra",
"repo": "mk-naked-shell",
"rev": "7612f828dd6f22b7fb332cc69440e839d7ffe6bd",
"type": "github"
},
"original": {
"owner": "yusdacra",
"repo": "mk-naked-shell",
"type": "github"
}
},
"nix": {
"inputs": {
"flake-compat": "flake-compat_2",
"flake-parts": "flake-parts_3",
"git-hooks-nix": "git-hooks-nix_2",
"git-hooks-nix": "git-hooks-nix",
"nixfmt": "nixfmt",
"nixpkgs": [
"nixops4-nixos",
"nixops4",
"nixpkgs"
],
@@ -299,7 +637,6 @@
"dream2nix": "dream2nix",
"mk-naked-shell": "mk-naked-shell",
"nixpkgs": [
"nixops4-nixos",
"nixops4",
"nixpkgs"
],
@@ -321,6 +658,63 @@
"type": "github"
}
},
"nix-cargo-integration_2": {
"inputs": {
"crane": "crane_2",
"dream2nix": "dream2nix_2",
"mk-naked-shell": "mk-naked-shell_2",
"nixpkgs": [
"nixops4-nixos",
"nixops4",
"nixpkgs"
],
"parts": "parts_2",
"rust-overlay": "rust-overlay_2",
"treefmt": "treefmt_2"
},
"locked": {
"lastModified": 1738390452,
"narHash": "sha256-o8kg4q1V2xV9ZlszAnizXJK2c+fbhAeLbGoTUa2u1Bw=",
"owner": "yusdacra",
"repo": "nix-cargo-integration",
"rev": "718d577808e3e31f445b6564d58b755294e72f42",
"type": "github"
},
"original": {
"owner": "yusdacra",
"repo": "nix-cargo-integration",
"type": "github"
}
},
"nix_2": {
"inputs": {
"flake-compat": "flake-compat_5",
"flake-parts": "flake-parts_6",
"git-hooks-nix": "git-hooks-nix_3",
"nixfmt": "nixfmt_2",
"nixpkgs": [
"nixops4-nixos",
"nixops4",
"nixpkgs"
],
"nixpkgs-23-11": "nixpkgs-23-11_2",
"nixpkgs-regression": "nixpkgs-regression_2"
},
"locked": {
"lastModified": 1738382221,
"narHash": "sha256-3+qJBts5RTAtjCExo6bkqrttL+skpYZzPOVEzPSwVtc=",
"owner": "NixOS",
"repo": "nix",
"rev": "d949c8de7c7e84bb7537dc772609686c37033a3b",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "master",
"repo": "nix",
"type": "github"
}
},
"nixfmt": {
"inputs": {
"flake-utils": "flake-utils"
@@ -339,12 +733,30 @@
"type": "github"
}
},
"nixfmt_2": {
"inputs": {
"flake-utils": "flake-utils_2"
},
"locked": {
"lastModified": 1736283758,
"narHash": "sha256-hrKhUp2V2fk/dvzTTHFqvtOg000G1e+jyIam+D4XqhA=",
"owner": "NixOS",
"repo": "nixfmt",
"rev": "8d4bd690c247004d90d8554f0b746b1231fe2436",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixfmt",
"type": "github"
}
},
"nixops4": {
"inputs": {
"flake-parts": "flake-parts_2",
"nix": "nix",
"nix-cargo-integration": "nix-cargo-integration",
"nixpkgs": "nixpkgs_2",
"nixpkgs": "nixpkgs_4",
"nixpkgs-old": "nixpkgs-old"
},
"locked": {
@@ -363,9 +775,9 @@
},
"nixops4-nixos": {
"inputs": {
"flake-parts": "flake-parts",
"git-hooks-nix": "git-hooks-nix",
"nixops4": "nixops4",
"flake-parts": "flake-parts_4",
"git-hooks-nix": "git-hooks-nix_2",
"nixops4": "nixops4_2",
"nixops4-nixos": [
"nixops4-nixos"
],
@@ -389,18 +801,40 @@
"type": "github"
}
},
"nixops4_2": {
"inputs": {
"flake-parts": "flake-parts_5",
"nix": "nix_2",
"nix-cargo-integration": "nix-cargo-integration_2",
"nixpkgs": "nixpkgs_6",
"nixpkgs-old": "nixpkgs-old_2"
},
"locked": {
"lastModified": 1739444468,
"narHash": "sha256-brg21neEI7pUnSRksOx9IE8/Kcr8OmEg4NIxL0sSy+U=",
"owner": "nixops4",
"repo": "nixops4",
"rev": "fb601ee79d8c9e3e7aca98dd2334ea91da3ea7e0",
"type": "github"
},
"original": {
"owner": "nixops4",
"repo": "nixops4",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1730768919,
"narHash": "sha256-8AKquNnnSaJRXZxc5YmF/WfmxiHX6MMZZasRP6RRQkE=",
"lastModified": 1703013332,
"narHash": "sha256-+tFNwMvlXLbJZXiMHqYq77z/RfmpfpiI3yjL6o/Zo9M=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a04d33c0c3f1a59a2c1cb0c6e34cd24500e5a1dc",
"rev": "54aac082a4d9bb5bbc5c4e899603abfb76a3f6d6",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
@@ -421,6 +855,22 @@
"type": "github"
}
},
"nixpkgs-23-11_2": {
"locked": {
"lastModified": 1717159533,
"narHash": "sha256-oamiKNfr2MS6yH64rUn99mIZjc45nGJlj9eGth/3Xuw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a62e6edd6d5e1fa0329b8653c801147986f8d446",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a62e6edd6d5e1fa0329b8653c801147986f8d446",
"type": "github"
}
},
"nixpkgs-lib": {
"locked": {
"lastModified": 1738452942,
@@ -445,6 +895,30 @@
"url": "https://github.com/NixOS/nixpkgs/archive/072a6db25e947df2f31aab9eccd0ab75d5b2da11.tar.gz"
}
},
"nixpkgs-lib_3": {
"locked": {
"lastModified": 1738452942,
"narHash": "sha256-vJzFZGaCpnmo7I6i416HaBLpC+hvcURh/BQwROcGIp8=",
"type": "tarball",
"url": "https://github.com/NixOS/nixpkgs/archive/072a6db25e947df2f31aab9eccd0ab75d5b2da11.tar.gz"
},
"original": {
"type": "tarball",
"url": "https://github.com/NixOS/nixpkgs/archive/072a6db25e947df2f31aab9eccd0ab75d5b2da11.tar.gz"
}
},
"nixpkgs-lib_4": {
"locked": {
"lastModified": 1738452942,
"narHash": "sha256-vJzFZGaCpnmo7I6i416HaBLpC+hvcURh/BQwROcGIp8=",
"type": "tarball",
"url": "https://github.com/NixOS/nixpkgs/archive/072a6db25e947df2f31aab9eccd0ab75d5b2da11.tar.gz"
},
"original": {
"type": "tarball",
"url": "https://github.com/NixOS/nixpkgs/archive/072a6db25e947df2f31aab9eccd0ab75d5b2da11.tar.gz"
}
},
"nixpkgs-old": {
"locked": {
"lastModified": 1735563628,
@@ -461,6 +935,22 @@
"type": "github"
}
},
"nixpkgs-old_2": {
"locked": {
"lastModified": 1735563628,
"narHash": "sha256-OnSAY7XDSx7CtDoqNh8jwVwh4xNL/2HaJxGjryLWzX8=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b134951a4c9f3c995fd7be05f3243f8ecd65d798",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-24.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-regression": {
"locked": {
"lastModified": 1643052045,
@@ -477,7 +967,55 @@
"type": "github"
}
},
"nixpkgs-regression_2": {
"locked": {
"lastModified": 1643052045,
"narHash": "sha256-uGJ0VXIhWKGXxkeNnq4TvV3CIOkUJ3PAoLZ3HMzNVMw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1738136902,
"narHash": "sha256-pUvLijVGARw4u793APze3j6mU1Zwdtz7hGkGGkD87qw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "9a5db3142ce450045840cc8d832b13b8a2018e0c",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_3": {
"locked": {
"lastModified": 1730768919,
"narHash": "sha256-8AKquNnnSaJRXZxc5YmF/WfmxiHX6MMZZasRP6RRQkE=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a04d33c0c3f1a59a2c1cb0c6e34cd24500e5a1dc",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_4": {
"locked": {
"lastModified": 1738410390,
"narHash": "sha256-xvTo0Aw0+veek7hvEVLzErmJyQkEcRk6PSR4zsRQFEc=",
@@ -493,7 +1031,77 @@
"type": "github"
}
},
"nixpkgs_5": {
"locked": {
"lastModified": 1730768919,
"narHash": "sha256-8AKquNnnSaJRXZxc5YmF/WfmxiHX6MMZZasRP6RRQkE=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a04d33c0c3f1a59a2c1cb0c6e34cd24500e5a1dc",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_6": {
"locked": {
"lastModified": 1738410390,
"narHash": "sha256-xvTo0Aw0+veek7hvEVLzErmJyQkEcRk6PSR4zsRQFEc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "3a228057f5b619feb3186e986dbe76278d707b6e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_7": {
"locked": {
"lastModified": 1740463929,
"narHash": "sha256-4Xhu/3aUdCKeLfdteEHMegx5ooKQvwPHNkOgNCXQrvc=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "5d7db4668d7a0c6cc5fc8cf6ef33b008b2b1ed8b",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-24.11",
"repo": "nixpkgs",
"type": "github"
}
},
"parts": {
"inputs": {
"nixpkgs-lib": [
"nixops4",
"nix-cargo-integration",
"nixpkgs"
]
},
"locked": {
"lastModified": 1736143030,
"narHash": "sha256-+hu54pAoLDEZT9pjHlqL9DNzWz0NbUn8NEAHP7PQPzU=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "b905f6fc23a9051a6e1b741e1438dbfc0634c6de",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"parts_2": {
"inputs": {
"nixpkgs-lib": [
"nixops4-nixos",
@@ -520,7 +1128,6 @@
"inputs": {
"flake-compat": "flake-compat_3",
"nixpkgs": [
"nixops4-nixos",
"nixops4",
"nix-cargo-integration",
"dream2nix",
@@ -542,6 +1149,32 @@
"type": "github"
}
},
"purescript-overlay_2": {
"inputs": {
"flake-compat": "flake-compat_6",
"nixpkgs": [
"nixops4-nixos",
"nixops4",
"nix-cargo-integration",
"dream2nix",
"nixpkgs"
],
"slimlock": "slimlock_2"
},
"locked": {
"lastModified": 1728546539,
"narHash": "sha256-Sws7w0tlnjD+Bjck1nv29NjC5DbL6nH5auL9Ex9Iz2A=",
"owner": "thomashoneyman",
"repo": "purescript-overlay",
"rev": "4ad4c15d07bd899d7346b331f377606631eb0ee4",
"type": "github"
},
"original": {
"owner": "thomashoneyman",
"repo": "purescript-overlay",
"type": "github"
}
},
"pyproject-nix": {
"flake": false,
"locked": {
@@ -559,16 +1192,57 @@
"type": "github"
}
},
"pyproject-nix_2": {
"flake": false,
"locked": {
"lastModified": 1702448246,
"narHash": "sha256-hFg5s/hoJFv7tDpiGvEvXP0UfFvFEDgTdyHIjDVHu1I=",
"owner": "davhau",
"repo": "pyproject.nix",
"rev": "5a06a2697b228c04dd2f35659b4b659ca74f7aeb",
"type": "github"
},
"original": {
"owner": "davhau",
"ref": "dream2nix",
"repo": "pyproject.nix",
"type": "github"
}
},
"root": {
"inputs": {
"nixops4": [
"nixops4-nixos",
"nixops4"
],
"nixops4-nixos": "nixops4-nixos"
"agenix": "agenix",
"disko": "disko",
"flake-parts": "flake-parts",
"git-hooks": "git-hooks",
"nixops4": "nixops4",
"nixops4-nixos": "nixops4-nixos",
"nixpkgs": "nixpkgs_7"
}
},
"rust-overlay": {
"inputs": {
"nixpkgs": [
"nixops4",
"nix-cargo-integration",
"nixpkgs"
]
},
"locked": {
"lastModified": 1738376888,
"narHash": "sha256-S6ErHxkSm0iA7ZMsjjDaASWxbELYcdfv8BhOkkj1rHw=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "83284068670d5ae4a43641c4afb150f3446be70d",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
},
"rust-overlay_2": {
"inputs": {
"nixpkgs": [
"nixops4-nixos",
@@ -592,6 +1266,30 @@
}
},
"slimlock": {
"inputs": {
"nixpkgs": [
"nixops4",
"nix-cargo-integration",
"dream2nix",
"purescript-overlay",
"nixpkgs"
]
},
"locked": {
"lastModified": 1688756706,
"narHash": "sha256-xzkkMv3neJJJ89zo3o2ojp7nFeaZc2G0fYwNXNJRFlo=",
"owner": "thomashoneyman",
"repo": "slimlock",
"rev": "cf72723f59e2340d24881fd7bf61cb113b4c407c",
"type": "github"
},
"original": {
"owner": "thomashoneyman",
"repo": "slimlock",
"type": "github"
}
},
"slimlock_2": {
"inputs": {
"nixpkgs": [
"nixops4-nixos",
@@ -631,7 +1329,59 @@
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_3": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"treefmt": {
"inputs": {
"nixpkgs": [
"nixops4",
"nix-cargo-integration",
"nixpkgs"
]
},
"locked": {
"lastModified": 1738070913,
"narHash": "sha256-j6jC12vCFsTGDmY2u1H12lMr62fnclNjuCtAdF1a4Nk=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "bebf27d00f7d10ba75332a0541ac43676985dea3",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "treefmt-nix",
"type": "github"
}
},
"treefmt_2": {
"inputs": {
"nixpkgs": [
"nixops4-nixos",


@@ -1,52 +1,69 @@
{
inputs = {
nixops4.follows = "nixops4-nixos/nixops4";
nixpkgs.url = "github:nixos/nixpkgs/nixos-24.11";
flake-parts.url = "github:hercules-ci/flake-parts";
git-hooks.url = "github:cachix/git-hooks.nix";
agenix.url = "github:ryantm/agenix";
disko.url = "github:nix-community/disko";
nixops4.url = "github:nixops4/nixops4";
nixops4-nixos.url = "github:nixops4/nixops4-nixos";
};
outputs =
inputs:
import ./mkFlake.nix inputs (
{ inputs, sources, ... }:
{
imports = [
"${sources.git-hooks}/flake-module.nix"
inputs.nixops4.modules.flake.default
inputs@{ flake-parts, ... }:
flake-parts.lib.mkFlake { inherit inputs; } {
systems = [
"x86_64-linux"
"aarch64-linux"
"x86_64-darwin"
"aarch64-darwin"
];
./deployment/flake-part.nix
./infra/flake-part.nix
./keys/flake-part.nix
./secrets/flake-part.nix
./services/tests/flake-part.nix
];
imports = [
inputs.git-hooks.flakeModule
inputs.nixops4.modules.flake.default
perSystem =
{
pkgs,
lib,
system,
...
}:
{
checks = {
panel = (import ./. { inherit sources system; }).tests.panel.basic;
./infra/flake-part.nix
./services/flake-part.nix
];
perSystem =
{
config,
pkgs,
lib,
inputs',
...
}:
{
formatter = pkgs.nixfmt-rfc-style;
pre-commit.settings.hooks =
let
## Add a directory here if pre-commit hooks shouldn't apply to it.
optout = [ "npins" ];
excludes = map (dir: "^${dir}/") optout;
addExcludes = lib.mapAttrs (_: c: c // { inherit excludes; });
in
addExcludes {
nixfmt-rfc-style.enable = true;
deadnix.enable = true;
trim-trailing-whitespace.enable = true;
shellcheck.enable = true;
};
formatter = pkgs.nixfmt-rfc-style;
pre-commit.settings.hooks =
let
## Add a directory here if pre-commit hooks shouldn't apply to it.
optout = [ "npins" ];
excludes = map (dir: "^${dir}/") optout;
addExcludes = lib.mapAttrs (_: c: c // { inherit excludes; });
in
addExcludes {
nixfmt-rfc-style.enable = true;
deadnix.enable = true;
trim-trailing-whitespace.enable = true;
shellcheck.enable = true;
};
devShells.default = pkgs.mkShell {
packages = [
pkgs.nil
inputs'.agenix.packages.default
inputs'.nixops4.packages.default
pkgs.httpie
pkgs.jq
];
shellHook = config.pre-commit.installationScript;
};
}
);
};
};
}


@@ -1,19 +1,20 @@
# Infra
This directory contains the definition of [the VMs](../machines/machines.md) that host our
This directory contains the definition of [the VMs](machines.md) that host our
infrastructure.
## Provisioning VMs with an initial configuration
> NOTE[Niols]: This is still very manual and clunky. Two things will happen:
> 1. In the near future, I will improve the provisioning script to make this a bit less clunky.
> 2. In the far future, NixOps4 will be able to communicate with Proxmox directly and everything will become much cleaner.
NOTE[Niols]: This is very manual and clunky. Two things will happen. In the near
future, I will improve the provisioning script to make this a bit less clunky.
In the far future, NixOps4 will be able to communicate with Proxmox directly and
everything will become much cleaner.
1. Choose names for your VMs. It is recommended to choose `fediXXX`, with `XXX`
above 100. For instance, `fedi117`.
2. Add a basic configuration for the machine. These typically go in
`machines/dev/<name>/default.nix`. You can look at other `fediXXX` VMs to
`infra/machines/<name>/default.nix`. You can look at other `fediXXX` VMs to
find inspiration. You probably do not need a `nixos.module` option at this
point.
@@ -24,7 +25,8 @@ infrastructure.
Those files need to exist during provisioning, but their content matters only
when updating the machines' configuration.
> FIXME: Remove this step by making the provisioning script not fail when the public key does not exist yet.
FIXME: Remove this step by making the provisioning script not fail when the
public key does not exist yet.
3. Run the provisioning script:
```
@@ -42,11 +44,11 @@ infrastructure.
ssh fedi117.abundos.eu 'sudo cat /etc/ssh/ssh_host_ed25519_key.pub' > keys/systems/fedi117.pub
```
> FIXME: Make the provisioning script do that for us.
FIXME: Make the provisioning script do that for us.
7. Regenerate the list of machines:
```
sh machines/machines.md.sh
sh infra/machines.md.sh
```
Commit it with the machine's configuration, public key, etc.
@@ -54,7 +56,7 @@ infrastructure.
just enough for it to boot and be reachable. Go on to the next section to
update the machine and put an actual configuration.
> FIXME: Figure out why the full configuration isn't on the machine at this
FIXME: Figure out why the full configuration isn't on the machine at this
point and fix it.
## Updating existing VM configurations

infra/architecture.pdf Normal file

Binary file not shown.


@@ -5,9 +5,8 @@ let
in
{
_class = "nixos";
imports = [
./hardware.nix
./networking.nix
./users.nix
];
@@ -23,9 +22,4 @@ in
nix.extraOptions = ''
experimental-features = nix-command flakes
'';
boot.loader = {
systemd-boot.enable = true;
efi.canTouchEfiVariables = true;
};
}


@@ -1,16 +1,20 @@
{ sources, ... }:
{
_class = "nixos";
{ modulesPath, ... }:
imports = [
"${sources.nixpkgs}/nixos/modules/profiles/qemu-guest.nix"
];
{
imports = [ (modulesPath + "/profiles/qemu-guest.nix") ];
boot = {
loader = {
systemd-boot.enable = true;
efi.canTouchEfiVariables = true;
};
initrd = {
availableKernelModules = [
"ata_piix"
"uhci_hcd"
"virtio_pci"
"virtio_scsi"
"sd_mod"
"sr_mod"
];


@@ -1,64 +1,63 @@
{ config, lib, ... }:
let
inherit (lib) mkDefault mkIf mkMerge;
inherit (lib) mkDefault;
in
{
_class = "nixos";
config = {
services.openssh = {
enable = true;
settings.PasswordAuthentication = false;
};
networking = mkMerge [
{
hostName = config.fediversityVm.name;
domain = config.fediversityVm.domain;
networking = {
hostName = config.fediversityVm.name;
domain = config.fediversityVm.domain;
## REVIEW: Do we actually need that, considering that we have static IPs?
useDHCP = mkDefault true;
## REVIEW: Do we actually need that, considering that we have static IPs?
useDHCP = mkDefault true;
## Disable the default firewall and use nftables instead, with a custom
## Procolix-made ruleset.
firewall.enable = false;
nftables = {
enable = true;
rulesetFile = ./nftables-ruleset.nft;
interfaces = {
eth0 = {
ipv4 = {
addresses = [
{
inherit (config.fediversityVm.ipv4) address prefixLength;
}
];
};
ipv6 = {
addresses = [
{
inherit (config.fediversityVm.ipv6) address prefixLength;
}
];
};
};
}
};
## IPv4
(mkIf config.fediversityVm.ipv4.enable {
interfaces.${config.fediversityVm.ipv4.interface}.ipv4.addresses = [
{ inherit (config.fediversityVm.ipv4) address prefixLength; }
];
defaultGateway = {
address = config.fediversityVm.ipv4.gateway;
interface = config.fediversityVm.ipv4.interface;
};
nameservers = [
"95.215.185.6"
"95.215.185.7"
];
})
defaultGateway = {
address = config.fediversityVm.ipv4.gateway;
interface = "eth0";
};
defaultGateway6 = {
address = config.fediversityVm.ipv6.gateway;
interface = "eth0";
};
## IPv6
(mkIf config.fediversityVm.ipv6.enable {
interfaces.${config.fediversityVm.ipv6.interface}.ipv6.addresses = [
{ inherit (config.fediversityVm.ipv6) address prefixLength; }
];
defaultGateway6 = {
address = config.fediversityVm.ipv6.gateway;
interface = config.fediversityVm.ipv6.interface;
};
nameservers = [
"2a00:51c0::5fd7:b906"
"2a00:51c0::5fd7:b907"
];
})
];
nameservers = [
"95.215.185.6"
"95.215.185.7"
"2a00:51c0::5fd7:b906"
"2a00:51c0::5fd7:b907"
];
firewall.enable = false;
nftables = {
enable = true;
rulesetFile = ./nftables-ruleset.nft;
};
};
};
}


@@ -1,13 +1,5 @@
{
config,
...
}:
{
_class = "nixos";
users.users = {
root.openssh.authorizedKeys.keys = config.users.users.procolix.openssh.authorizedKeys.keys;
procolix = {
isNormalUser = true;
extraGroups = [ "wheel" ];
@@ -41,14 +33,6 @@
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHTIqF4CAylSxKPiSo5JOPuocn0y2z38wOSsQ1MUaZ2"
];
};
Lois = {
isNormalUser = true;
extraGroups = [ "wheel" ];
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKVQ7yYXr4ZguGWYHZ7v2L3kPmYjaFo46PTgAEviW5D8 lois@mouse"
];
};
};
security.sudo.wheelNeedsPassword = false;


@@ -6,8 +6,6 @@ let
in
{
# `config` not set and imported from multiple places: no fixed module class
options.fediversityVm = {
##########################################################################
@@ -91,17 +89,6 @@
};
ipv4 = {
enable = mkOption {
default = true;
};
interface = mkOption {
description = ''
The interface that carries the machine's IPv4 network.
'';
default = "eth0";
};
address = mkOption {
description = ''
The IP address of the machine, version 4. It will be injected as a
@@ -127,17 +114,6 @@
};
ipv6 = {
enable = mkOption {
default = true;
};
interface = mkOption {
description = ''
The interface that carries the machine's IPv6 network.
'';
default = "eth0";
};
address = mkOption {
description = ''
The IP address of the machine, version 6. It will be injected as a


@@ -2,9 +2,6 @@
inputs,
lib,
config,
sources,
keys,
secrets,
...
}:
@@ -13,10 +10,12 @@ let
inherit (lib.attrsets) concatMapAttrs optionalAttrs;
inherit (lib.strings) removeSuffix;
secretsPrefix = ../../secrets;
secrets = import (secretsPrefix + "/secrets.nix");
keys = import ../../keys;
in
{
_class = "nixops4Resource";
imports = [ ./options.nix ];
fediversityVm.hostPublicKey = mkDefault keys.systems.${config.fediversityVm.name};
@@ -26,15 +25,15 @@ in
hostPublicKey = config.fediversityVm.hostPublicKey;
};
inherit (inputs) nixpkgs;
nixpkgs = inputs.nixpkgs;
## The configuration of the machine. We strive to keep in this file only the
## options that really need to be injected from the resource. Everything else
## should go into the `./nixos` subdirectory.
nixos.module = {
imports = [
"${sources.agenix}/modules/age.nix"
"${sources.disko}/module.nix"
inputs.agenix.nixosModules.default
inputs.disko.nixosModules.default
./options.nix
./nixos
];
@@ -43,24 +42,18 @@ in
## configuration.
fediversityVm = config.fediversityVm;
## Read all the secrets, filter the ones that are supposed to be readable with
## public key, and create a mapping from `<name>.file` to the absolute path of
## the secret's file.
## Read all the secrets, filter the ones that are supposed to be readable
## with this host's public key, and add them correctly to the configuration
## as `age.secrets.<name>.file`.
age.secrets = concatMapAttrs (
name: secret:
optionalAttrs (elem config.fediversityVm.hostPublicKey secret.publicKeys) {
${removeSuffix ".age" name}.file = secrets.rootPath + "/${name}";
}
) secrets.mapping;
optionalAttrs (elem config.fediversityVm.hostPublicKey secret.publicKeys) ({
${removeSuffix ".age" name}.file = secretsPrefix + "/${name}";
})
) secrets;
## FIXME: Remove direct root authentication once the NixOps4 NixOS provider
## supports users with password-less sudo.
users.users.root.openssh.authorizedKeys.keys = attrValues keys.contributors ++ [
# allow our panel vm access to the test machines
keys.panel
# allow continuous deployment access
keys.cd
];
users.users.root.openssh.authorizedKeys.keys = attrValues keys.contributors;
};
}

View file

@@ -1,9 +1,7 @@
{
self,
inputs,
lib,
sources,
keys,
secrets,
...
}:
@@ -23,43 +21,11 @@ let
makeResourceModule =
{ vmName, isTestVm }:
{
# TODO(@fricklerhandwerk): this is terrible but IMO we should just ditch flake-parts and have our own data model for how the project is organised internally
_module.args = {
inherit
inputs
keys
secrets
;
};
nixos.module.imports = [
./common/proxmox-qemu-vm.nix
_module.args = { inherit inputs; };
imports = [
./common/resource.nix
(if isTestVm then ./test-machines + "/${vmName}" else ./machines + "/${vmName}")
];
nixos.specialArgs = {
inherit sources;
};
imports =
[
./common/resource.nix
]
++ (
if isTestVm then
[
../machines/operator/${vmName}
{
nixos.module.users.users.root.openssh.authorizedKeys.keys = [
# allow our panel vm access to the test machines
keys.panel
];
}
]
else
[
../machines/dev/${vmName}
]
);
fediversityVm.name = vmName;
};
@@ -69,33 +35,17 @@ let
vmNames:
{ providers, ... }:
{
# XXX: this type merge is for adding `specialArgs` to resource modules
options.resources = mkOption {
type =
with lib.types;
lazyAttrsOf (submoduleWith {
class = "nixops4Resource";
modules = [ ];
# TODO(@fricklerhandwerk): we may want to pass through all of `specialArgs`
# once we're sure it's sane. leaving it here for better control during refactoring.
specialArgs = {
inherit sources;
};
});
};
config = {
providers.local = inputs.nixops4.modules.nixops4Provider.local;
resources = genAttrs vmNames (vmName: {
type = providers.local.exec;
imports = [
inputs.nixops4-nixos.modules.nixops4Resource.nixos
(makeResourceModule {
inherit vmName;
isTestVm = false;
})
];
});
};
providers.local = inputs.nixops4.modules.nixops4Provider.local;
resources = genAttrs vmNames (vmName: {
type = providers.local.exec;
imports = [
inputs.nixops4-nixos.modules.nixops4Resource.nixos
(makeResourceModule {
inherit vmName;
isTestVm = false;
})
];
});
};
makeDeployment' = vmName: makeDeployment [ vmName ];
@@ -107,7 +57,7 @@ let
{
inherit lib;
inherit (inputs) nixops4 nixops4-nixos;
fediversity = import ../services/fediversity;
inherit (self.nixosModules) fediversity;
}
{
garageConfigurationResource = makeResourceModule {
@ -115,11 +65,11 @@ let
isTestVm = true;
};
mastodonConfigurationResource = makeResourceModule {
vmName = "test06"; # somehow `test02` has a problem - use test06 instead
vmName = "test02";
isTestVm = true;
};
peertubeConfigurationResource = makeResourceModule {
vmName = "test05";
vmName = "test03";
isTestVm = true;
};
pixelfedConfigurationResource = makeResourceModule {
@@ -130,7 +80,7 @@
nixops4ResourceNixosMockOptions = {
## NOTE: We allow the use of a few options from
## `nixops4-nixos.modules.nixops4Resource.nixos` such that we can
## `inputs.nixops4-nixos.modules.nixops4Resource.nixos` such that we can
## reuse modules that make use of them.
##
## REVIEW: We can probably do much better and cleaner. On the other hand,
@ -155,10 +105,7 @@ let
## Given a VM name, make a NixOS configuration for this machine.
makeConfiguration =
isTestVm: vmName:
let
inherit (sources) nixpkgs;
in
import "${nixpkgs}/nixos" {
inputs.nixpkgs.lib.nixosSystem {
modules = [
(makeResourceConfig { inherit vmName isTestVm; }).nixos.module
];
@ -182,12 +129,12 @@ let
listSubdirectories = path: attrNames (filterAttrs (_: type: type == "directory") (readDir path));
machines = listSubdirectories ../machines/dev;
testMachines = listSubdirectories ../machines/operator;
machines = listSubdirectories ./machines;
testMachines = listSubdirectories ./test-machines;
in
{
_class = "flake";
flake.lib.makeInstallerIso = import ./makeInstallerIso.nix;
## - Each normal or test machine gets a NixOS configuration.
## - Each normal or test machine gets a VM options entry.
@ -196,17 +143,7 @@ in
## - We add a “test” deployment with all test machines.
nixops4Deployments = genAttrs machines makeDeployment' // {
default = makeDeployment machines;
test = makeTestDeployment (
fromJSON (
let
env = builtins.getEnv "DEPLOYMENT";
in
if env != "" then
env
else
builtins.trace "env var DEPLOYMENT not set, falling back to ../deployment/configuration.sample.json!" (readFile ../deployment/configuration.sample.json)
)
);
test = makeTestDeployment (fromJSON (readFile ./test-machines/configuration.json));
};
flake.nixosConfigurations =
genAttrs machines (makeConfiguration false)


@ -7,10 +7,9 @@ Currently, this repository keeps track of the following VMs:
Machine | Proxmox | Description
--------|---------|-------------
[`fedi200`](./dev/fedi200) | fediversity | Testing machine for Hans
[`fedi201`](./dev/fedi201) | fediversity | FediPanel
[`vm02116`](./dev/vm02116) | procolix | Forgejo
[`vm02187`](./dev/vm02187) | procolix | Wiki
| `forgejo-ci` | n/a (physical) | Forgejo actions runner |
[`fedi200`](./fedi200) | fediversity | Testing machine for Hans
[`fedi201`](./fedi201) | fediversity | FediPanel
[`vm02116`](./vm02116) | procolix | Forgejo
[`vm02187`](./vm02187) | procolix | Wiki
This table excludes all machines with names starting with `test`.


@ -20,7 +20,7 @@ vmOptions=$(
cd ..
nix eval \
--impure --raw --expr "
builtins.toJSON (builtins.getFlake (builtins.toString ../.)).vmOptions
builtins.toJSON (builtins.getFlake (builtins.toString ./.)).vmOptions
" \
--log-format raw --quiet
)
@ -32,12 +32,11 @@ for machine in $(echo "$vmOptions" | jq -r 'keys[]'); do
description=$(echo "$vmOptions" | jq -r ".$machine.description" | head -n 1)
# shellcheck disable=SC2016
printf '[`%s`](./dev/%s) | %s | %s\n' "$machine" "$machine" "$proxmox" "$description"
printf '[`%s`](./%s) | %s | %s\n' "$machine" "$machine" "$proxmox" "$description"
fi
done
cat <<\EOF
| `forgejo-ci` | n/a (physical) | Forgejo actions runner |
This table excludes all machines with names starting with `test`.
EOF


@ -1,6 +1,4 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 200;
proxmox = "fediversity";
@ -16,10 +14,4 @@
gateway = "2a00:51c0:13:1305::1";
};
};
nixos.module = {
imports = [
../../../infra/common/proxmox-qemu-vm.nix
];
};
}


@ -1,6 +1,4 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 201;
proxmox = "fediversity";
@ -19,7 +17,6 @@
nixos.module = {
imports = [
../../../infra/common/proxmox-qemu-vm.nix
./fedipanel.nix
];
};


@ -0,0 +1,39 @@
{
config,
...
}:
let
name = "panel";
panel = (import ../../../panel/default.nix { }).package;
in
{
imports = [
../../../panel/nix/configuration.nix
];
environment.systemPackages = [
panel
];
security.acme = {
acceptTerms = true;
defaults.email = "beheer@procolix.com";
};
services.${name} = {
enable = true;
package = panel;
production = true;
domain = "demo.fediversity.eu";
host = "0.0.0.0";
secrets = {
SECRET_KEY = config.age.secrets.panel-secret-key.path;
};
port = 8000;
settings = {
DATABASE_URL = "sqlite:///var/lib/${name}/db.sqlite3";
CREDENTIALS_DIRECTORY = "/var/lib/${name}/.credentials";
STATIC_ROOT = "/var/lib/${name}/static";
};
};
}


@ -1,6 +1,4 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 2116;
proxmox = "procolix";
@ -14,7 +12,6 @@
{ lib, ... }:
{
imports = [
../../../infra/common/proxmox-qemu-vm.nix
./forgejo.nix
];


@ -5,8 +5,6 @@ let
in
{
_class = "nixos";
services.forgejo = {
enable = true;


@ -1,6 +1,4 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 2187;
proxmox = "procolix";
@ -14,7 +12,6 @@
{ lib, ... }:
{
imports = [
../../../infra/common/proxmox-qemu-vm.nix
./wiki.nix
];


@ -1,8 +1,6 @@
{ config, ... }:
{
_class = "nixos";
services.phpfpm.pools.mediawiki.phpOptions = ''
upload_max_filesize = 1024M;
post_max_size = 1024M;


@ -15,6 +15,7 @@ let
installer =
{
config,
pkgs,
lib,
...


@ -229,7 +229,7 @@ build_iso () {
nix build \
--impure --expr "
let flake = builtins.getFlake (builtins.toString ./.); in
import ./makeInstallerIso.nix {
flake.lib.makeInstallerIso {
nixosConfiguration = flake.nixosConfigurations.$vm_name;
nixpkgs = flake.inputs.nixpkgs;
$nix_host_keys


@ -1,5 +1,5 @@
{
"domain": "fediversity.net",
"domain": "abundos.eu",
"mastodon": { "enable": false },
"peertube": { "enable": false },
"pixelfed": { "enable": false },


@ -1,6 +1,4 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7001;
proxmox = "fediversity";


@ -1,6 +1,4 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7002;
proxmox = "fediversity";


@ -1,6 +1,4 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7003;
proxmox = "fediversity";


@ -1,6 +1,4 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7004;
proxmox = "fediversity";


@ -1,6 +1,4 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7005;
proxmox = "fediversity";


@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMlsYTtMx3hFO8B5B8iHaXL2JKj9izHeC+/AMhIWXBPs cd-age


@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKVQ7yYXr4ZguGWYHZ7v2L3kPmYjaFo46PTgAEviW5D8 lois@mouse


@ -1 +1 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICJj4L5Yt9ABdQeEkJI6VuJEyUSVbCHMxYLdvVcB/pXh niols@wallace/fediversity
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEElREJN0AC7lbp+5X204pQ5r030IbgCllsIxyU3iiKY niols@wallace


@ -34,6 +34,4 @@ in
{
contributors = collectKeys ./contributors;
systems = collectKeys ./systems;
panel = removeTrailingWhitespace (readFile ./panel-ssh-key.pub);
cd = removeTrailingWhitespace (readFile ./cd-ssh-key.pub);
}


@ -1,5 +0,0 @@
{
_class = "flake";
_module.args.keys = import ./.;
}


@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWfTPs64ImqGh3c/Y+3zqB9YVr5ApsKiS/aTLGXUTzb panel@fedi201


@ -1 +1 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBpnV6zzgdJN5pjw2oWryneE6kZ5rQ343Ut4ed12Cm9 root@fedi201
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILhSlUo7L/TjoAILfLv/BDxlBT+rGudh9VoK50Uiu2lZ root@fedi201


@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFXQW5fxJoNY9wtTMsNExgbAbvyljIRGBLjY+USh/0A


@ -1,4 +0,0 @@
# Machines
This directory contains the definition of [the VMs](machines.md) that host our
infrastructure.


@ -1,49 +0,0 @@
{
config,
sources,
...
}:
let
name = "panel";
in
{
_class = "nixos";
imports = [
(import ../../../panel { }).module
"${sources.home-manager}/nixos"
];
security.acme = {
acceptTerms = true;
defaults.email = "beheer@procolix.com";
};
age.secrets.panel-ssh-key = {
owner = name;
mode = "400";
};
programs.ssh.startAgent = true;
home-manager = {
users.${name}.home = {
stateVersion = "25.05";
file.".ssh/config" = {
text = ''
IdentityFile ${config.age.secrets.panel-ssh-key.path}
'';
};
};
};
services.${name} = {
enable = true;
production = true;
domain = "demo.fediversity.eu";
secrets = {
SECRET_KEY = config.age.secrets.panel-secret-key.path;
};
port = 8000;
};
}


@ -1,70 +0,0 @@
{ lib, ... }:
let
inherit (lib) mkDefault mkForce;
in
{
_class = "nixops4Resource";
# NOTE: This needs an SSH config entry `forgejo-ci` to locate and access the
# machine. This is because different people access the machine in different
# way (eg. via a proxy vs. via Procolix's VPN). This might look like:
#
# Host forgejo-ci
# HostName 45.142.234.216
# HostKeyAlias forgejo-ci
#
# The `HostKeyAlias` statement is crucial. Without it, deployment will fail
# with the SSH error “Host key verification failed”.
ssh.host = mkForce "forgejo-ci";
fediversityVm = {
domain = "procolix.com";
ipv4 = {
interface = "enp1s0f0";
address = "192.168.201.65";
prefixLength = 24;
gateway = "192.168.201.1";
};
ipv6.enable = false;
};
nixos.module =
{ config, ... }:
{
_class = "nixos";
imports = [
./forgejo-actions-runner.nix
];
hardware.cpu.intel.updateMicrocode = mkDefault config.hardware.enableRedistributableFirmware;
networking = {
nftables.enable = mkForce false;
hostId = "1d6ea552";
};
## NOTE: This is a physical machine, so is not covered by disko
fileSystems."/" = lib.mkForce {
device = "rpool/root";
fsType = "zfs";
};
fileSystems."/home" = {
device = "rpool/home";
fsType = "zfs";
};
fileSystems."/boot" = lib.mkForce {
device = "/dev/disk/by-uuid/50B2-DD3F";
fsType = "vfat";
options = [
"fmask=0077"
"dmask=0077"
];
};
};
}


@ -1,47 +0,0 @@
{ pkgs, config, ... }:
{
_class = "nixos";
services.gitea-actions-runner = {
package = pkgs.forgejo-actions-runner;
instances.default = {
enable = true;
name = config.networking.fqdn;
url = "https://git.fediversity.eu";
tokenFile = config.age.secrets.forgejo-runner-token.path;
settings = {
log.level = "info";
runner = {
file = ".runner";
# Take only 1 job at a time to avoid clashing NixOS tests, see #362
capacity = 1;
timeout = "3h";
insecure = false;
fetch_timeout = "5s";
fetch_interval = "2s";
};
};
## This runner supports Docker (with a default Ubuntu image) and native
## modes. In native mode, it contains a few default packages.
labels = [
"docker:docker://node:16-bullseye"
"native:host"
];
hostPackages = with pkgs; [
bash
git
nix
nodejs
];
};
};
## For the Docker mode of the runner.
virtualisation.docker.enable = true;
}

View file

@ -1,21 +0,0 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7006;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.56";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::56";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,7 +0,0 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACC9tVetM5x9PPgc6quVSq8Hk6bJ0VzO9hB7n470pSlhTwAAAIh7l3X0e5d1
9AAAAAtzc2gtZWQyNTUxOQAAACC9tVetM5x9PPgc6quVSq8Hk6bJ0VzO9hB7n470pSlhTw
AAAEBu+g3q61og0mJ+mHv0oBaQXuJO4BOGcpR3p0UI8sdeyr21V60znH08+Bzqq5VKrweT
psnRXM72EHufjvSlKWFPAAAAAAECAwQF
-----END OPENSSH PRIVATE KEY-----


@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL21V60znH08+Bzqq5VKrweTpsnRXM72EHufjvSlKWFP


@ -1,21 +0,0 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7011;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.61";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::61";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,7 +0,0 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACCWc7GuMI3Gzkj+mSep6MVbKDccS52jVw+nBs27yFCGVgAAAIhCymnvQspp
7wAAAAtzc2gtZWQyNTUxOQAAACCWc7GuMI3Gzkj+mSep6MVbKDccS52jVw+nBs27yFCGVg
AAAEAvr1aiy0DIjgdLH9bBq9uD4pf8Wakgqr34oWDPB2/E75Zzsa4wjcbOSP6ZJ6noxVso
NxxLnaNXD6cGzbvIUIZWAAAAAAECAwQF
-----END OPENSSH PRIVATE KEY-----


@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJZzsa4wjcbOSP6ZJ6noxVsoNxxLnaNXD6cGzbvIUIZW


@ -1,21 +0,0 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7012;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.62";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::62";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,7 +0,0 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACBuvrzv3i07NFxONsNP7uJmefebrBhfo0pwzmC3NCAOZwAAAIiA+nIugPpy
LgAAAAtzc2gtZWQyNTUxOQAAACBuvrzv3i07NFxONsNP7uJmefebrBhfo0pwzmC3NCAOZw
AAAEDkpXNePQeHnf4vkDkhZI/ab9Ds2igfY0a5U1p4PrEmvm6+vO/eLTs0XE42w0/u4mZ5
95usGF+jSnDOYLc0IA5nAAAAAAECAwQF
-----END OPENSSH PRIVATE KEY-----


@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG6+vO/eLTs0XE42w0/u4mZ595usGF+jSnDOYLc0IA5n


@ -1,21 +0,0 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7013;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.63";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::63";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,7 +0,0 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACATzdyp4E+PX6lDfw2CmezguYn/lPgbpI+NUbmseEwAgwAAAIi2z3O2ts9z
tgAAAAtzc2gtZWQyNTUxOQAAACATzdyp4E+PX6lDfw2CmezguYn/lPgbpI+NUbmseEwAgw
AAAEDj2sn4VJhBL2a7j41mjdMWIdJ/u1betSxZ393lNd3+pBPN3KngT49fqUN/DYKZ7OC5
if+U+Bukj41Ruax4TACDAAAAAAECAwQF
-----END OPENSSH PRIVATE KEY-----


@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBPN3KngT49fqUN/DYKZ7OC5if+U+Bukj41Ruax4TACD


@ -1,21 +0,0 @@
{
_class = "nixops4Resource";
fediversityVm = {
vmId = 7014;
proxmox = "fediversity";
hostPublicKey = builtins.readFile ./ssh_host_ed25519_key.pub;
unsafeHostPrivateKey = builtins.readFile ./ssh_host_ed25519_key;
domain = "abundos.eu";
ipv4 = {
address = "95.215.187.64";
gateway = "95.215.187.1";
};
ipv6 = {
address = "2a00:51c0:13:1305::64";
gateway = "2a00:51c0:13:1305::1";
};
};
}


@ -1,7 +0,0 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACB028Q86t8RXi7617OrckxNPKNwnpGGZqhXhppHB5n9tQAAAIhfhYlCX4WJ
QgAAAAtzc2gtZWQyNTUxOQAAACB028Q86t8RXi7617OrckxNPKNwnpGGZqhXhppHB5n9tQ
AAAEAualLRodpovSzGAhza2OVvg5Yp8xv3A7xUNNbKsMTKSHTbxDzq3xFeLvrXs6tyTE08
o3CekYZmqFeGmkcHmf21AAAAAAECAwQF
-----END OPENSSH PRIVATE KEY-----


@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHTbxDzq3xFeLvrXs6tyTE08o3CekYZmqFeGmkcHmf21

matrix/.gitignore (vendored, new file)

@ -0,0 +1,34 @@
# First of all: NO PDF/PS IN GIT!
*.pdf
*.ps
# ---> LyX
# Ignore LyX backup and autosave files
# http://www.lyx.org/
*.lyx~
*.lyx#
# ---> Vim
# Swap
[._]*.s[a-v][a-z]
# comment out the next pattern if you don't need vector files
!*.svg
[._]*.sw[a-p]
[._]s[a-rt-v][a-z]
[._]ss[a-gi-z]
[._]sw[a-p]
# Session
Session.vim
Sessionx.vim
# Temporary
.netrwhist
*~
# Auto-generated tag files
tags
# Persistent undo
[._]*.un~
# And no vaults
/ansible/group_vars/matrix/vault.yaml

matrix/README.md (new file)

@ -0,0 +1,124 @@
---
gitea: none
include_toc: true
---
# A complete Matrix installation
This documentation describes how to build a complete Matrix environment with
all bells and whistles. Not just the Synapse server, but (almost) every bit
you want.
The main focus will be on the server itself, Synapse, but there's a lot more
than just that.
This documentation isn't ready yet. If you find errors or room for improvement,
please let me know. You can do that via Matrix, obviously (`@hans:woefdram.nl`), via
e-mail (`docs@fediversity.eu`), or by making a Pull Request.
## Overview
A complete Matrix environment consists of many parts. Other than the Matrix
server itself (Synapse) there are all kinds of other things that we need:
* [Synapse](https://element-hq.github.io/synapse/latest/)
* Webclient ([Element Web](https://github.com/element-hq/element-web))
* [Element Call](https://github.com/element-hq/element-call) for audio/video
conferencing
* Management with [Synapse-Admin](https://github.com/Awesome-Technologies/synapse-admin)
* Moderation with [Draupnir](https://github.com/the-draupnir-project/Draupnir)
* [Consent
tracking](https://element-hq.github.io/synapse/latest/consent_tracking.html)
* Authentication via
[OpenID](https://element-hq.github.io/synapse/latest/openid.html) (later)
* Several [bridges](https://matrix.org/ecosystem/bridges/) (later)
# Overview
This documentation aims to describe the installation of a complete Matrix
platform, with all bells and whistles. Several components are involved, and
finishing the installation of one is often necessary before the next can be
set up.
Before you start, make sure you take a look at the [checklist](checklist.md).
These are the components we're going to use:
## Synapse
This is the core component: the Matrix server itself. You should probably
install this first.
Because not every use case is the same, we'll describe two different
architectures:
### [Monolithic](synapse)

This is the default way of installing Synapse. It is suitable for scenarios
with not too many users, where, importantly, users do not join many very
crowded rooms.
### [Worker-based](synapse/workers)

For servers under a bigger load, for example those hosting users who join many
big rooms, we'll describe how to handle that higher load by distributing it
over workers.
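To give an idea of what a worker looks like, here is a minimal sketch of one
generic worker's configuration file, following the shape of Synapse's worker
documentation; the worker name and port are made-up examples:

```yaml
# Sketch of a single generic worker (example values only).
worker_app: synapse.app.generic_worker
worker_name: generic_worker1

worker_listeners:
  - type: http
    port: 8083          # example port; the reverse proxy routes requests here
    resources:
      - names: [client, federation]
```

Each worker runs as its own process, and the reverse proxy decides which
requests go to which worker.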
## PostgreSQL
This is the database Synapse uses. This should be the first thing you install
after Synapse, and once you're done, reconfigure the default Synapse install
to use PostgreSQL.
If you have already added stuff to the SQLite database that Synapse installs
by default that you don't want to lose: [here's how to migrate from SQLite to
PostgreSQL](https://element-hq.github.io/synapse/latest/postgres.html#porting-from-sqlite).
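Reconfiguring Synapse for PostgreSQL comes down to replacing the `database`
section of `homeserver.yaml`. A minimal sketch, where the user, password, and
database name are placeholders you should adjust:

```yaml
# Sketch of the database section in homeserver.yaml (placeholder credentials).
database:
  name: psycopg2          # use the PostgreSQL driver instead of sqlite3
  args:
    user: synapse_user
    password: change-me
    dbname: synapse
    host: localhost
    cp_min: 5             # connection pool bounds
    cp_max: 10
```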
## nginx
We need a webserver for several things, see how to [configure nginx](nginx)
here.
If you install this, make sure to check which certificates you need, fix the
DNS entries, and preferably keep the TTL for those entries very low until after
the installation, when you know everything is working.
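As a preview of what the nginx side looks like, here is a minimal sketch of a
reverse proxy for Synapse, in the spirit of Synapse's reverse-proxy
documentation; the server name is a placeholder and 8008 is Synapse's default
listener port:

```nginx
# Sketch: proxy Matrix traffic to a Synapse listening on localhost:8008.
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name matrix.example.com;  # placeholder domain

    location ~ ^(/_matrix|/_synapse/client) {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        # media uploads can be large
        client_max_body_size 50M;
    }
}
```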
## Element Call
Element Call is the new way to have audio and video conferences, both
one-on-one and with groups. This does not use Jitsi and keeps E2EE intact. See
how to [setup and configure it](element-call).
## Element Web
This is the fully-fledged web client, which is very [easy to set
up](element-web).
## TURN
We may need a TURN server, and we'll use
[coturn](coturn) for that.
It's apparently also possible to use the built-in TURN server in LiveKit,
which is what we'll use if we deploy [Element Call](element-call). It's
either/or, so make sure you pick the right approach.
You could possibly use both coturn and LiveKit, if you insist on being able to
use both legacy and Element Call functionality. This is not documented here
yet.
## Draupnir
Draupnir takes care of moderation. It requires a few changes to both Synapse
and nginx; here's how to [install and configure Draupnir](draupnir).
