Compare commits


20 commits

Author SHA1 Message Date
91a61b95f0
flesh out attic
TODO keys nginx-port testing
2025-06-19 11:28:35 +02:00
fc87501b65
WIP: add attic cache, see #92 2025-06-19 11:23:46 +02:00
d67f533948 fix running nixops4 apply test (#391)
Closes #390

Reviewed-on: Fediversity/Fediversity#391
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-committed-by: Valentin Gagarin <valentin.gagarin@tweag.io>
2025-06-19 08:26:20 +02:00
bd1cfd7a7c Introduce test for deploying all services via FediPanel (#361)
Closes #277

Same as #329 but where we run the FediPanel and interact with it via a browser
instead of running NixOps4 directly.

Reviewed-on: Fediversity/Fediversity#361
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Reviewed-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2025-06-18 12:37:47 +02:00
939f9d961d add data model entity: application (#387)
part of #103.

Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Reviewed-on: Fediversity/Fediversity#387
Co-authored-by: Kiara Grouwstra <kiara@procolix.eu>
Co-committed-by: Kiara Grouwstra <kiara@procolix.eu>
2025-06-17 17:11:52 +02:00
4801433ae0 Get rid of the need for deployer.pub (#385)
The tests still work because we manually write the deployer's public key in `/root/.ssh/authorized_keys` on the target machines. In itself, however, the configuration that we push does not allow the deployer to push anything on the target machines.

Context: Fediversity/Fediversity#361 (comment)
Reviewed-on: Fediversity/Fediversity#385
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2025-06-17 16:34:29 +02:00
3a3a083793 FediPanel: allow configuring flake and deployment (#376)
Last part of #361.

Builds on top of #375.

Reviewed-on: Fediversity/Fediversity#376
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2025-06-15 16:55:19 +02:00
ace56e754e FediPanel: do not call nix develop (#375)
Yet another piece of #361.

Reviewed-on: Fediversity/Fediversity#375
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2025-06-15 15:06:23 +02:00
dbb4ce67fc move machines to reflect a semantic structure (#367)
later we may want to distinguish dev vs host as well, though eventually we expect not to have hard-coded machines anyway.

split off from #319.

Reviewed-on: Fediversity/Fediversity#367
Co-authored-by: Kiara Grouwstra <kiara@procolix.eu>
Co-committed-by: Kiara Grouwstra <kiara@procolix.eu>
2025-06-15 15:01:56 +02:00
5a514b96e9 use deployed environment for launching nixops4 from the panel 2025-06-13 16:39:34 +02:00
1b832c1f5b bypass native flake input for Nixpkgs (#374)
@Niols the sheer amount of hassle and noise indicates that it may be better to first split out a `flake.nix` just for the tests. And all this clutter doesn't even explain yet *why* we thought it needs to be there.

closes #279.

Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Reviewed-on: Fediversity/Fediversity#374
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-committed-by: Valentin Gagarin <valentin.gagarin@tweag.io>
2025-06-12 13:05:11 +02:00
69b2e535fe Document nullable fields sanitation (#365)
Reviewed-on: Fediversity/Fediversity#365
Reviewed-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2025-06-10 11:57:01 +02:00
09119803e8 Deployment: handle nullable config fields
This is quite frustrating. In the meantime, it does get the deployment
working again.
2025-06-06 11:50:48 +02:00
4dd1491e71 FediPanel: fix deployment status
also remove unused `dummy_user`
2025-06-06 11:02:40 +02:00
2f55e1512a FediPanel: bump nginx timeout to an hour 2025-06-06 10:57:19 +02:00
b59f8a4183 simplify login tests (#352)
don't go through template generation but use the underlying tag
implementation directly

Co-authored-by: Nicolas Jeannerod <nicolas.jeannerod@moduscreate.com>
Reviewed-on: Fediversity/Fediversity#352
2025-06-06 10:56:34 +02:00
56b953526b Deployment tests: Check status of services before deploying 2025-06-06 10:54:06 +02:00
1f8677e83d FediPanel: better logging of NixOps4 2025-06-06 10:53:22 +02:00
2fae356d0a Deployment tests: also make acmeNodeIP available in NixOS test 2025-06-06 10:52:49 +02:00
046f7c5998 Deployment tests: comment on Pebble's certificate 2025-06-06 10:52:18 +02:00
74 changed files with 1378 additions and 181 deletions

View file

@ -15,6 +15,12 @@ jobs:
- uses: actions/checkout@v4
- run: nix-build -A tests
check-data-model:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix-shell --run 'nix-unit ./deployment/data-model-test.nix'
check-peertube:
runs-on: native
steps:
@ -38,3 +44,9 @@ jobs:
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-cli -L - run: nix build .#checks.x86_64-linux.deployment-cli -L
check-deployment-panel:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-panel -L

View file

@ -154,6 +154,3 @@ details as to what they are for. As an overview:
- [`services/`](./services) contains our effort to make Fediverse applications
work seamlessly together in our specific setting.
- [`website/`](./website) contains the framework and the content of [the
Fediversity website](https://fediversity.eu/)

View file

@ -41,6 +41,23 @@ in
shell = pkgs.mkShellNoCC {
inherit (pre-commit-check) shellHook;
buildInputs = pre-commit-check.enabledPackages;
packages =
let
test-loop = pkgs.writeShellApplication {
name = "test-loop";
runtimeInputs = [
pkgs.watchexec
pkgs.nix-unit
];
text = ''
watchexec -w ${builtins.toString ./.} -- nix-unit ${builtins.toString ./deployment/data-model-test.nix} "$@"
'';
};
in
[
pkgs.nix-unit
test-loop
];
};
tests = {

View file

@ -3,6 +3,13 @@
This directory contains work to generate a full Fediversity deployment from a minimal configuration.
This is different from [`../services/`](../services) that focuses on one machine, providing a polished and unified interface to different Fediverse services.
## Data model
The core piece of the project is the [Fediversity data model](./data-model.nix), which describes all entities and their interactions.
What can be done with it is exemplified in the [evaluation tests](./data-model-test.nix).
Run `test-loop` in the development environment when hacking on the data model or adding tests.
## Checks
There are three levels of deployment checks: `basic`, `cli`, `panel`.
@ -109,8 +116,8 @@ flowchart LR
target_machines -->|get certs| acme
```
### Service deployment check from the FediPanel
This is a full deployment check that runs the [FediPanel](../panel) on the deployer machine, deploys some services through it, checks that they are indeed on the target machines, then cleans them up and checks that that works, too.
It builds upon the basic and CLI deployment checks, the only difference being that `deployer` runs NixOps4 only indirectly via the panel, and the `client` node is the one that triggers the deployment via a browser, the way a human would.

View file

@ -10,6 +10,12 @@
inputs.nixops4.packages.${pkgs.system}.default
];
# FIXME: sad times
system.extraDependencies = with pkgs; [
jq
jq.inputDerivation
];
system.extraDependenciesFromModule =
{ pkgs, ... }:
{

View file

@ -1 +0,0 @@
## This is a placeholder file. It will be overwritten by the test.

View file

@ -79,10 +79,16 @@ in
## and check that they are working properly. ## and check that they are working properly.
extraTestScript = '' extraTestScript = ''
with subtest("Check the status of the services - there should be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with no services enabled"):
deployer.succeed("nixops4 apply check-deployment-cli-nothing --show-trace --no-interactive 1>&2")
with subtest("Check the status of the services - there should still be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")

View file

@ -14,6 +14,8 @@ let
types
;
sources = import ../../../npins;
in
{
imports = [ ./sharedOptions.nix ];
@ -57,6 +59,8 @@ in
"${inputs.nixops4-nixos}"
"${inputs.nixpkgs}"
"${sources.flake-inputs}"
pkgs.stdenv
pkgs.stdenvNoCC
]

View file

@ -53,6 +53,7 @@ in
};
config = {
acmeNodeIP = config.nodes.acme.networking.primaryIPAddress;
nodes =
{
@ -118,7 +119,6 @@ in
with subtest("Configure the deployer key"):
deployer.succeed("""mkdir -p ~/.ssh && ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa""")
deployer_key = deployer.succeed("cat ~/.ssh/id_rsa.pub").strip()
deployer.succeed(f"echo '{deployer_key}' > ${config.pathFromRoot}/deployer.pub")
${forConcat config.targetMachines (tm: ''
${tm}.succeed(f"mkdir -p /root/.ssh && echo '{deployer_key}' >> /root/.ssh/authorized_keys")
'')}

View file

@ -50,13 +50,16 @@ in
};
security.pki.certificateFiles = [
## NOTE: This certificate is the one used by the Pebble HTTPS server.
## This is NOT the root CA of the Pebble server. We do add it here so
## that Pebble clients can talk to its API, but this will not allow
## those machines to verify generated certificates.
testCerts.ca.cert
];
## FIXME: it is a bit sad that all this logistics is necessary. look into
## better DNS stuff
networking.extraHosts = "${config.acmeNodeIP} acme.test";
})
];
}

View file

@ -0,0 +1,91 @@
{
self,
inputs,
lib,
...
}:
let
inherit (builtins)
fromJSON
listToAttrs
;
targetMachines = [
"garage"
"mastodon"
"peertube"
"pixelfed"
];
pathToRoot = /. + (builtins.unsafeDiscardStringContext self);
pathFromRoot = ./.;
enableAcme = true;
in
{
perSystem =
{ pkgs, ... }:
{
checks.deployment-panel = pkgs.testers.runNixOSTest {
imports = [
../common/nixosTest.nix
./nixosTest.nix
];
_module.args.inputs = inputs;
inherit
targetMachines
pathToRoot
pathFromRoot
enableAcme
;
};
};
nixops4Deployments =
let
makeTargetResource = nodeName: {
imports = [ ../common/targetResource.nix ];
_module.args.inputs = inputs;
inherit
nodeName
pathToRoot
pathFromRoot
enableAcme
;
};
## The deployment function - what we are here to test!
##
## TODO: Modularise `deployment/default.nix` to get rid of the nested
## function calls.
makeTestDeployment =
args:
(import ../..)
{
inherit lib;
inherit (inputs) nixops4 nixops4-nixos;
fediversity = import ../../../services/fediversity;
}
(listToAttrs (
map (nodeName: {
name = "${nodeName}ConfigurationResource";
value = makeTargetResource nodeName;
}) targetMachines
))
args;
in
{
check-deployment-panel = makeTestDeployment (
fromJSON (
let
env = builtins.getEnv "DEPLOYMENT";
in
if env == "" then
throw "The DEPLOYMENT environment variable needs to be set. You do not want to use this deployment unless in the `deployment-panel` NixOS test."
else
env
)
);
};
}
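The `check-deployment-panel` deployment above refuses to evaluate unless a `DEPLOYMENT` environment variable carries the panel's JSON configuration. A minimal Python sketch of the same guard-then-parse pattern (hypothetical helper name `load_deployment_from_env`, for illustration only):

```python
import json
import os

def load_deployment_from_env():
    # Mirror of the Nix guard: refuse to run unless DEPLOYMENT is set,
    # then parse its contents as JSON.
    env = os.environ.get("DEPLOYMENT", "")
    if env == "":
        raise RuntimeError("The DEPLOYMENT environment variable needs to be set.")
    return json.loads(env)

os.environ["DEPLOYMENT"] = '{"mastodon": {"enable": true}}'
config = load_deployment_from_env()
print(config["mastodon"]["enable"])  # → True
```

Failing loudly on a missing variable, as the Nix code does with `throw`, makes accidental use outside the NixOS test an immediate error rather than a silent misconfiguration.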

View file

@ -0,0 +1,362 @@
{
inputs,
lib,
hostPkgs,
config,
...
}:
let
inherit (lib)
getExe
;
## Some places need a dummy file that will in fact never be used. We create
## it here.
dummyFile = hostPkgs.writeText "dummy" "dummy";
panelPort = 8000;
panelUser = "test";
panelEmail = "test@test.com";
panelPassword = "ouiprdaaa43"; # panel's manager complains if too close to username or email
fediUser = "test";
fediEmail = "test@test.com";
fediPassword = "testtest";
fediName = "Testy McTestface";
toPythonBool = b: if b then "True" else "False";
interactWithPanel =
{
baseUri,
enableMastodon,
enablePeertube,
enablePixelfed,
}:
hostPkgs.writers.writePython3Bin "interact-with-panel"
{
libraries = with hostPkgs.python3Packages; [ selenium ];
flakeIgnore = [
"E302" # expected 2 blank lines, found 0
"E303" # too many blank lines
"E305" # expected 2 blank lines after end of function or class
"E501" # line too long (> 79 characters)
"E731" # do not assign lambda expression, use a def
];
}
''
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
print("Create and configure driver...")
options = Options()
options.add_argument("--headless")
options.binary_location = "${getExe hostPkgs.firefox-unwrapped}"
service = webdriver.FirefoxService(executable_path="${getExe hostPkgs.geckodriver}")
driver = webdriver.Firefox(options=options, service=service)
driver.set_window_size(1280, 960)
driver.implicitly_wait(360)
driver.command_executor.set_timeout(3600)
print("Open login page...")
driver.get("${baseUri}/login/")
print("Enter username...")
driver.find_element(By.XPATH, "//input[@name = 'username']").send_keys("${panelUser}")
print("Enter password...")
driver.find_element(By.XPATH, "//input[@name = 'password']").send_keys("${panelPassword}")
print("Click Login button...")
driver.find_element(By.XPATH, "//button[normalize-space() = 'Login']").click()
print("Open configuration page...")
driver.get("${baseUri}/configuration/")
# Helpers to actually set and not add or switch input values.
def input_set(elt, keys):
elt.clear()
elt.send_keys(keys)
def checkbox_set(elt, new_value):
if new_value != elt.is_selected():
elt.click()
print("Enable Fediversity...")
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'enable']"), True)
print("Fill in initialUser info...")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.username']"), "${fediUser}")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.password']"), "${fediPassword}")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.email']"), "${fediEmail}")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.displayName']"), "${fediName}")
print("Enable services...")
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'mastodon.enable']"), ${toPythonBool enableMastodon})
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'peertube.enable']"), ${toPythonBool enablePeertube})
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'pixelfed.enable']"), ${toPythonBool enablePixelfed})
print("Start deployment...")
driver.find_element(By.XPATH, "//button[@id = 'deploy-button']").click()
print("Wait for deployment status to show up...")
get_deployment_result = lambda d: d.find_element(By.XPATH, "//div[@id = 'deployment-result']//p")
WebDriverWait(driver, timeout=3660, poll_frequency=10).until(get_deployment_result)
deployment_result = get_deployment_result(driver).get_attribute('innerHTML')
print("Quit...")
driver.quit()
match deployment_result:
case 'Deployment Succeeded':
print("Deployment has succeeded; exiting normally")
exit(0)
case 'Deployment Failed':
print("Deployment has failed; exiting with return code `1`")
exit(1)
case _:
print(f"Unexpected deployment result: {deployment_result}; exiting with return code `2`")
exit(2)
'';
in
{
name = "deployment-panel";
## The panel's module sets `nixpkgs.overlays` which clashes with
## `pkgsReadOnly`. We disable it here.
node.pkgsReadOnly = false;
nodes.deployer =
{ pkgs, ... }:
{
imports = [
(import ../../../panel { }).module
];
## FIXME: This should be in the common stuff.
security.acme = {
acceptTerms = true;
defaults.email = "test@test.com";
defaults.server = "https://acme.test/dir";
};
security.pki.certificateFiles = [
(import "${inputs.nixpkgs}/nixos/tests/common/acme/server/snakeoil-certs.nix").ca.cert
];
networking.extraHosts = "${config.acmeNodeIP} acme.test";
services.panel = {
enable = true;
production = true;
domain = "deployer";
secrets = {
SECRET_KEY = dummyFile;
};
port = panelPort;
nixops4Package = inputs.nixops4.packages.${pkgs.system}.default;
deployment = {
flake = "/run/fedipanel/flake";
name = "check-deployment-panel";
};
};
environment.systemPackages = [ pkgs.expect ];
## FIXME: The following dependencies are necessary but I do not
## understand why they are not covered by the fake node.
system.extraDependencies = with pkgs; [
peertube
peertube.inputDerivation
gixy # a configuration checker for nginx
gixy.inputDerivation
];
system.extraDependenciesFromModule = {
imports = [ ../../../services/fediversity ];
fediversity = {
domain = "fediversity.net"; # would write `dummy` but that would not type
garage.enable = true;
mastodon = {
enable = true;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
peertube = {
enable = true;
secretsFile = dummyFile;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
pixelfed = {
enable = true;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
temp.cores = 1;
temp.initialUser = {
username = "dummy";
displayName = "dummy";
email = "dummy";
passwordFile = dummyFile;
};
};
};
};
nodes.client =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
httpie
dnsutils # for `dig`
openssl
cacert
wget
python3
python3Packages.selenium
firefox-unwrapped
geckodriver
];
security.pki.certificateFiles = [
config.nodes.acme.test-support.acme.caCert
];
networking.extraHosts = "${config.acmeNodeIP} acme.test";
};
## NOTE: The target machines may need more RAM than the default to handle
## being deployed to, otherwise we get something like:
##
## pixelfed # [ 616.785499 ] sshd-session[1167]: Conection closed by 2001:db8:1::2 port 45004
## deployer # error: writing to file: No space left on device
## pixelfed # [ 616.788538 ] sshd-session[1151]: pam_unix(sshd:session): session closed for user port
## pixelfed # [ 616.793929 ] systemd-logind[719]: Session 4 logged out. Waiting for processes to exit.
## deployer # Error: Could not create resource
##
## These values have been trimmed down to the gigabyte.
nodes.mastodon.virtualisation.memorySize = 4 * 1024;
nodes.pixelfed.virtualisation.memorySize = 4 * 1024;
nodes.peertube.virtualisation.memorySize = 5 * 1024;
## FIXME: The test of presence of the services are very simple: we only
## check that there is a systemd service of the expected name on the
## machine. This proves at least that NixOps4 did something, and we cannot
## really do more for now because the services aren't actually working
## properly, in particular because of DNS issues. We should fix the services
## and check that they are working properly.
extraTestScript = ''
## TODO: We want a nicer way to control where the FediPanel consumes its
## flake, which can default to the store but could also be somewhere else if
## someone wanted to change the code of the flake.
##
with subtest("Give the panel access to the flake"):
deployer.succeed("mkdir /run/fedipanel /run/fedipanel/flake >&2")
deployer.succeed("cp -R . /run/fedipanel/flake >&2")
deployer.succeed("chown -R panel:panel /run/fedipanel >&2")
## TODO: I want a programmatic way to provide an SSH key to the panel (and
## therefore NixOps4). This should happen either in the Python code, but
## maybe it is fair that that one picks up on the user's key? It could
## also be in the Nix packaging.
##
with subtest("Set up the panel's SSH keys"):
deployer.succeed("mkdir /home/panel/.ssh >&2")
deployer.succeed("cp -R /root/.ssh/* /home/panel/.ssh >&2")
deployer.succeed("chown -R panel:panel /home/panel/.ssh >&2")
deployer.succeed("chmod 600 /home/panel/.ssh/* >&2")
## TODO: This is a hack to accept the root CA used by Pebble on the client
## machine. Pebble randomizes everything, so the only way to get it is to
## call the /roots/0 endpoint at runtime, leaving not much margin for a nice
## Nixy way of adding the certificate. There is no way around it as this is
## by design in Pebble, showing in fact that Pebble was not the appropriate
## tool for our use and that nixpkgs does not in fact provide an easy way to
## generate _usable_ certificates in NixOS tests. I suggest we merge this,
## and track the task to set it up in a cleaner way. I would tackle this in
## a subsequent PR, and hopefully even contribute this BetterWay(tm) to
## nixpkgs. — Niols
##
with subtest("Set up ACME root CA on client"):
client.succeed("""
cd /etc/ssl/certs
curl -o pebble-root-ca.pem https://acme.test:15000/roots/0
curl -o pebble-intermediate-ca.pem https://acme.test:15000/intermediates/0
{ cat ca-bundle.crt
cat pebble-root-ca.pem
cat pebble-intermediate-ca.pem
} > new-ca-bundle.crt
rm ca-bundle.crt ca-certificates.crt
mv new-ca-bundle.crt ca-bundle.crt
ln -s ca-bundle.crt ca-certificates.crt
""")
## TODO: I would hope for a more declarative way to add users. This should
## be handled by the Nix packaging of the FediPanel. — Niols
##
with subtest("Create panel user"):
deployer.succeed("""
expect -c '
spawn manage createsuperuser --username ${panelUser} --email ${panelEmail}
expect "Password: "; send "${panelPassword}\\n";
expect "Password (again): "; send "${panelPassword}\\n"
interact
' >&2
""")
with subtest("Check the status of the services - there should be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with no services enabled"):
client.succeed("${
interactWithPanel {
baseUri = "https://deployer";
enableMastodon = false;
enablePeertube = false;
enablePixelfed = false;
}
}/bin/interact-with-panel >&2")
with subtest("Check the status of the services - there should still be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with Mastodon and Pixelfed enabled"):
client.succeed("${
interactWithPanel {
baseUri = "https://deployer";
enableMastodon = true;
enablePeertube = false;
enablePixelfed = true;
}
}/bin/interact-with-panel >&2")
with subtest("Check the status of the services - expecting Garage, Mastodon and Pixelfed"):
garage.succeed("systemctl status garage.service")
mastodon.succeed("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.succeed("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with only Peertube enabled"):
client.succeed("${
interactWithPanel {
baseUri = "https://deployer";
enableMastodon = false;
enablePeertube = true;
enablePixelfed = false;
}
}/bin/interact-with-panel >&2")
with subtest("Check the status of the services - expecting Garage and Peertube"):
garage.succeed("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.succeed("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
'';
}
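The `checkbox_set` helper in the Selenium script above only clicks when the desired state differs from the current one, so repeated deployments through the panel set checkboxes idempotently. Its logic can be exercised in isolation with a stub element (`FakeCheckbox` is a hypothetical stand-in for a Selenium `WebElement`, not part of the test suite):

```python
def checkbox_set(elt, new_value):
    # Click only if the checkbox is not already in the desired state
    # (same logic as the helper in the panel test script).
    if new_value != elt.is_selected():
        elt.click()

class FakeCheckbox:
    # Minimal stand-in for a Selenium WebElement checkbox.
    def __init__(self, selected=False):
        self.selected = selected
        self.clicks = 0

    def is_selected(self):
        return self.selected

    def click(self):
        self.selected = not self.selected
        self.clicks += 1

box = FakeCheckbox(selected=False)
checkbox_set(box, True)   # flips it on
checkbox_set(box, True)   # already on: no click
print(box.selected, box.clicks)  # → True 1
```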

View file

@ -0,0 +1,45 @@
let
inherit (import ../default.nix { }) pkgs;
inherit (pkgs) lib;
eval =
module:
(lib.evalModules {
modules = [
module
./data-model.nix
];
}).config;
in
{
test-eval = {
expr =
let
example = eval {
runtime-environments.bar.nixos = {
module =
{ ... }:
{
system.stateVersion = "25.05";
};
};
applications.foo = {
module =
{ pkgs, ... }:
{
environment.systemPackages = [
pkgs.hello
];
};
};
};
in
{
has-runtime = lib.isAttrs example.runtime-environments.bar.nixos.module;
has-application = lib.isAttrs example.applications.foo.module;
};
expected = {
has-runtime = true;
has-application = true;
};
};
}

deployment/data-model.nix Normal file
View file

@ -0,0 +1,43 @@
{
lib,
...
}:
let
inherit (lib) types mkOption;
in
with types;
{
options = {
runtime-environments = mkOption {
description = "Collection of runtime environments into which applications can be deployed";
type = attrsOf (attrTag {
nixos = mkOption {
description = "A single NixOS machine";
type = submodule {
options = {
module = mkOption {
description = "The NixOS module describing the base configuration for that machine";
type = deferredModule;
};
};
};
};
});
};
applications = mkOption {
description = "Collection of Fediversity applications";
type = attrsOf (submoduleWith {
modules = [
{
options = {
module = mkOption {
description = "The NixOS module for that application, for configuring that application";
type = deferredModule;
};
};
}
];
});
};
};
}

View file

@ -33,11 +33,29 @@
## information coming from the FediPanel.
##
## FIXME: lock step the interface with the definitions in the FediPanel
panelConfigNullable:
let
inherit (lib) mkIf;
## The convertor from module options to JSON schema does not generate proper
## JSON schema types, forcing us to use nullable fields for default values.
## However, working with those fields in the deployment code is annoying (and
## unusual for Nix programmers), so we sanitize the input here and add back
## the default value by hand.
nonNull = x: v: if x == null then v else x;
panelConfig = {
domain = nonNull panelConfigNullable.domain "fediversity.net";
initialUser = nonNull panelConfigNullable.initialUser {
displayName = "Testy McTestface";
username = "test";
password = "testtest";
email = "test@test.com";
};
mastodon = nonNull panelConfigNullable.mastodon { enable = false; };
peertube = nonNull panelConfigNullable.peertube { enable = false; };
pixelfed = nonNull panelConfigNullable.pixelfed { enable = false; };
};
in
## Regular arguments of a NixOps4 deployment module.
@ -122,7 +140,7 @@ in
{ pkgs, ... }:
mkIf (cfg.mastodon.enable || cfg.peertube.enable || cfg.pixelfed.enable) {
fediversity = {
inherit (cfg) domain;
garage.enable = true;
pixelfed = pixelfedS3KeyConfig { inherit pkgs; };
mastodon = mastodonS3KeyConfig { inherit pkgs; };
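The `nonNull` sanitization above substitutes a hand-written default whenever the JSON-schema layer produced a nullable field. The same pattern in a Python sketch (hypothetical names, mirroring the defaults used in the Nix code):

```python
def non_null(x, default):
    # Same shape as the Nix helper `nonNull`: substitute a default
    # when the JSON-schema conversion produced a null.
    return default if x is None else x

panel_config_nullable = {"domain": None, "mastodon": {"enable": True}}
panel_config = {
    "domain": non_null(panel_config_nullable["domain"], "fediversity.net"),
    "mastodon": non_null(panel_config_nullable["mastodon"], {"enable": False}),
}
print(panel_config["domain"])  # → fediversity.net
```

Sanitizing once at the boundary keeps the rest of the deployment code free of null checks, which is the stated motivation in the comment above.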

View file

@ -2,5 +2,6 @@
imports = [
./check/basic/flake-part.nix
./check/cli/flake-part.nix
./check/panel/flake-part.nix
];
}

flake.lock generated
View file

@ -596,22 +596,6 @@
"type": "github"
}
},
"nixpkgs_4": {
"locked": {
"lastModified": 1740463929,
"narHash": "sha256-4Xhu/3aUdCKeLfdteEHMegx5ooKQvwPHNkOgNCXQrvc=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "5d7db4668d7a0c6cc5fc8cf6ef33b008b2b1ed8b",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-24.11",
"repo": "nixpkgs",
"type": "github"
}
},
"parts": {
"inputs": {
"nixpkgs-lib": [
@ -686,8 +670,7 @@
"nixops4-nixos",
"nixops4"
],
"nixops4-nixos": "nixops4-nixos"
"nixpkgs": "nixpkgs_4"
}
},
"rust-overlay": { "rust-overlay": {

View file

@ -1,6 +1,5 @@
{ {
inputs = { inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-24.11"; # consumed by flake-parts
flake-parts.url = "github:hercules-ci/flake-parts";
git-hooks.url = "github:cachix/git-hooks.nix";
nixops4.follows = "nixops4-nixos/nixops4";
@ -8,12 +7,34 @@
};
outputs =
inputs@{ self, flake-parts, ... }:
let
sources = import ./npins;
inherit (import sources.flake-inputs) import-flake;
inherit (sources) git-hooks agenix;
# XXX(@fricklerhandwerk): this atrocity is required to splice in a foreign Nixpkgs via flake-parts
# XXX - this is just importing a flake
nixpkgs = import-flake { src = sources.nixpkgs; };
# XXX - this overrides the inputs attached to `self`
inputs' = self.inputs // {
nixpkgs = nixpkgs;
};
self' = self // {
inputs = inputs';
};
in
# XXX - finally we override the overall set of `inputs` -- we need both:
# flake-parts obtains `nixpkgs` from `self.inputs` and not from `inputs`.
flake-parts.lib.mkFlake
{
inputs = inputs // {
inherit nixpkgs;
};
self = self';
}
(
{ inputs, ... }:
{
systems = [
"x86_64-linux"
"aarch64-linux"
@ -68,5 +89,6 @@
];
};
};
}
);
} }

View file

@ -14,7 +14,7 @@ everything will become much cleaner.
above 100. For instance, `fedi117`.
2. Add a basic configuration for the machine. These typically go in
`machines/dev/<name>/default.nix`. You can look at other `fediXXX` VMs to
find inspiration. You probably do not need a `nixos.module` option at this
point.
@ -48,7 +48,7 @@ everything will become much cleaner.
7. Regenerate the list of machines:
```
sh machines/machines.md.sh
```
Commit it with the machine's configuration, public key, etc.


@ -1,4 +1,5 @@
{
  inputs,
  lib,
  config,
  ...
@ -9,7 +10,7 @@ let
  inherit (lib.attrsets) concatMapAttrs optionalAttrs;
  inherit (lib.strings) removeSuffix;
  sources = import ../../npins;
  inherit (sources) agenix disko;

  secretsPrefix = ../../secrets;
  secrets = import (secretsPrefix + "/secrets.nix");
@ -26,7 +27,7 @@ in
    hostPublicKey = config.fediversityVm.hostPublicKey;
  };

  inherit (inputs) nixpkgs;

  ## The configuration of the machine. We strive to keep in this file only the
  ## options that really need to be injected from the resource. Everything else

@ -21,6 +21,9 @@ let
  makeResourceModule =
    { vmName, isTestVm }:
    {
      # TODO(@fricklerhandwerk): this is terrible but IMO we should just ditch flake-parts and have our own data model for how the project is organised internally
      _module.args = { inherit inputs; };
      imports =
        [
          ./common/resource.nix
@ -28,7 +31,7 @@
        ++ (
          if isTestVm then
            [
              ../machines/operator/${vmName}
              {
                nixos.module.users.users.root.openssh.authorizedKeys.keys = [
                  # allow our panel vm access to the test machines
@ -38,7 +41,7 @@
            ]
          else
            [
              ../machines/dev/${vmName}
            ]
        );
      fediversityVm.name = vmName;
@ -147,8 +150,8 @@ let
  listSubdirectories = path: attrNames (filterAttrs (_: type: type == "directory") (readDir path));

  machines = listSubdirectories ../machines/dev;
  testMachines = listSubdirectories ../machines/operator;
in
{

machines/README.md Normal file

@ -0,0 +1,4 @@
# Machines

This directory contains the definition of [the VMs](machines.md) that host our
infrastructure.


@ -0,0 +1,203 @@
{ lib, tf, ... }:
{
services.postgresql = {
enable = true;
authentication = lib.mkForce ''
local all all trust
'';
ensureDatabases = [
"atticd"
];
ensureUsers = [
{
name = "atticd";
ensureDBOwnership = true;
}
];
};
services.atticd = {
enable = true;
# one `monolithic` and any number of `api-server` nodes
mode = "monolithic";
environmentFile = "/var/lib/secrets/attic_env";
# https://github.com/zhaofengli/attic/blob/main/server/src/config-template.toml
settings = {
# Socket address to listen on
# listen = "[::]:8080";
# listen = "0.0.0.0:8080";
listen = "127.0.0.1:8080";
# Allowed `Host` headers
#
# This _must_ be configured for production use. If unconfigured or the
# list is empty, all `Host` headers are allowed.
allowed-hosts = [];
# The canonical API endpoint of this server
#
# This is the endpoint exposed to clients in `cache-config` responses.
#
# This _must_ be configured for production use. If not configured, the
# API endpoint is synthesized from the client's `Host` header which may
# be insecure.
#
# The API endpoint _must_ end with a slash (e.g., `https://domain.tld/attic/`
# not `https://domain.tld/attic`).
api-endpoint = "https://${tf.resource.hetznerdns_zone.main.name}/";
# Whether to soft-delete caches
#
# If this is enabled, caches are soft-deleted instead of actually
# removed from the database. Note that soft-deleted caches cannot
# have their names reused as long as the original database records
# are there.
#soft-delete-caches = false;
# Whether to require fully uploading a NAR if it exists in the global cache.
#
# If set to false, simply knowing the NAR hash is enough for
# an uploader to gain access to an existing NAR in the global
# cache.
#require-proof-of-possession = true;
# Database connection
database = {
# Connection URL
#
# For production use it's recommended to use PostgreSQL.
# url = "postgresql://atticd:password@127.0.0.1:5432/atticd";
url = "postgresql:///atticd?host=/run/postgresql";
# Whether to enable sending on periodic heartbeat queries
#
# If enabled, a heartbeat query will be sent every minute
#heartbeat = false;
};
# File storage configuration
storage = {
# Storage type
#
# Can be "local" or "s3".
type = "s3";
# ## Local storage
# The directory to store all files under
# path = "%storage_path%";
# ## S3 Storage (set type to "s3" and uncomment below)
# The AWS region
region = tf.resource.cloudflare_r2_bucket.atticd.location; # is this even used for R2?
# The name of the bucket
bucket = tf.resource.cloudflare_r2_bucket.atticd.name;
# Custom S3 endpoint
#
# Set this if you are using an S3-compatible object storage (e.g., Minio).
endpoint = "https://2b56368370c7a8e7f41328f0b8d4040a.r2.cloudflarestorage.com";
# Credentials
#
# If unset, the credentials are read from the `AWS_ACCESS_KEY_ID` and
# `AWS_SECRET_ACCESS_KEY` environment variables.
# storage.credentials = {
# access_key_id = "";
# secret_access_key = "";
# };
};
# Data chunking
#
# Warning: If you change any of the values here, it will be
# difficult to reuse existing chunks for newly-uploaded NARs
# since the cutpoints will be different. As a result, the
# deduplication ratio will suffer for a while after the change.
chunking = {
# The minimum NAR size to trigger chunking
#
# If 0, chunking is disabled entirely for newly-uploaded NARs.
# If 1, all NARs are chunked.
nar-size-threshold = 65536; # chunk files that are 64 KiB or larger
# The preferred minimum size of a chunk, in bytes
min-size = 16384; # 16 KiB
# The preferred average size of a chunk, in bytes
avg-size = 65536; # 64 KiB
# The preferred maximum size of a chunk, in bytes
max-size = 262144; # 256 KiB
};
# Compression
compression = {
# Compression type
#
# Can be "none", "brotli", "zstd", or "xz"
type = "zstd";
# Compression level
#level = 8;
};
# Garbage collection
garbage-collection = {
# The frequency to run garbage collection at
#
# By default it's 12 hours. You can use natural language
# to specify the interval, like "1 day".
#
# If zero, automatic garbage collection is disabled, but
# it can still be run manually with `atticd --mode garbage-collector-once`.
interval = "12 hours";
# Default retention period
#
# Zero (default) means time-based garbage-collection is
# disabled by default. You can enable it on a per-cache basis.
#default-retention-period = "6 months";
};
jwt = {
# WARNING: Changing _anything_ in this section will break any existing
# tokens. If you need to regenerate them, ensure that you use the
# correct secret and include the `iss` and `aud` claims.
# JWT `iss` claim
#
# Set this to the JWT issuer that you want to validate.
# If this is set, all received JWTs will validate that the `iss` claim
# matches this value.
#token-bound-issuer = "some-issuer";
# JWT `aud` claim
#
# Set this to the JWT audience(s) that you want to validate.
# If this is set, all received JWTs will validate that the `aud` claim
# contains at least one of these values.
#token-bound-audiences = ["some-audience1", "some-audience2"];
};
# jwt.signing = {
# # JWT RS256 secret key
# #
# # Set this to the base64-encoded private half of an RSA PEM PKCS1 key.
# # TODO
# # You can also set it via the `ATTIC_SERVER_TOKEN_RS256_SECRET_BASE64`
# # environment variable.
# token-rs256-secret-base64 = "%token_rs256_secret_base64%";
# # JWT HS256 secret key
# #
# # Set this to the base64-encoded HMAC secret key.
# # You can also set it via the `ATTIC_SERVER_TOKEN_HS256_SECRET_BASE64`
# # environment variable.
# #token-hs256-secret-base64 = "";
# };
};
};
}


@ -14,4 +14,10 @@
      gateway = "2a00:51c0:13:1305::1";
    };
  };

  nixos.module = {
    imports = [
      ../../../services/fediversity/attic
    ];
  };
}


@ -20,7 +20,7 @@ vmOptions=$(
  cd ..
  nix eval \
    --impure --raw --expr "
      builtins.toJSON (builtins.getFlake (builtins.toString ../.)).vmOptions
    " \
    --log-format raw --quiet
)


@ -25,6 +25,38 @@
    "url": null,
    "hash": "1w2gsy6qwxa5abkv8clb435237iifndcxq0s79wihqw11a5yb938"
  },
"disko": {
"type": "GitRelease",
"repository": {
"type": "GitHub",
"owner": "nix-community",
"repo": "disko"
},
"pre_releases": false,
"version_upper_bound": null,
"release_prefix": null,
"submodules": false,
"version": "v1.12.0",
"revision": "7121f74b976481bc36877abaf52adab2a178fcbe",
"url": "https://api.github.com/repos/nix-community/disko/tarball/v1.12.0",
"hash": "0wbx518d2x54yn4xh98cgm65wvj0gpy6nia6ra7ns4j63hx14fkq"
},
"flake-inputs": {
"type": "GitRelease",
"repository": {
"type": "GitHub",
"owner": "fricklerhandwerk",
"repo": "flake-inputs"
},
"pre_releases": false,
"version_upper_bound": null,
"release_prefix": null,
"submodules": false,
"version": "4.1",
"revision": "ad02792f7543754569fe2fd3d5787ee00ef40be2",
"url": "https://api.github.com/repos/fricklerhandwerk/flake-inputs/tarball/4.1",
"hash": "1j57avx2mqjnhrsgq3xl7ih8v7bdhz1kj3min6364f486ys048bm"
},
  "flake-parts": {
    "type": "Git",
    "repository": {
@ -80,6 +112,19 @@
"url": "https://api.github.com/repos/bigskysoftware/htmx/tarball/v2.0.4", "url": "https://api.github.com/repos/bigskysoftware/htmx/tarball/v2.0.4",
"hash": "1c4zm3b7ym01ijydiss4amd14mv5fbgp1n71vqjk4alc35jlnqy2" "hash": "1c4zm3b7ym01ijydiss4amd14mv5fbgp1n71vqjk4alc35jlnqy2"
}, },
"nix-templating": {
"type": "Git",
"repository": {
"type": "GitHub",
"owner": "lassulus",
"repo": "nix-templating"
},
"branch": "master",
"submodules": false,
"revision": "437fd19b727e963560980fc4026f79400c440e39",
"url": "https://github.com/lassulus/nix-templating/archive/437fd19b727e963560980fc4026f79400c440e39.tar.gz",
"hash": "000gdd9a4w6gh9lgklsb4dzchgd0fpdkxlhgvpmw0m6ssmrxivkb"
},
  "nix-unit": {
    "type": "Git",
    "repository": {
@ -105,6 +150,19 @@
"revision": "f33a4d26226c05d501b9d4d3e5e60a3a59991921", "revision": "f33a4d26226c05d501b9d4d3e5e60a3a59991921",
"url": "https://github.com/nixos/nixpkgs/archive/f33a4d26226c05d501b9d4d3e5e60a3a59991921.tar.gz", "url": "https://github.com/nixos/nixpkgs/archive/f33a4d26226c05d501b9d4d3e5e60a3a59991921.tar.gz",
"hash": "1b6dm1sn0bdpcsmxna0zzspjaixa2dald08005fry5jrbjvwafdj" "hash": "1b6dm1sn0bdpcsmxna0zzspjaixa2dald08005fry5jrbjvwafdj"
},
"vars": {
"type": "Git",
"repository": {
"type": "GitHub",
"owner": "lassulus",
"repo": "vars"
},
"branch": "main",
"submodules": false,
"revision": "856c18f0e7b95e262ac88ba9ddebf506a16fd4a5",
"url": "https://github.com/lassulus/vars/archive/856c18f0e7b95e262ac88ba9ddebf506a16fd4a5.tar.gz",
"hash": "095dmc67pf5idj4pgnibjbgfxpkm73px3sc6hylc9j0sqh3379q7"
    }
  },
  "version": 5


@ -20,8 +20,15 @@ in
  packages = [
    pkgs.npins
    manage
    # NixOps4 and its dependencies
    # FIXME: grab NixOps4 and add it here
    pkgs.nix
    pkgs.openssh
  ];

  env = {
    DEPLOYMENT_FLAKE = ../.;
    DEPLOYMENT_NAME = "test";
    NPINS_DIRECTORY = toString ../npins;
    CREDENTIALS_DIRECTORY = toString ./.credentials;
    DATABASE_URL = "sqlite:///${toString ./src}/db.sqlite3";


@ -1,18 +0,0 @@
{
lib,
pkgs,
...
}:
let
inherit (builtins) toString;
in
{
REPO_DIR = toString ../.;
# explicitly use nix, as e.g. lix does not have configurable-impure-env
BIN_PATH = lib.makeBinPath [
# explicitly use nix, as e.g. lix does not have configurable-impure-env
pkgs.nix
# nixops error maybe due to our flake git hook: executing 'git': No such file or directory
pkgs.git
];
}


@ -23,7 +23,9 @@ let
  cfg = config.services.${name};
  package = pkgs.callPackage ./package.nix { };

  environment = {
    DEPLOYMENT_FLAKE = cfg.deployment.flake;
    DEPLOYMENT_NAME = cfg.deployment.name;
    DATABASE_URL = "sqlite:////var/lib/${name}/db.sqlite3";
    USER_SETTINGS_FILE = pkgs.concatText "configuration.py" [
      ((pkgs.formats.pythonVars { }).generate "settings.py" cfg.settings)
@ -133,6 +135,34 @@ in
      type = types.attrsOf types.path;
      default = { };
    };

    nixops4Package = mkOption {
      type = types.package;
      description = ''
        A package providing NixOps4.

        TODO: This should not be at the level of the NixOS module, but instead
        at the level of the panel's package. Until one finds a way to grab
        NixOps4 from the package's npins-based code, we will have to do with
        this workaround.
      '';
    };

    deployment = {
      flake = mkOption {
        type = types.path;
        default = ../..;
        description = ''
          The path to the flake containing the deployment. This is used to run the deployment button.
        '';
      };
      name = mkOption {
        type = types.str;
        default = "test";
        description = ''
          The name of the deployment within the flake.
        '';
      };
    };
  };

  config = mkIf cfg.enable {
@ -146,7 +176,19 @@ in
      ${cfg.domain} =
        {
          locations = {
            "/" = {
              proxyPass = "http://localhost:${toString cfg.port}";
              extraConfig = ''
                ## FIXME: The following is necessary because /deployment/status
                ## can take aaaaages to respond. I think this is horrendous
                ## design from the panel and should be changed there, but in the
                ## meantime we bump nginx's timeouts to one hour.
                proxy_connect_timeout 3600;
                proxy_send_timeout 3600;
                proxy_read_timeout 3600;
                send_timeout 3600;
              '';
            };
            "/static/".alias = "/var/lib/${name}/static/";
          };
        }
@ -158,6 +200,8 @@ in
    };

    users.users.${name} = {
      # TODO[Niols]: change to system user or document why we specifically
      # need a normal user.
      isNormalUser = true;
    };
@ -169,6 +213,11 @@ in
      path = [
        python-environment
        manage-service
        ## NixOps4 and its dependencies
        cfg.nixops4Package
        pkgs.nix
        pkgs.openssh
      ];

      preStart = ''
        # Auto-migrate on first run or if the package has changed


@ -13,6 +13,7 @@ let
  secrets = {
    SECRET_KEY = pkgs.writeText "SECRET_KEY" "secret";
  };
  nixops4Package = pkgs.hello; # FIXME: actually pass NixOps4
};

virtualisation = {

@ -42,6 +42,7 @@ def get_secret(name: str, encoding: str = "utf-8") -> str:
    return secret

# SECURITY WARNING: keep the secret key used in production secret!
# This is used nowhere but is required by Django.
SECRET_KEY = get_secret("SECRET_KEY")

# SECURITY WARNING: don't run with debug turned on in production!
@ -240,8 +241,7 @@ if user_settings_file is not None:
# The correct thing to do here would be using a helper function such as `get_secret()` that will catch the exception and explain what's wrong and where to put the right values.
# Replacing the `USER_SETTINGS_FILE` mechanism following the comment there would probably be a good thing.

# Path of the root flake to trigger nixops from, see #94, and name of the
# deployment.
deployment_flake = env["DEPLOYMENT_FLAKE"]
deployment_name = env["DEPLOYMENT_NAME"]
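For context, the `get_secret` pattern used above reads secrets from files in a credentials directory (as provided by systemd's `LoadCredential`, or the dev shell's `CREDENTIALS_DIRECTORY`). A minimal sketch of that idea — an illustration under those assumptions, not the panel's actual implementation:

```python
import os
import tempfile

def get_secret(name: str, credentials_dir: str, encoding: str = "utf-8") -> str:
    """Read a secret from the credentials directory, stripping the
    trailing newline that editors and generators usually append."""
    with open(os.path.join(credentials_dir, name), encoding=encoding) as f:
        return f.read().rstrip("\n")

# Illustration with a throwaway directory standing in for CREDENTIALS_DIRECTORY:
creds = tempfile.mkdtemp()
with open(os.path.join(creds, "SECRET_KEY"), "w") as f:
    f.write("secret\n")
print(get_secret("SECRET_KEY", creds))  # → secret
```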


@ -1,9 +1,10 @@
from django.test import TestCase
from django.urls import reverse
from django.contrib.auth.models import User
from urllib.parse import unquote

from panel.templatetags.custom_tags import auth_url


class Login(TestCase):
    def setUp(self):
        self.username = 'testuser'
@ -27,8 +28,7 @@ class Login(TestCase):
        # check that the expected login URL is in the response
        context = response.context[0]
        login_url = auth_url(context, 'login')
        self.assertIn(login_url, response.content.decode('utf-8'))

        # log in
@ -49,8 +49,7 @@ class Login(TestCase):
        # check that the expected logout URL is present
        context = response.context[0]
        logout_url = auth_url(context, 'logout')
        self.assertIn(logout_url, response.content.decode('utf-8'))

        # log out again
@ -88,8 +87,7 @@ class Login(TestCase):
        # check that the expected logout URL is present
        context = response.context[0]
        logout_url = auth_url(context, 'logout')
        self.assertIn(logout_url, response.content.decode('utf-8'))

        # log out
@ -97,8 +95,7 @@ class Login(TestCase):
        # check that we're at the expected location, logged out
        self.assertEqual(response.status_code, 200)
        login_url = auth_url(context, 'login')
        location, status = response.redirect_chain[-1]
        self.assertEqual(location, unquote(login_url))
        self.assertFalse(response.context['user'].is_authenticated)


@ -65,9 +65,6 @@ class ConfigurationForm(LoginRequiredMixin, APIView):
        config.save()
        return redirect(self.success_url)


class DeploymentStatus(ConfigurationForm):
    def post(self, request):
@ -84,44 +81,29 @@ class DeploymentStatus(ConfigurationForm):
        config.save()

        deployment_result, deployment_params = self.deployment(config.parsed_value)
        return render(self.request, "partials/deployment_result.html", {
            "deployment_succeeded": (deployment_result.returncode == 0),
            "services": deployment_params.json(),
        })

    def deployment(self, config: BaseModel):
        env = {
            "PATH": os.environ.get("PATH"),
            # pass in form info to our deployment
            "DEPLOYMENT": config.json()
        }
        cmd = [
            "nixops4",
            "apply",
            settings.deployment_name,
            "--show-trace",
            "--no-interactive",
        ]
        deployment_result = subprocess.run(
            cmd,
            cwd = settings.deployment_flake,
            env = env,
            stderr = subprocess.STDOUT,
        )
        return deployment_result, config
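The simplified `deployment` method above just shells out with the inherited `PATH` and the serialized form data in a `DEPLOYMENT` environment variable, then treats a zero exit code as success. The same pattern in isolation (with a stand-in command, since `nixops4` itself is not assumed available here):

```python
import os
import subprocess

def run_deployment(cmd, flake_dir, deployment_json):
    """Run a deployment command in the flake directory, passing the
    configuration through the DEPLOYMENT environment variable."""
    env = {
        "PATH": os.environ.get("PATH", ""),
        "DEPLOYMENT": deployment_json,
    }
    result = subprocess.run(
        cmd,
        cwd=flake_dir,
        env=env,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # interleave stderr, like the panel does
    )
    return result.returncode == 0

# Stand-in for `nixops4 apply <name>`:
ok = run_deployment(["sh", "-c", "echo $DEPLOYMENT"], ".", '{"mastodon": true}')
```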


@ -0,0 +1,298 @@
{
lib,
pkgs,
config,
sources,
...
}:
let
inherit (lib) mkIf mkMerge;
inherit
(import "${sources.nix-templating}/lib.nix" {
inherit pkgs lib;
nix_templater = pkgs.callPackage "${sources.nix-templating}/pkgs/nix_templater" { };
})
fileContents
template
;
in
{
imports = with sources; [
./options.nix
"${vars}/options.nix"
"${vars}/backends/on-machine.nix"
];
config = mkMerge [
(mkIf
(
config.fediversity.garage.enable
&& config.fediversity.attic.s3AccessKeyFile != null
&& config.fediversity.attic.s3SecretKeyFile != null
)
{
fediversity.garage = {
ensureBuckets = {
attic = {
website = true;
# TODO: these are too broad, after getting everything to work narrow it down to the domain we actually want
corsRules = {
enable = true;
allowedHeaders = [ "*" ];
allowedMethods = [ "GET" ];
allowedOrigins = [ "*" ];
};
};
};
ensureKeys = {
attic = {
inherit (config.fediversity.attic) s3AccessKeyFile s3SecretKeyFile;
ensureAccess = {
peertube-videos = {
read = true;
write = true;
owner = true;
};
peertube-playlists = {
read = true;
write = true;
owner = true;
};
};
};
};
};
}
)
(mkIf config.fediversity.attic.enable {
services.postgresql = {
enable = true;
authentication = lib.mkForce ''
local all all trust
'';
ensureDatabases = [
"atticd"
];
ensureUsers = [
{
name = "atticd";
ensureDBOwnership = true;
}
];
};
# open up access to the attic server; 80 is necessary if only for ACME
networking.firewall.allowedTCPPorts = [
80
443
8080
];
vars.generators.attic = {
runtimeInputs = [ pkgs.openssl ];
files.token.secret = true;
script = ''
openssl genrsa -traditional 4096 | base64 -w0 > $out/token
'';
};
services.atticd = {
enable = true;
# one `monolithic` and any number of `api-server` nodes
mode = "monolithic";
environmentFile = "${
template {
name = "attic.env";
outPath = "./attic.env";
text = ''
ATTIC_SERVER_TOKEN_RS256_SECRET_BASE64=${fileContents config.vars.generators.attic.files.token.path}
AWS_ACCESS_KEY_ID=${config.fediversity.attic.ensureKeys.mastodon.id}
AWS_SECRET_ACCESS_KEY=${config.fediversity.attic.ensureKeys.mastodon.secret}
'';
}
}/bin/attic.env";
# https://github.com/zhaofengli/attic/blob/main/server/src/config-template.toml
settings = {
# Socket address to listen on
# listen = "[::]:8080";
listen = "0.0.0.0:8080";
# listen = "127.0.0.1:8080";
# Allowed `Host` headers
#
# This _must_ be configured for production use. If unconfigured or the
# list is empty, all `Host` headers are allowed.
allowed-hosts = [ ];
# The canonical API endpoint of this server
#
# This is the endpoint exposed to clients in `cache-config` responses.
#
# This _must_ be configured for production use. If not configured, the
# API endpoint is synthesized from the client's `Host` header which may
# be insecure.
#
# The API endpoint _must_ end with a slash (e.g., `https://domain.tld/attic/`
# not `https://domain.tld/attic`).
api-endpoint = "https://${config.fediversity.attic.domain}/";
# Whether to soft-delete caches
#
# If this is enabled, caches are soft-deleted instead of actually
# removed from the database. Note that soft-deleted caches cannot
# have their names reused as long as the original database records
# are there.
#soft-delete-caches = false;
# Whether to require fully uploading a NAR if it exists in the global cache.
#
# If set to false, simply knowing the NAR hash is enough for
# an uploader to gain access to an existing NAR in the global
# cache.
#require-proof-of-possession = true;
# Database connection
database = {
# Connection URL
#
# For production use it's recommended to use PostgreSQL.
# url = "postgresql://atticd:password@127.0.0.1:5432/atticd";
url = "postgresql:///atticd?host=/run/postgresql";
# Whether to enable sending on periodic heartbeat queries
#
# If enabled, a heartbeat query will be sent every minute
#heartbeat = false;
};
# File storage configuration
storage = {
# Storage type
#
# Can be "local" or "s3".
type = "s3";
# ## Local storage
# The directory to store all files under
# path = "%storage_path%";
# ## S3 Storage (set type to "s3" and uncomment below)
# The AWS region
region = "garage";
# The name of the bucket
bucket = "attic";
# Custom S3 endpoint
#
# Set this if you are using an S3-compatible object storage (e.g., Minio).
endpoint = config.fediversity.garage.api.url;
# Credentials
#
# If unset, the credentials are read from the `AWS_ACCESS_KEY_ID` and
# `AWS_SECRET_ACCESS_KEY` environment variables.
# storage.credentials = {
# access_key_id = "";
# secret_access_key = "";
# };
};
# Data chunking
#
# Warning: If you change any of the values here, it will be
# difficult to reuse existing chunks for newly-uploaded NARs
# since the cutpoints will be different. As a result, the
# deduplication ratio will suffer for a while after the change.
chunking = {
# The minimum NAR size to trigger chunking
#
# If 0, chunking is disabled entirely for newly-uploaded NARs.
# If 1, all NARs are chunked.
nar-size-threshold = 65536; # chunk files that are 64 KiB or larger
# The preferred minimum size of a chunk, in bytes
min-size = 16384; # 16 KiB
# The preferred average size of a chunk, in bytes
avg-size = 65536; # 64 KiB
# The preferred maximum size of a chunk, in bytes
max-size = 262144; # 256 KiB
};
# Compression
compression = {
# Compression type
#
# Can be "none", "brotli", "zstd", or "xz"
type = "zstd";
# Compression level
#level = 8;
};
# Garbage collection
garbage-collection = {
# The frequency to run garbage collection at
#
# By default it's 12 hours. You can use natural language
# to specify the interval, like "1 day".
#
# If zero, automatic garbage collection is disabled, but
# it can still be run manually with `atticd --mode garbage-collector-once`.
interval = "12 hours";
# Default retention period
#
# Zero (default) means time-based garbage-collection is
# disabled by default. You can enable it on a per-cache basis.
#default-retention-period = "6 months";
};
jwt = {
# WARNING: Changing _anything_ in this section will break any existing
# tokens. If you need to regenerate them, ensure that you use the
# correct secret and include the `iss` and `aud` claims.
# JWT `iss` claim
#
# Set this to the JWT issuer that you want to validate.
# If this is set, all received JWTs will validate that the `iss` claim
# matches this value.
#token-bound-issuer = "some-issuer";
# JWT `aud` claim
#
# Set this to the JWT audience(s) that you want to validate.
# If this is set, all received JWTs will validate that the `aud` claim
# contains at least one of these values.
#token-bound-audiences = ["some-audience1", "some-audience2"];
};
# jwt.signing = {
# # JWT RS256 secret key
# #
# # Set this to the base64-encoded private half of an RSA PEM PKCS1 key.
# # TODO
# # You can also set it via the `ATTIC_SERVER_TOKEN_RS256_SECRET_BASE64`
# # environment variable.
# token-rs256-secret-base64 = "%token_rs256_secret_base64%";
# # JWT HS256 secret key
# #
# # Set this to the base64-encoded HMAC secret key.
# # You can also set it via the `ATTIC_SERVER_TOKEN_HS256_SECRET_BASE64`
# # environment variable.
# #token-hs256-secret-base64 = "";
# };
};
};
})
];
}


@ -0,0 +1,14 @@
{ config, lib, ... }:
{
options.fediversity.attic =
(import ../sharedOptions.nix {
inherit config lib;
serviceName = "attic";
serviceDocName = "Attic Nix Cache server";
})
//
{
};
}


@ -56,12 +56,6 @@ in
    )

    (mkIf config.fediversity.pixelfed.enable {
      users.users.nginx.extraGroups = [ "pixelfed" ];

      services.pixelfed = {


@ -1,18 +0,0 @@
diff --git a/config/filesystems.php b/config/filesystems.php
index 00254e93..fc1a58f3 100644
--- a/config/filesystems.php
+++ b/config/filesystems.php
@@ -49,11 +49,11 @@ return [
'permissions' => [
'file' => [
'public' => 0644,
- 'private' => 0600,
+ 'private' => 0640,
],
'dir' => [
'public' => 0755,
- 'private' => 0700,
+ 'private' => 0750,
],
],
],