Compare commits


37 commits

Author SHA1 Message Date
0599a33916 fix deployment evaluation 2025-07-28 17:19:34 +02:00
312ca7a48b fix resource mapping for the shell application 2025-07-28 13:36:08 +02:00
8c571c2dbe WIP: get past dumb type error 2025-07-25 12:03:58 +02:00
a102ad93b7 test that login-shell resource is mapped by hello application 2025-07-25 11:53:24 +02:00
89b22df3a4 WIP(broken, infinite recursion): apply application config 2025-07-24 20:08:29 +02:00
b49e426ed4 test application config 2025-07-24 19:26:05 +02:00
421d7b6f93 generalize function type to take types 2025-07-24 18:59:19 +02:00
bf488f89e1 readability 2025-07-22 12:43:32 +02:00
7066b2cb69 use mapAttrs right, again 2025-07-22 12:42:08 +02:00
243ef4425b WIP: more type-safe policy application 2025-07-22 12:38:50 +02:00
0f7da57392 use submodule to turn module into type for functionTo 2025-07-22 10:57:32 +02:00
bb93d2d0de use mapAttrs right
`mapAttrs'` takes two args rather than a set, whereas if only the value
changes, `mapAttrs (_: v: ...)` should do
2025-07-22 10:54:43 +02:00
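As a rough sketch of the distinction (illustrative only, not part of this commit; `attrs` is made up, `lib` is nixpkgs' library):

```nix
let
  lib = (import <nixpkgs> { }).lib;
  attrs = { a = 1; b = 2; };
in
{
  # mapAttrs keeps the attribute names and only rewrites the values.
  values-only = lib.mapAttrs (_: v: v * 10) attrs;
  # => { a = 10; b = 20; }

  # mapAttrs' must return a { name, value } pair, so it can also rename keys.
  renamed = lib.mapAttrs' (name: v: lib.nameValuePair "x-${name}" (v * 10)) attrs;
  # => { x-a = 10; x-b = 20; }
}
```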
b25ddac298 fix typos, lint, format 2025-07-22 10:54:27 +02:00
ba047997f2 WIP: illustrate an entire NixOS configuration as a resource 2025-07-03 13:08:14 +02:00
0c592d81f3 WIP: (broken) implement test 2025-07-02 03:39:36 +02:00
f8d1be9f6e WIP: implement mappings 2025-07-02 01:20:35 +02:00
7a667c7517 WIP: start writing an evaluation test
turns out we also need a collection of configurations, obviously
next: figure out where to wire everything up to obtain a deployment
2025-07-01 23:59:16 +02:00
5c97e35970 WIP: add missing types 2025-07-01 22:07:42 +02:00
3ec853a32a WIP: implement data model as in diagram
this doesn't update the tests yet because we don't have all the data
types in place yet anyway, and I still need to come up with testable examples.
2025-07-01 17:55:46 +02:00
c764c0f7b6 better reflect naming from the configuration data flow diagram 2025-06-30 14:20:21 +02:00
34529a7de4 data model: migration 2025-06-23 19:22:47 +02:00
6c2022d064 data model: deployment 2025-06-23 16:35:11 +02:00
f51462afc9 data model: runtime environment
allows declaring options so instantiations may configure required
settings
2025-06-23 16:35:04 +02:00
fefcd93bc1 grant run-time environments their own modules with their own description 2025-06-23 11:25:18 +02:00
c1f3aa6aed have run-time environments use their corresponding run-time configurations 2025-06-23 09:34:59 +02:00
8b2ee21dbe data model: add run-time configuration 2025-06-23 09:06:52 +02:00
486b316885 run updater natively (#394)
see Fediversity/Fediversity#65 (comment).

closes #65.

Reviewed-on: Fediversity/Fediversity#394
Co-authored-by: Kiara Grouwstra <kiara@procolix.eu>
Co-committed-by: Kiara Grouwstra <kiara@procolix.eu>
2025-06-20 09:41:38 +02:00
611c961dcf separate test declarations from invocations (#396)
see Fediversity/Fediversity#395 (comment)

Reviewed-on: Fediversity/Fediversity#396
Reviewed-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-authored-by: Kiara Grouwstra <kiara@procolix.eu>
Co-committed-by: Kiara Grouwstra <kiara@procolix.eu>
2025-06-19 18:11:08 +02:00
d67f533948 fix running nixops4 apply test (#391)
Closes #390

Reviewed-on: Fediversity/Fediversity#391
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-committed-by: Valentin Gagarin <valentin.gagarin@tweag.io>
2025-06-19 08:26:20 +02:00
bd1cfd7a7c Introduce test for deploying all services via FediPanel (#361)
Closes #277

Same as #329 but where we run the FediPanel and interact with it via a browser
instead of running NixOps4 directly.

Reviewed-on: Fediversity/Fediversity#361
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Reviewed-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2025-06-18 12:37:47 +02:00
939f9d961d add data model entity: application (#387)
part of #103.

Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Reviewed-on: Fediversity/Fediversity#387
Co-authored-by: Kiara Grouwstra <kiara@procolix.eu>
Co-committed-by: Kiara Grouwstra <kiara@procolix.eu>
2025-06-17 17:11:52 +02:00
4801433ae0 Get rid of the need for deployer.pub (#385)
The tests still work because we manually write the deployer's public key in `/root/.ssh/authorized_keys` on the target machines. In itself, however, the configuration that we push does not allow the deployer to push anything to the target machines.

Context: Fediversity/Fediversity#361 (comment)
Reviewed-on: Fediversity/Fediversity#385
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2025-06-17 16:34:29 +02:00
3a3a083793 FediPanel: allow configuring flake and deployment (#376)
Last part of #361.

Builds on top of #375.

Reviewed-on: Fediversity/Fediversity#376
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2025-06-15 16:55:19 +02:00
ace56e754e FediPanel: do not call nix develop (#375)
Yet another piece of #361.

Reviewed-on: Fediversity/Fediversity#375
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2025-06-15 15:06:23 +02:00
dbb4ce67fc move machines to reflect a semantic structure (#367)
later we may want to distinguish dev vs host as well, though eventually we expect not to have hard-coded machines anyway.

split off from #319.

Reviewed-on: Fediversity/Fediversity#367
Co-authored-by: Kiara Grouwstra <kiara@procolix.eu>
Co-committed-by: Kiara Grouwstra <kiara@procolix.eu>
2025-06-15 15:01:56 +02:00
5a514b96e9 use deployed environment for launching nixops4 from the panel 2025-06-13 16:39:34 +02:00
1b832c1f5b bypass native flake input for Nixpkgs (#374)
@Niols the sheer amount of hassle and noise indicates that it may be better to first split out a `flake.nix` just for the tests. And all this clutter doesn't even explain yet *why* we thought it needs to be there.

closes #279.

Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Reviewed-on: Fediversity/Fediversity#374
Reviewed-by: kiara Grouwstra <kiara@procolix.eu>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-committed-by: Valentin Gagarin <valentin.gagarin@tweag.io>
2025-06-12 13:05:11 +02:00
73 changed files with 1160 additions and 161 deletions


@ -15,6 +15,12 @@ jobs:
- uses: actions/checkout@v4
- run: nix-build -A tests
check-data-model:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix-shell --run 'nix-unit ./deployment/data-model-test.nix'
check-peertube:
runs-on: native
steps:
@ -38,3 +44,9 @@ jobs:
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-cli -L
check-deployment-panel:
runs-on: native
steps:
- uses: actions/checkout@v4
- run: nix build .#checks.x86_64-linux.deployment-panel -L


@ -7,15 +7,16 @@ on:
jobs:
lockfile:
runs-on: ubuntu-latest
runs-on: native
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v31
- name: Install npins
run: nix profile install 'nixpkgs#npins'
- name: Update npins sources
uses: getchoo/update-npins@v0
run: nix-shell --run "npins update"
- name: Create PR
uses: peter-evans/create-pull-request@v7
with:
token: "${{ secrets.DEPLOY_KEY }}"
branch: npins-update
commit-message: "npins: update sources"
title: "npins: update sources"


@ -154,6 +154,3 @@ details as to what they are for. As an overview:
- [`services/`](./services) contains our effort to make Fediverse applications
work seamlessly together in our specific setting.
- [`website/`](./website) contains the framework and the content of [the
Fediversity website](https://fediversity.eu/)


@ -9,6 +9,8 @@ let
git-hooks
gitignore
;
inherit (import sources.flake-inputs) import-flake;
inputs = (import-flake { src = ./.; }).inputs;
inherit (pkgs) lib;
pre-commit-check =
(import "${git-hooks}/nix" {
@ -41,6 +43,23 @@ in
shell = pkgs.mkShellNoCC {
inherit (pre-commit-check) shellHook;
buildInputs = pre-commit-check.enabledPackages;
packages =
let
test-loop = pkgs.writeShellApplication {
name = "test-loop";
runtimeInputs = [
pkgs.watchexec
pkgs.nix-unit
];
text = ''
watchexec -w ${builtins.toString ./.} -- nix-unit ${builtins.toString ./deployment/data-model-test.nix} "$@"
'';
};
in
[
pkgs.nix-unit
test-loop
];
};
tests = {
@ -50,6 +69,7 @@ in
# re-export inputs so they can be overridden granularly
# (they can't be accessed from the outside any other way)
inherit
inputs
sources
system
pkgs


@ -3,6 +3,13 @@
This directory contains work to generate a full Fediversity deployment from a minimal configuration.
This is different from [`../services/`](../services) that focuses on one machine, providing a polished and unified interface to different Fediverse services.
## Data model
The core piece of the project is the [Fediversity data model](./data-model.nix), which describes all entities and their interactions.
What can be done with it is exemplified in the [evaluation tests](./data-model-test.nix).
Run `test-loop` in the development environment when hacking on the data model or adding tests.
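For example (a sketch; both commands assume the repository root — the one-off invocation is what the `check-data-model` CI job runs, and `test-loop` is defined in the development shell):

```
# run the evaluation tests once
nix-shell --run 'nix-unit ./deployment/data-model-test.nix'

# rerun them on every file change
nix-shell --run test-loop
```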
## Checks
There are three levels of deployment checks: `basic`, `cli`, `panel`.
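Each can be built individually; the attribute names below follow the CI workflow above, with `deployment-basic` assumed by analogy to the `cli` and `panel` jobs:

```
nix build .#checks.x86_64-linux.deployment-basic -L
nix build .#checks.x86_64-linux.deployment-cli -L
nix build .#checks.x86_64-linux.deployment-panel -L
```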
@ -109,8 +116,8 @@ flowchart LR
target_machines -->|get certs| acme
```
### [WIP] Service deployment check from the panel
### Service deployment check from the FediPanel
This is a full deployment check running the panel on the deployer machine, deploying some services through the panel and checking that they are indeed on the target machines, then cleaning them up and checking whether that works, too.
This is a full deployment check running the [FediPanel](../panel) on the deployer machine, deploying some services through it and checking that they are indeed on the target machines, then cleaning them up and checking whether that works, too.
It builds upon the basic and CLI deployment checks.
It builds upon the basic and CLI deployment checks, the only difference being that `deployer` runs NixOps4 only indirectly via the panel, and the `client` node is the one that triggers the deployment via a browser, the way a human would.


@ -10,6 +10,12 @@
inputs.nixops4.packages.${pkgs.system}.default
];
# FIXME: sad times
system.extraDependencies = with pkgs; [
jq
jq.inputDerivation
];
system.extraDependenciesFromModule =
{ pkgs, ... }:
{


@ -1 +0,0 @@
## This is a placeholder file. It will be overwritten by the test.


@ -14,6 +14,8 @@ let
types
;
sources = import ../../../npins;
in
{
imports = [ ./sharedOptions.nix ];
@ -57,6 +59,8 @@ in
"${inputs.nixops4-nixos}"
"${inputs.nixpkgs}"
"${sources.flake-inputs}"
pkgs.stdenv
pkgs.stdenvNoCC
]


@ -119,7 +119,6 @@ in
with subtest("Configure the deployer key"):
deployer.succeed("""mkdir -p ~/.ssh && ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa""")
deployer_key = deployer.succeed("cat ~/.ssh/id_rsa.pub").strip()
deployer.succeed(f"echo '{deployer_key}' > ${config.pathFromRoot}/deployer.pub")
${forConcat config.targetMachines (tm: ''
${tm}.succeed(f"mkdir -p /root/.ssh && echo '{deployer_key}' >> /root/.ssh/authorized_keys")
'')}


@ -0,0 +1,91 @@
{
self,
inputs,
lib,
...
}:
let
inherit (builtins)
fromJSON
listToAttrs
;
targetMachines = [
"garage"
"mastodon"
"peertube"
"pixelfed"
];
pathToRoot = /. + (builtins.unsafeDiscardStringContext self);
pathFromRoot = ./.;
enableAcme = true;
in
{
perSystem =
{ pkgs, ... }:
{
checks.deployment-panel = pkgs.testers.runNixOSTest {
imports = [
../common/nixosTest.nix
./nixosTest.nix
];
_module.args.inputs = inputs;
inherit
targetMachines
pathToRoot
pathFromRoot
enableAcme
;
};
};
nixops4Deployments =
let
makeTargetResource = nodeName: {
imports = [ ../common/targetResource.nix ];
_module.args.inputs = inputs;
inherit
nodeName
pathToRoot
pathFromRoot
enableAcme
;
};
## The deployment function - what we are here to test!
##
## TODO: Modularise `deployment/default.nix` to get rid of the nested
## function calls.
makeTestDeployment =
args:
(import ../..)
{
inherit lib;
inherit (inputs) nixops4 nixops4-nixos;
fediversity = import ../../../services/fediversity;
}
(listToAttrs (
map (nodeName: {
name = "${nodeName}ConfigurationResource";
value = makeTargetResource nodeName;
}) targetMachines
))
args;
in
{
check-deployment-panel = makeTestDeployment (
fromJSON (
let
env = builtins.getEnv "DEPLOYMENT";
in
if env == "" then
throw "The DEPLOYMENT environment needs to be set. You do not want to use this deployment unless in the `deployment-panel` NixOS test."
else
env
)
);
};
}


@ -0,0 +1,362 @@
{
inputs,
lib,
hostPkgs,
config,
...
}:
let
inherit (lib)
getExe
;
## Some places need a dummy file that will in fact never be used. We create
## it here.
dummyFile = hostPkgs.writeText "dummy" "dummy";
panelPort = 8000;
panelUser = "test";
panelEmail = "test@test.com";
panelPassword = "ouiprdaaa43"; # panel's manager complains if too close to username or email
fediUser = "test";
fediEmail = "test@test.com";
fediPassword = "testtest";
fediName = "Testy McTestface";
toPythonBool = b: if b then "True" else "False";
interactWithPanel =
{
baseUri,
enableMastodon,
enablePeertube,
enablePixelfed,
}:
hostPkgs.writers.writePython3Bin "interact-with-panel"
{
libraries = with hostPkgs.python3Packages; [ selenium ];
flakeIgnore = [
"E302" # expected 2 blank lines, found 0
"E303" # too many blank lines
"E305" # expected 2 blank lines after end of function or class
"E501" # line too long (> 79 characters)
"E731" # do not assign lambda expression, use a def
];
}
''
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
print("Create and configure driver...")
options = Options()
options.add_argument("--headless")
options.binary_location = "${getExe hostPkgs.firefox-unwrapped}"
service = webdriver.FirefoxService(executable_path="${getExe hostPkgs.geckodriver}")
driver = webdriver.Firefox(options=options, service=service)
driver.set_window_size(1280, 960)
driver.implicitly_wait(360)
driver.command_executor.set_timeout(3600)
print("Open login page...")
driver.get("${baseUri}/login/")
print("Enter username...")
driver.find_element(By.XPATH, "//input[@name = 'username']").send_keys("${panelUser}")
print("Enter password...")
driver.find_element(By.XPATH, "//input[@name = 'password']").send_keys("${panelPassword}")
print("Click Login button...")
driver.find_element(By.XPATH, "//button[normalize-space() = 'Login']").click()
print("Open configuration page...")
driver.get("${baseUri}/configuration/")
# Helpers to actually set and not add or switch input values.
def input_set(elt, keys):
elt.clear()
elt.send_keys(keys)
def checkbox_set(elt, new_value):
if new_value != elt.is_selected():
elt.click()
print("Enable Fediversity...")
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'enable']"), True)
print("Fill in initialUser info...")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.username']"), "${fediUser}")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.password']"), "${fediPassword}")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.email']"), "${fediEmail}")
input_set(driver.find_element(By.XPATH, "//input[@name = 'initialUser.displayName']"), "${fediName}")
print("Enable services...")
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'mastodon.enable']"), ${toPythonBool enableMastodon})
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'peertube.enable']"), ${toPythonBool enablePeertube})
checkbox_set(driver.find_element(By.XPATH, "//input[@name = 'pixelfed.enable']"), ${toPythonBool enablePixelfed})
print("Start deployment...")
driver.find_element(By.XPATH, "//button[@id = 'deploy-button']").click()
print("Wait for deployment status to show up...")
get_deployment_result = lambda d: d.find_element(By.XPATH, "//div[@id = 'deployment-result']//p")
WebDriverWait(driver, timeout=3660, poll_frequency=10).until(get_deployment_result)
deployment_result = get_deployment_result(driver).get_attribute('innerHTML')
print("Quit...")
driver.quit()
match deployment_result:
case 'Deployment Succeeded':
print("Deployment has succeeded; exiting normally")
exit(0)
case 'Deployment Failed':
print("Deployment has failed; exiting with return code `1`")
exit(1)
case _:
print(f"Unexpected deployment result: {deployment_result}; exiting with return code `2`")
exit(2)
'';
in
{
name = "deployment-panel";
## The panel's module sets `nixpkgs.overlays` which clashes with
## `pkgsReadOnly`. We disable it here.
node.pkgsReadOnly = false;
nodes.deployer =
{ pkgs, ... }:
{
imports = [
(import ../../../panel { }).module
];
## FIXME: This should be in the common stuff.
security.acme = {
acceptTerms = true;
defaults.email = "test@test.com";
defaults.server = "https://acme.test/dir";
};
security.pki.certificateFiles = [
(import "${inputs.nixpkgs}/nixos/tests/common/acme/server/snakeoil-certs.nix").ca.cert
];
networking.extraHosts = "${config.acmeNodeIP} acme.test";
services.panel = {
enable = true;
production = true;
domain = "deployer";
secrets = {
SECRET_KEY = dummyFile;
};
port = panelPort;
nixops4Package = inputs.nixops4.packages.${pkgs.system}.default;
deployment = {
flake = "/run/fedipanel/flake";
name = "check-deployment-panel";
};
};
environment.systemPackages = [ pkgs.expect ];
## FIXME: The following dependencies are necessary but I do not
## understand why they are not covered by the fake node.
system.extraDependencies = with pkgs; [
peertube
peertube.inputDerivation
gixy # a configuration checker for nginx
gixy.inputDerivation
];
system.extraDependenciesFromModule = {
imports = [ ../../../services/fediversity ];
fediversity = {
domain = "fediversity.net"; # would write `dummy` but that would not type
garage.enable = true;
mastodon = {
enable = true;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
peertube = {
enable = true;
secretsFile = dummyFile;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
pixelfed = {
enable = true;
s3AccessKeyFile = dummyFile;
s3SecretKeyFile = dummyFile;
};
temp.cores = 1;
temp.initialUser = {
username = "dummy";
displayName = "dummy";
email = "dummy";
passwordFile = dummyFile;
};
};
};
};
nodes.client =
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
httpie
dnsutils # for `dig`
openssl
cacert
wget
python3
python3Packages.selenium
firefox-unwrapped
geckodriver
];
security.pki.certificateFiles = [
config.nodes.acme.test-support.acme.caCert
];
networking.extraHosts = "${config.acmeNodeIP} acme.test";
};
## NOTE: The target machines may need more RAM than the default to handle
## being deployed to, otherwise we get something like:
##
## pixelfed # [ 616.785499 ] sshd-session[1167]: Connection closed by 2001:db8:1::2 port 45004
## deployer # error: writing to file: No space left on device
## pixelfed # [ 616.788538 ] sshd-session[1151]: pam_unix(sshd:session): session closed for user port
## pixelfed # [ 616.793929 ] systemd-logind[719]: Session 4 logged out. Waiting for processes to exit.
## deployer # Error: Could not create resource
##
## These values have been trimmed down to the gigabyte.
nodes.mastodon.virtualisation.memorySize = 4 * 1024;
nodes.pixelfed.virtualisation.memorySize = 4 * 1024;
nodes.peertube.virtualisation.memorySize = 5 * 1024;
## FIXME: The tests for the presence of the services are very simple: we only
## check that there is a systemd service of the expected name on the
## machine. This proves at least that NixOps4 did something, and we cannot
## really do more for now because the services aren't actually working
## properly, in particular because of DNS issues. We should fix the services
## and check that they are working properly.
extraTestScript = ''
## TODO: We want a nicer way to control where the FediPanel consumes its
## flake, which can default to the store but could also be somewhere else if
## someone wanted to change the code of the flake.
##
with subtest("Give the panel access to the flake"):
deployer.succeed("mkdir /run/fedipanel /run/fedipanel/flake >&2")
deployer.succeed("cp -R . /run/fedipanel/flake >&2")
deployer.succeed("chown -R panel:panel /run/fedipanel >&2")
## TODO: I want a programmatic way to provide an SSH key to the panel (and
## therefore NixOps4). This should happen either in the Python code (though
## maybe it is fair that that one picks up on the user's key?) or in the
## Nix packaging.
##
with subtest("Set up the panel's SSH keys"):
deployer.succeed("mkdir /home/panel/.ssh >&2")
deployer.succeed("cp -R /root/.ssh/* /home/panel/.ssh >&2")
deployer.succeed("chown -R panel:panel /home/panel/.ssh >&2")
deployer.succeed("chmod 600 /home/panel/.ssh/* >&2")
## TODO: This is a hack to accept the root CA used by Pebble on the client
## machine. Pebble randomizes everything, so the only way to get it is to
## call the /roots/0 endpoint at runtime, leaving not much margin for a nice
## Nixy way of adding the certificate. There is no way around it as this is
## by design in Pebble, showing in fact that Pebble was not the appropriate
## tool for our use and that nixpkgs does not in fact provide an easy way to
## generate _usable_ certificates in NixOS tests. I suggest we merge this,
## and track the task to set it up in a cleaner way. I would tackle this in
## a subsequent PR, and hopefully even contribute this BetterWay(tm) to
## nixpkgs. — Niols
##
with subtest("Set up ACME root CA on client"):
client.succeed("""
cd /etc/ssl/certs
curl -o pebble-root-ca.pem https://acme.test:15000/roots/0
curl -o pebble-intermediate-ca.pem https://acme.test:15000/intermediates/0
{ cat ca-bundle.crt
cat pebble-root-ca.pem
cat pebble-intermediate-ca.pem
} > new-ca-bundle.crt
rm ca-bundle.crt ca-certificates.crt
mv new-ca-bundle.crt ca-bundle.crt
ln -s ca-bundle.crt ca-certificates.crt
""")
## TODO: I would hope for a more declarative way to add users. This should
## be handled by the Nix packaging of the FediPanel. — Niols
##
with subtest("Create panel user"):
deployer.succeed("""
expect -c '
spawn manage createsuperuser --username ${panelUser} --email ${panelEmail}
expect "Password: "; send "${panelPassword}\\n";
expect "Password (again): "; send "${panelPassword}\\n"
interact
' >&2
""")
with subtest("Check the status of the services - there should be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with no services enabled"):
client.succeed("${
interactWithPanel {
baseUri = "https://deployer";
enableMastodon = false;
enablePeertube = false;
enablePixelfed = false;
}
}/bin/interact-with-panel >&2")
with subtest("Check the status of the services - there should still be none"):
garage.fail("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with Mastodon and Pixelfed enabled"):
client.succeed("${
interactWithPanel {
baseUri = "https://deployer";
enableMastodon = true;
enablePeertube = false;
enablePixelfed = true;
}
}/bin/interact-with-panel >&2")
with subtest("Check the status of the services - expecting Garage, Mastodon and Pixelfed"):
garage.succeed("systemctl status garage.service")
mastodon.succeed("systemctl status mastodon-web.service")
peertube.fail("systemctl status peertube.service")
pixelfed.succeed("systemctl status phpfpm-pixelfed.service")
with subtest("Run deployment with only Peertube enabled"):
client.succeed("${
interactWithPanel {
baseUri = "https://deployer";
enableMastodon = false;
enablePeertube = true;
enablePixelfed = false;
}
}/bin/interact-with-panel >&2")
with subtest("Check the status of the services - expecting Garage and Peertube"):
garage.succeed("systemctl status garage.service")
mastodon.fail("systemctl status mastodon-web.service")
peertube.succeed("systemctl status peertube.service")
pixelfed.fail("systemctl status phpfpm-pixelfed.service")
'';
}


@ -0,0 +1,220 @@
let
inherit (import ../default.nix { }) pkgs inputs;
inherit (pkgs) lib;
inherit (lib) mkOption types;
eval =
module:
(lib.evalModules {
specialArgs = {
inherit inputs;
};
modules = [
module
./data-model.nix
];
}).config;
nixops4Deployment = inputs.nixops4.modules.nixops4Deployment.default;
inherit (inputs.nixops4.lib) mkDeployment;
in
{
test-eval = {
expr =
let
fediversity = eval (
{ config, ... }:
{
config = {
resources.nixos-configuration = {
description = "An entire NixOS configuration";
request =
{ ... }:
{
_class = "fediversity-resource-request";
options.config = mkOption {
description = "Any options from NixOS";
};
};
policy =
{ config, ... }:
{
_class = "fediversity-resource-policy";
options = {
extra-config = mkOption {
description = "Any options from NixOS";
};
};
config.resource-type = types.raw; # TODO: what's the type of a NixOS configuration?
config.apply = requests: lib.mkMerge (requests ++ [ config.extra-config ]);
};
};
resources.login-shell = {
description = "The operator needs to be able to log into the shell";
request =
{ ... }:
{
_class = "fediversity-resource-request";
options = {
wheel = mkOption {
description = "Whether the login user needs root permissions";
type = types.bool;
default = false;
};
packages = mkOption {
description = "Packages that need to be available in the user environment";
type = with types; attrsOf package;
};
};
};
policy =
{ config, ... }:
{
_class = "fediversity-resource-policy";
options = {
username = mkOption {
description = "Username for the operator";
type = types.str; # TODO: use the proper constraints from NixOS
};
wheel = mkOption {
description = "Whether to allow login with root permissions";
type = types.bool;
default = false;
};
};
config.resource-type = types.raw; # TODO: splice out the user type from NixOS
config.apply =
requests:
let
# Filter out requests that need wheel if policy doesn't allow it
validRequests = lib.filterAttrs (
_name: req: !req.login-shell.wheel || config.wheel
) requests.resources;
in
lib.optionalAttrs (validRequests != { }) {
${config.username} = {
isNormalUser = true;
packages =
with lib;
attrValues (concatMapAttrs (_name: request: request.login-shell.packages) validRequests);
extraGroups = lib.optional config.wheel "wheel";
};
};
};
};
applications.hello =
{ ... }:
{
description = ''Command-line tool that will print "Hello, world!" on the terminal'';
module =
{ ... }:
{
options.enable = lib.mkEnableOption "Hello in the shell";
};
implementation = cfg: {
input = cfg;
output = lib.optionalAttrs cfg.enable {
resources.hello.login-shell.packages.hello = pkgs.hello;
};
};
};
environments.single-nixos-vm =
{ config, ... }:
{
resources.operator-environment.login-shell.username = "operator";
implementation = requests: {
input = requests;
output =
{ providers, ... }:
{
providers = {
inherit (inputs.nixops4.modules.nixops4Provider) local;
};
resources.the-machine = {
type = providers.local.exec;
imports = [
inputs.nixops4-nixos.modules.nixops4Resource.nixos
];
nixos.module =
{ ... }:
{
users.users = config.resources.shell.login-shell.apply (
lib.filterAttrs (_name: value: value ? login-shell) requests
);
};
};
};
};
};
};
options = {
example-configuration = mkOption {
type = config.configuration;
readOnly = true;
default = {
enable = true;
applications.hello.enable = true;
};
};
example-deployment = mkOption {
type = types.submodule nixops4Deployment;
readOnly = true;
default = config.environments.single-nixos-vm.deployment config.example-configuration;
};
};
}
);
resources = fediversity.applications.hello.resources fediversity.example-configuration.applications.hello;
hello-shell = (resources).resources.hello.login-shell;
environment = fediversity.environments.single-nixos-vm.resources.operator-environment.login-shell;
result = mkDeployment {
modules = [
(fediversity.environments.single-nixos-vm.deployment fediversity.example-configuration)
];
};
in
rec {
number-of-resources = with lib; length (attrNames fediversity.resources);
inherit (fediversity) example-configuration;
hello-package-exists = hello-shell.packages ? hello;
wheel-required = hello-shell.wheel;
wheel-allowed = environment.wheel;
operator-shell =
let
operator = (environment.apply resources).operator;
in
{
inherit (operator) isNormalUser;
packages = with lib; map (p: "${p.pname}") operator.packages;
extraGroups = operator.extraGroups;
};
deployment = {
inherit (result) _type;
deploymentFunction = lib.isFunction result.deploymentFunction;
# XXX(@fricklerhandwerk): evaluating the derivation takes a bit, we may want to stop at `isFunction` here
getProviders = lib.isDerivation (result.getProviders { system = builtins.currentSystem; });
};
};
expected = {
number-of-resources = 2;
example-configuration = {
enable = true;
applications.hello.enable = true;
};
hello-package-exists = true;
wheel-required = false;
wheel-allowed = false;
operator-shell = {
isNormalUser = true;
packages = [ "hello" ];
extraGroups = [ ];
};
deployment = {
_type = "nixops4Deployment";
deploymentFunction = true;
getProviders = true;
};
};
};
}

deployment/data-model.nix (new file, +199 lines)

@ -0,0 +1,199 @@
{
lib,
config,
inputs,
...
}:
let
inherit (lib) mkOption types;
inherit (lib.types)
attrsOf
attrTag
deferredModuleWith
submodule
optionType
functionTo
;
functionType = import ./function.nix;
application-resources = submodule {
options.resources = mkOption {
# TODO: maybe transpose, and group the resources by type instead
type = attrsOf (
attrTag (
lib.mapAttrs (_name: resource: mkOption { type = submodule resource.request; }) config.resources
)
);
};
};
nixops4Deployment = types.deferredModuleWith {
staticModules = [
inputs.nixops4.modules.nixops4Deployment.default
{
_module.args = {
resourceProviderSystem = builtins.currentSystem;
resources = { };
};
}
];
};
in
{
options = {
resources = mkOption {
description = "Collection of deployment resources that can be required by applications and policed by hosting providers";
type = attrsOf (
submodule (
{ ... }:
{
_class = "fediversity-resource";
options = {
description = mkOption {
description = "Description of the resource to help application module authors and hosting providers to work with it";
type = types.str;
};
request = mkOption {
description = "Options for declaring resource requirements by an application, a description of how the resource is consumed or accessed";
type = deferredModuleWith { staticModules = [ { _class = "fediversity-resource-request"; } ]; };
};
policy = mkOption {
description = "Options for configuring the resource policy for the hosting provider, a description of how the resource is made available";
type = deferredModuleWith {
staticModules = [
(policy: {
_class = "fediversity-resource-policy";
# TODO(@fricklerhandwerk): not sure it can be made
# sensible syntactically, but essentially we want to
# ensure that `apply` is defined, but since its output
# depends on the specific policy we also need to
# determine that somehow.
# hopefully this also helps with correct composition down the line.
options.resource-type = mkOption {
description = "The type of resource this policy configures";
type = types.optionType;
};
# TODO(@fricklerhandwerk): do we need a function type here as well, or is it in the way?
options.apply = mkOption {
description = "Apply the policy to a request";
type = with types; functionTo policy.config.resource-type;
};
})
];
};
};
};
}
)
);
};
applications = mkOption {
description = "Collection of Fediversity applications";
type = attrsOf (
submodule (application: {
_class = "fediversity-application";
options = {
description = mkOption {
description = "Description to be shown in the application overview";
type = types.str;
};
module = mkOption {
description = "Operator-facing configuration options for the application";
type = deferredModuleWith { staticModules = [ { _class = "fediversity-application-config"; } ]; };
};
implementation = mkOption {
description = "Mapping of application configuration to deployment resources, a description of what an application needs to run";
type = application.config.config-mapping.function-type;
};
resources = mkOption {
description = "Compute resources required by an application";
type = functionTo application.config.config-mapping.output-type;
readOnly = true;
default = input: (application.config.implementation input).output;
};
config-mapping = mkOption {
description = "Function type for the mapping from application configuration to required resources";
type = submodule functionType;
readOnly = true;
default = {
input-type = submodule application.config.module;
output-type = application-resources;
};
};
};
})
);
};
environments = mkOption {
description = "Run-time environments for Fediversity applications to be deployed to";
type = attrsOf (
submodule (environment: {
_class = "fediversity-environment";
options = {
resources = mkOption {
description = ''
Resources made available by the hosting provider, and their policies.
Setting this is optional, but provides a place to declare that information for programmatic use in the resource mapping.
'';
# TODO: maybe transpose, and group the resources by type instead
type = attrsOf (
attrTag (
lib.mapAttrs (_name: resource: mkOption { type = submodule resource.policy; }) config.resources
)
);
};
implementation = mkOption {
description = "Mapping of resources required by applications to available resources; the result can be deployed";
type = environment.config.resource-mapping.function-type;
};
resource-mapping = mkOption {
description = "Function type for the mapping from resources to a (NixOps4) deployment";
type = submodule functionType;
readOnly = true;
default = {
input-type = application-resources;
output-type = nixops4Deployment;
};
};
deployment = mkOption {
description = "Generate a deployment from a configuration";
type = functionTo (environment.config.resource-mapping.output-type);
readOnly = true;
default =
cfg:
# TODO: check that cfg.enable is true
let
required-resources = lib.mapAttrs (
name: application-settings: config.applications.${name}.resources application-settings
) cfg.applications;
in
(environment.config.implementation required-resources).output;
};
};
})
);
};
configuration = mkOption {
description = "Configuration type declaring options to be set by operators";
type = optionType;
readOnly = true;
default = submodule {
options = {
enable = lib.mkEnableOption "your Fediversity configuration";
applications = lib.mapAttrs (
_name: application:
mkOption {
description = application.description;
type = submodule application.module;
default = { };
}
) config.applications;
};
};
};
};
}


@ -2,5 +2,6 @@
imports = [
./check/basic/flake-part.nix
./check/cli/flake-part.nix
./check/panel/flake-part.nix
];
}

deployment/function.nix (new file, +38 lines)

@ -0,0 +1,38 @@
/**
Modular function type
*/
{ config, lib, ... }:
let
inherit (lib) mkOption types;
inherit (types)
submodule
functionTo
optionType
;
in
{
options = {
input-type = mkOption {
type = optionType;
};
output-type = mkOption {
type = optionType;
};
function-type = mkOption {
type = optionType;
readOnly = true;
default = functionTo (
submodule (function: {
options = {
input = mkOption {
type = config.input-type;
};
output = mkOption {
type = config.output-type;
};
};
})
);
};
};
}
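A hedged usage sketch for this module (mirroring how `config-mapping` and `resource-mapping` consume it in data-model.nix; the concrete types and the `<nixpkgs>` import are for illustration only):

```nix
let
  lib = (import <nixpkgs> { }).lib;
  # Fix the input and output types of the modular function type.
  mapping =
    (lib.evalModules {
      modules = [
        ./function.nix
        {
          input-type = lib.types.int;
          output-type = lib.types.str;
        }
      ];
    }).config;
  # A value admitted by `mapping.function-type`: a function that records
  # both its argument (`input`) and its result (`output`).
  double = n: {
    input = n;
    output = builtins.toString (n * 2);
  };
in
(double 21).output  # => "42"
```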

flake.lock (generated; 19 lines changed)

@ -596,22 +596,6 @@
"type": "github"
}
},
"nixpkgs_4": {
"locked": {
"lastModified": 1740463929,
"narHash": "sha256-4Xhu/3aUdCKeLfdteEHMegx5ooKQvwPHNkOgNCXQrvc=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "5d7db4668d7a0c6cc5fc8cf6ef33b008b2b1ed8b",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-24.11",
"repo": "nixpkgs",
"type": "github"
}
},
"parts": {
"inputs": {
"nixpkgs-lib": [
@ -686,8 +670,7 @@
"nixops4-nixos",
"nixops4"
],
"nixops4-nixos": "nixops4-nixos",
"nixpkgs": "nixpkgs_4"
"nixops4-nixos": "nixops4-nixos"
}
},
"rust-overlay": {

flake.nix (136 lines changed)

@ -1,6 +1,5 @@
{
inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-24.11"; # consumed by flake-parts
flake-parts.url = "github:hercules-ci/flake-parts";
git-hooks.url = "github:cachix/git-hooks.nix";
nixops4.follows = "nixops4-nixos/nixops4";
@ -8,65 +7,88 @@
};
outputs =
inputs@{ flake-parts, ... }:
inputs@{ self, flake-parts, ... }:
let
sources = import ./npins;
inherit (import sources.flake-inputs) import-flake;
inherit (sources) git-hooks agenix;
# XXX(@fricklerhandwerk): this atrocity is required to splice in a foreign Nixpkgs via flake-parts
# XXX - this is just importing a flake
nixpkgs = import-flake { src = sources.nixpkgs; };
# XXX - this overrides the inputs attached to `self`
inputs' = self.inputs // {
nixpkgs = nixpkgs;
};
self' = self // {
inputs = inputs';
};
in
flake-parts.lib.mkFlake { inherit inputs; } {
systems = [
"x86_64-linux"
"aarch64-linux"
"x86_64-darwin"
"aarch64-darwin"
];
imports = [
(import "${git-hooks}/flake-module.nix")
inputs.nixops4.modules.flake.default
./deployment/flake-part.nix
./infra/flake-part.nix
];
perSystem =
{
pkgs,
lib,
inputs',
...
}:
{
formatter = pkgs.nixfmt-rfc-style;
pre-commit.settings.hooks =
let
## Add a directory here if pre-commit hooks shouldn't apply to it.
optout = [ "npins" ];
excludes = map (dir: "^${dir}/") optout;
addExcludes = lib.mapAttrs (_: c: c // { inherit excludes; });
in
addExcludes {
nixfmt-rfc-style.enable = true;
deadnix.enable = true;
trim-trailing-whitespace.enable = true;
shellcheck.enable = true;
};
devShells.default = pkgs.mkShell {
packages = [
pkgs.npins
pkgs.nil
(pkgs.callPackage "${agenix}/pkgs/agenix.nix" { })
pkgs.openssh
pkgs.httpie
pkgs.jq
# exposing this env var as a hack to pass info in from form
(inputs'.nixops4.packages.default.overrideAttrs {
impureEnvVars = [ "DEPLOYMENT" ];
})
];
};
# XXX - finally we override the overall set of `inputs` -- we need both:
# `flake-parts` obtains `nixpkgs` from `self.inputs` and not from `inputs`.
flake-parts.lib.mkFlake
{
inputs = inputs // {
inherit nixpkgs;
};
};
self = self';
}
(
{ inputs, ... }:
{
systems = [
"x86_64-linux"
"aarch64-linux"
"x86_64-darwin"
"aarch64-darwin"
];
imports = [
(import "${git-hooks}/flake-module.nix")
inputs.nixops4.modules.flake.default
./deployment/flake-part.nix
./infra/flake-part.nix
];
perSystem =
{
pkgs,
lib,
inputs',
...
}:
{
formatter = pkgs.nixfmt-rfc-style;
pre-commit.settings.hooks =
let
## Add a directory here if pre-commit hooks shouldn't apply to it.
optout = [ "npins" ];
excludes = map (dir: "^${dir}/") optout;
addExcludes = lib.mapAttrs (_: c: c // { inherit excludes; });
in
addExcludes {
nixfmt-rfc-style.enable = true;
deadnix.enable = true;
trim-trailing-whitespace.enable = true;
shellcheck.enable = true;
};
devShells.default = pkgs.mkShell {
packages = [
pkgs.npins
pkgs.nil
(pkgs.callPackage "${agenix}/pkgs/agenix.nix" { })
pkgs.openssh
pkgs.httpie
pkgs.jq
# exposing this env var as a hack to pass info in from form
(inputs'.nixops4.packages.default.overrideAttrs {
impureEnvVars = [ "DEPLOYMENT" ];
})
];
};
};
}
);
}


@ -14,7 +14,7 @@ everything will become much cleaner.
above 100. For instance, `fedi117`.
2. Add a basic configuration for the machine. These typically go in
`infra/machines/<name>/default.nix`. You can look at other `fediXXX` VMs to
`machines/dev/<name>/default.nix`. You can look at other `fediXXX` VMs to
find inspiration. You probably do not need a `nixos.module` option at this
point.
@ -48,7 +48,7 @@ everything will become much cleaner.
7. Regenerate the list of machines:
```
sh infra/machines.md.sh
sh machines/machines.md.sh
```
Commit it with the machine's configuration, public key, etc.


@ -1,4 +1,5 @@
{
inputs,
lib,
config,
...
@ -9,7 +10,7 @@ let
inherit (lib.attrsets) concatMapAttrs optionalAttrs;
inherit (lib.strings) removeSuffix;
sources = import ../../npins;
inherit (sources) nixpkgs agenix disko;
inherit (sources) agenix disko;
secretsPrefix = ../../secrets;
secrets = import (secretsPrefix + "/secrets.nix");
@ -26,7 +27,7 @@ in
hostPublicKey = config.fediversityVm.hostPublicKey;
};
inherit nixpkgs;
inherit (inputs) nixpkgs;
## The configuration of the machine. We strive to keep in this file only the
## options that really need to be injected from the resource. Everything else


@ -21,6 +21,9 @@ let
makeResourceModule =
{ vmName, isTestVm }:
{
# TODO(@fricklerhandwerk): this is terrible but IMO we should just ditch flake-parts and have our own data model for how the project is organised internally
_module.args = { inherit inputs; };
imports =
[
./common/resource.nix
@ -28,7 +31,7 @@ let
++ (
if isTestVm then
[
./test-machines/${vmName}
../machines/operator/${vmName}
{
nixos.module.users.users.root.openssh.authorizedKeys.keys = [
# allow our panel vm access to the test machines
@ -38,7 +41,7 @@ let
]
else
[
./machines/${vmName}
../machines/dev/${vmName}
]
);
fediversityVm.name = vmName;
@ -147,8 +150,8 @@ let
listSubdirectories = path: attrNames (filterAttrs (_: type: type == "directory") (readDir path));
machines = listSubdirectories ./machines;
testMachines = listSubdirectories ./test-machines;
machines = listSubdirectories ../machines/dev;
testMachines = listSubdirectories ../machines/operator;
in
{

machines/README.md (new file, +4 lines)

@ -0,0 +1,4 @@
# Machines
This directory contains the definition of [the VMs](machines.md) that host our
infrastructure.


@ -20,7 +20,7 @@ vmOptions=$(
cd ..
nix eval \
--impure --raw --expr "
builtins.toJSON (builtins.getFlake (builtins.toString ./.)).vmOptions
builtins.toJSON (builtins.getFlake (builtins.toString ../.)).vmOptions
" \
--log-format raw --quiet
)


@ -25,6 +25,38 @@
"url": null,
"hash": "1w2gsy6qwxa5abkv8clb435237iifndcxq0s79wihqw11a5yb938"
},
"disko": {
"type": "GitRelease",
"repository": {
"type": "GitHub",
"owner": "nix-community",
"repo": "disko"
},
"pre_releases": false,
"version_upper_bound": null,
"release_prefix": null,
"submodules": false,
"version": "v1.12.0",
"revision": "7121f74b976481bc36877abaf52adab2a178fcbe",
"url": "https://api.github.com/repos/nix-community/disko/tarball/v1.12.0",
"hash": "0wbx518d2x54yn4xh98cgm65wvj0gpy6nia6ra7ns4j63hx14fkq"
},
"flake-inputs": {
"type": "GitRelease",
"repository": {
"type": "GitHub",
"owner": "fricklerhandwerk",
"repo": "flake-inputs"
},
"pre_releases": false,
"version_upper_bound": null,
"release_prefix": null,
"submodules": false,
"version": "4.1",
"revision": "ad02792f7543754569fe2fd3d5787ee00ef40be2",
"url": "https://api.github.com/repos/fricklerhandwerk/flake-inputs/tarball/4.1",
"hash": "1j57avx2mqjnhrsgq3xl7ih8v7bdhz1kj3min6364f486ys048bm"
},
"flake-parts": {
"type": "Git",
"repository": {


@ -20,8 +20,15 @@ in
packages = [
pkgs.npins
manage
# NixOps4 and its dependencies
# FIXME: grab NixOps4 and add it here
pkgs.nix
pkgs.openssh
];
env = import ./env.nix { inherit lib pkgs; } // {
env = {
DEPLOYMENT_FLAKE = ../.;
DEPLOYMENT_NAME = "test";
NPINS_DIRECTORY = toString ../npins;
CREDENTIALS_DIRECTORY = toString ./.credentials;
DATABASE_URL = "sqlite:///${toString ./src}/db.sqlite3";


@ -1,18 +0,0 @@
{
lib,
pkgs,
...
}:
let
inherit (builtins) toString;
in
{
REPO_DIR = toString ../.;
# explicitly use nix, as e.g. lix does not have configurable-impure-env
BIN_PATH = lib.makeBinPath [
# explicitly use nix, as e.g. lix does not have configurable-impure-env
pkgs.nix
# nixops error maybe due to our flake git hook: executing 'git': No such file or directory
pkgs.git
];
}


@ -23,7 +23,9 @@ let
cfg = config.services.${name};
package = pkgs.callPackage ./package.nix { };
environment = import ../env.nix { inherit lib pkgs; } // {
environment = {
DEPLOYMENT_FLAKE = cfg.deployment.flake;
DEPLOYMENT_NAME = cfg.deployment.name;
DATABASE_URL = "sqlite:////var/lib/${name}/db.sqlite3";
USER_SETTINGS_FILE = pkgs.concatText "configuration.py" [
((pkgs.formats.pythonVars { }).generate "settings.py" cfg.settings)
@ -133,6 +135,34 @@ in
type = types.attrsOf types.path;
default = { };
};
nixops4Package = mkOption {
type = types.package;
description = ''
A package providing NixOps4.
TODO: This should not be at the level of the NixOS module, but instead
at the level of the panel's package. Until one finds a way to grab
NixOps4 from the package's npins-based code, we will have to make do with
this workaround.
'';
};
deployment = {
flake = mkOption {
type = types.path;
default = ../..;
description = ''
The path to the flake containing the deployment. This is used to run the deployment button.
'';
};
name = mkOption {
type = types.str;
default = "test";
description = ''
The name of the deployment within the flake.
'';
};
};
};
config = mkIf cfg.enable {
@ -170,6 +200,8 @@ in
};
users.users.${name} = {
# TODO[Niols]: change to system user or document why we specifically
# need a normal user.
isNormalUser = true;
};
@ -181,6 +213,11 @@ in
path = [
python-environment
manage-service
## NixOps4 and its dependencies
cfg.nixops4Package
pkgs.nix
pkgs.openssh
];
preStart = ''
# Auto-migrate on first run or if the package has changed


@ -13,6 +13,7 @@ let
secrets = {
SECRET_KEY = pkgs.writeText "SECRET_KEY" "secret";
};
nixops4Package = pkgs.hello; # FIXME: actually pass NixOps4
};
virtualisation = {


@ -42,6 +42,7 @@ def get_secret(name: str, encoding: str = "utf-8") -> str:
return secret
# SECURITY WARNING: keep the secret key used in production secret!
# This is used nowhere but is required by Django.
SECRET_KEY = get_secret("SECRET_KEY")
# SECURITY WARNING: don't run with debug turned on in production!
@ -240,8 +241,7 @@ if user_settings_file is not None:
# The correct thing to do here would be using a helper function such as with `get_secret()` that will catch the exception and explain what's wrong and where to put the right values.
# Replacing the `USER_SETTINGS_FILE` mechanism following the comment there would probably be a good thing.
# PATH to expose to launch button
bin_path=env['BIN_PATH']
# path of the root flake to trigger nixops from, see #94.
# to deploy this should be specified, for dev just use a relative path.
repo_dir = env["REPO_DIR"]
# Path of the root flake to trigger nixops from, see #94, and name of the
# deployment.
deployment_flake = env["DEPLOYMENT_FLAKE"]
deployment_name = env["DEPLOYMENT_NAME"]


@ -89,25 +89,20 @@ class DeploymentStatus(ConfigurationForm):
def deployment(self, config: BaseModel):
env = {
"PATH": settings.bin_path,
"PATH": os.environ.get("PATH"),
# pass in form info to our deployment
"DEPLOYMENT": config.json()
}
cmd = [
"nix",
"develop",
"--extra-experimental-features",
"configurable-impure-env",
"--command",
"nixops4",
"apply",
"test",
settings.deployment_name,
"--show-trace",
"--no-interactive",
]
deployment_result = subprocess.run(
cmd,
cwd = settings.repo_dir,
cwd = settings.deployment_flake,
env = env,
stderr = subprocess.STDOUT,
)


@ -6,8 +6,8 @@
}:
{
tests = {
mastodon = import ./tests/mastodon.nix { inherit pkgs; };
pixelfed-garage = import ./tests/pixelfed-garage.nix { inherit pkgs; };
peertube = import ./tests/peertube.nix { inherit pkgs; };
mastodon = pkgs.nixosTest ./tests/mastodon.nix;
pixelfed-garage = pkgs.nixosTest ./tests/pixelfed-garage.nix;
peertube = pkgs.nixosTest ./tests/peertube.nix;
};
}


@ -56,12 +56,6 @@ in
)
(mkIf config.fediversity.pixelfed.enable {
## NOTE: Pixelfed as packaged in nixpkgs has a permission issue that prevents Nginx
## from being able to serving the images. We fix it here, but this should be
## upstreamed. See https://github.com/NixOS/nixpkgs/issues/235147
services.pixelfed.package = pkgs.pixelfed.overrideAttrs (old: {
patches = (old.patches or [ ]) ++ [ ./group-permissions.patch ];
});
users.users.nginx.extraGroups = [ "pixelfed" ];
services.pixelfed = {


@ -1,18 +0,0 @@
diff --git a/config/filesystems.php b/config/filesystems.php
index 00254e93..fc1a58f3 100644
--- a/config/filesystems.php
+++ b/config/filesystems.php
@@ -49,11 +49,11 @@ return [
'permissions' => [
'file' => [
'public' => 0644,
- 'private' => 0600,
+ 'private' => 0640,
],
'dir' => [
'public' => 0755,
- 'private' => 0700,
+ 'private' => 0750,
],
],
],


@ -42,7 +42,7 @@ let
'';
in
pkgs.nixosTest {
{
name = "mastodon";
nodes = {


@ -161,7 +161,7 @@ let
'';
in
pkgs.nixosTest {
{
name = "peertube";
nodes = {


@ -114,7 +114,7 @@ let
${seleniumQuit}'';
in
pkgs.nixosTest {
{
name = "test-pixelfed-garage";
nodes = {