Compare commits


94 commits
review ... main

Author SHA1 Message Date
Nicolas Jeannerod b1a5e16432
Add pre-commit hooks for formatting and dead code 2024-11-11 17:39:20 +01:00
Nicolas Jeannerod 661f81b3f9
Cleanup dead code 2024-11-11 17:28:35 +01:00
Nicolas Jeannerod 7007da1775
Format everything, RFC-style 2024-11-11 17:25:42 +01:00
Nicolas Jeannerod 49473c43c8
Proxy Peertube behind Nginx 2024-11-11 17:10:58 +01:00
Nicolas Jeannerod 4f8ba4bf3c
Require secrets file also when on metal 2024-11-11 17:10:44 +01:00
Nicolas Jeannerod 8e03b4b34e
Fix typo 2024-11-11 16:36:33 +01:00
Nicolas Jeannerod c1dcdfe493
Open port 80, necessary for ACME 2024-11-11 16:34:24 +01:00
Nicolas Jeannerod f53a27baee
Number of cores also when on metal 2024-11-11 16:16:27 +01:00
Nicolas Jeannerod 2d522f51f5
Support installing host keys in the installer 2024-11-08 17:35:25 +01:00
Nicolas Jeannerod f04b71047c
Slight rework of the installer 2024-11-07 18:36:43 +01:00
Nicolas Jeannerod cd194f818d
Turn off the machine once if install is successful 2024-11-07 12:02:09 +01:00
Nicolas Jeannerod 007c168081 Fix Mastodon/Garage test 2024-10-30 19:44:07 +00:00
Nicolas Jeannerod fb342b02fb Also forward SSH port 2024-10-30 18:38:39 +00:00
Nicolas Jeannerod 96acf1f10d Use recommended proxy settings for Garage 2024-10-30 18:37:45 +00:00
Nicolas Jeannerod e299978508 Avoid clashes of security.acme.defaults options 2024-10-30 18:37:06 +00:00
Nicolas Jeannerod 0b5e3ca40e
Bump Taeer's nixpkgs 2024-10-29 17:13:51 +01:00
Nicolas Jeannerod 1de8f5bc17 Some fixes for Pixelfed on metal (#27) 2024-10-29 17:09:19 +01:00
Taeer Bar-Yam b36166ccc0 fix test to not use ACME/SSL (again) 2024-10-01 17:08:09 -04:00
Nicolas Jeannerod 4c8d380e9e
Proxy all buckets that have website = true 2024-10-01 18:18:47 +02:00
Nicolas Jeannerod 247a4258b2
No certificate for Garage web root domain 2024-10-01 18:04:53 +02:00
Nicolas Jeannerod be756ab8d3
Faster compression and note on isoName 2024-10-01 13:29:06 +02:00
Nicolas Jeannerod dd9b481b78
Expose mkInstaller 2024-10-01 13:14:56 +02:00
Nicolas Jeannerod 3cfc4370f7 Add missing module in tests 2024-10-01 09:40:38 +00:00
Nicolas Jeannerod e9b5de893d Create automatic installation ISOs (#26)
Co-authored-by: Taeer Bar-Yam <taeer.bar-yam@moduscreate.com>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Reviewed-on: Fediversity/simple-nixos-fediverse#26
Co-authored-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
Co-committed-by: Nicolas “Niols” Jeannerod <nicolas.jeannerod@moduscreate.com>
2024-10-01 10:02:01 +02:00
Nicolas Jeannerod 7b36774b80
We are way past that! 2024-09-27 11:48:49 +02:00
Taeer Bar-Yam 4da997b3af fix frivolous errors in garage test 2024-09-26 01:41:06 -04:00
Taeer Bar-Yam fa53ecac53 fix the overlay 2024-09-25 11:25:21 -04:00
Taeer Bar-Yam d910dfe788 take bleeding edge pixelfed 2024-09-25 00:40:53 -04:00
Nicolas Jeannerod b461a44707
Not localhost 2024-09-24 16:59:37 +02:00
Nicolas Jeannerod fc18582a1b
Make Garage API domain be localhost 2024-09-24 16:42:53 +02:00
Nicolas Jeannerod e6b58b656b
Remove SSL in Garage VM 2024-09-24 14:56:33 +02:00
Nicolas Jeannerod bf303ff1d1
Remove SSL in VM 2024-09-24 14:52:13 +02:00
Nicolas Jeannerod a600829d56
s/port/internalPort 2024-09-24 14:42:18 +02:00
Nicolas Jeannerod 042cb2d517
Move Garage VM stuff out of main file 2024-09-24 14:40:35 +02:00
Nicolas Jeannerod 050042d255
domainForBucket 2024-09-24 14:23:29 +02:00
Nicolas Jeannerod 6b45256839
s/urlFor/urlForBucket 2024-09-24 14:17:56 +02:00
Taeer Bar-Yam 51a294a659 acme fixup 2 2024-09-23 12:39:55 -04:00
Taeer Bar-Yam 2116ac6b27 acme fixup 2024-09-23 12:39:15 -04:00
Taeer Bar-Yam 3e4b486921 httpS 2024-09-23 12:22:40 -04:00
Taeer Bar-Yam db39623eeb ADD http:// to proxypass 2024-09-23 12:18:22 -04:00
Taeer Bar-Yam ffb941687a remove http:// from nginx server name 2024-09-23 12:14:40 -04:00
Taeer Bar-Yam 2657e2130f mv {,internal}port 2024-09-23 12:11:04 -04:00
Taeer Bar-Yam ca8310dce3 had two 'cfg's. changed one to 'fedicfg' 2024-09-23 12:09:16 -04:00
Taeer Bar-Yam e093632222 ; 2024-09-23 11:58:49 -04:00
Taeer Bar-Yam 2501c480fb proxy garage web to port 80 2024-09-23 11:55:54 -04:00
Nicolas Jeannerod 011f166fd3
Exceptionally use non-staging LetsEncrypt servers 2024-09-20 18:55:00 +02:00
Nicolas Jeannerod 3bb9569eb4
ACME 2024-09-20 18:51:21 +02:00
Nicolas Jeannerod 6323e0adc8
Also open HTTPS port 2024-09-20 18:44:47 +02:00
Nicolas Jeannerod 55a6377b12
Ignore errors of garage key import 2024-09-20 18:39:32 +02:00
Nicolas Jeannerod 9be8232083
[HACK] comment out virtualisation 2024-09-20 18:25:21 +02:00
Nicolas Jeannerod c9665b927f
Move stuff from pixelfed-vm to pixelfed 2024-09-20 17:56:40 +02:00
Nicolas Jeannerod fa0a01f868 Use common options also in tests 2024-09-20 15:45:53 +00:00
Nicolas Jeannerod 43826e686b
Note on style choice for eg. fediversity.internal.pixelfed.domain 2024-09-20 17:20:31 +02:00
Nicolas Jeannerod 73939b9d87
Rework definition of “constants”
- make things such as `fediversity.garage.api.port` into actual options
  with the right default value
- move them under `fediversity.internal`

Co-authored-by: Taeer Bar-Yam <taeer.bar-yam@moduscreate.com>
2024-09-20 17:13:35 +02:00
Nicolas Jeannerod d97772ccc4
s/types.string/types.str/
`types.string` was being used for too many things, so it got deprecated
and now there are several different string types. `types.str` is the one
to use when you don't want definitions from more than one place to be
merged

Co-authored-by: Taeer Bar-Yam <taeer.bar-yam@moduscreate.com>
2024-09-20 16:35:21 +02:00
Nicolas Jeannerod 2ff8975b6b
s/mkOption/mkEnableOption 2024-09-20 16:34:08 +02:00
Nicolas Jeannerod fb02afc6c9
Factorise services URIs 2024-09-17 17:58:09 +02:00
Nicolas Jeannerod 9d1f20fc1c
Factorise Garage URIs 2024-09-17 17:52:54 +02:00
Nicolas Jeannerod 7f99fc48dd
Move Fediversity modules under top-level module 2024-09-17 15:16:11 +02:00
Nicolas Jeannerod cc148ce57f
Move Fediversity modules into own subdirectory 2024-09-17 14:27:24 +02:00
Nicolas Jeannerod c455ec1667
Move VM-specific stuff in a subdirectory 2024-09-17 14:26:21 +02:00
Nicolas Jeannerod 83d8474f17
Some fixes to the Pixelfed/Garage test 2024-09-17 13:35:51 +02:00
Taeer Bar-Yam bc47154895 stop threading email and password around as arguments 2024-09-10 08:50:54 -04:00
Nicolas Jeannerod 03995ca922 Wait until Garage is up by polling port 3900 2024-09-10 12:41:53 +00:00
Nicolas Jeannerod 8205330341 Check that src points to Garage 2024-09-10 12:17:32 +00:00
Taeer Bar-Yam 8a09ba967a test image gets uploaded to garage 2024-09-09 10:13:23 -04:00
Taeer Bar-Yam 4178822ee2 nicer timing for video 2024-09-09 10:13:13 -04:00
Taeer Bar-Yam 5090927bcf use fediversity logo 2024-09-09 10:12:54 -04:00
Nicolas Jeannerod 36ed2c68f4
Run Magick on the server but with right path 2024-09-09 14:25:42 +02:00
Nicolas Jeannerod 0c230bd0a7
Follow things graphically 2024-09-09 14:23:05 +02:00
Nicolas Jeannerod e0a24404ae
Fix logging and Selenium script 2024-09-09 14:23:05 +02:00
Nicolas Jeannerod e894f0dcc8
Cleanup 2024-09-09 14:23:05 +02:00
Taeer Bar-Yam 1d8f514240 patch pixelfed to give nginx read permissions
this way we don't need DANGEROUSLY_SET_FILESYSTEM_DRIVER
2024-09-05 12:03:35 -04:00
Taeer Bar-Yam e7ffd94c5e this configuration also works (without nginx config) 2024-09-05 11:06:55 -04:00
Nicolas Jeannerod 553a03b971
Run pixelfed-data-setup only after ensure-garage 2024-09-05 15:33:47 +02:00
Nicolas Jeannerod 10a38cdf6d
Make ensure-garage a oneshot service 2024-09-05 15:32:34 +02:00
Nicolas Jeannerod e7b82d5c54
Proxy Garage backend 2024-09-05 13:05:29 +02:00
Nicolas Jeannerod bee71d541a
DANGEROUSLY fix everything 2024-09-04 18:30:55 +02:00
Taeer Bar-Yam 9d32782452 some more ignores 2024-09-02 12:10:01 -04:00
Taeer Bar-Yam dc06c54c31 attempt to access garage storage correctly
nginx was trying to access the files on disk, rather than via s3 storage
2024-09-02 12:09:10 -04:00
Taeer Bar-Yam 5d504d0879 garage defaults
without this, ensureKeys and ensureBuckets must be set or NixOS won't
build
2024-09-02 12:08:14 -04:00
Taeer Bar-Yam 4ca18752b3 fix typo 2024-09-02 12:07:25 -04:00
Nicolas Jeannerod b9cf2d5e10
WIP 2024-08-30 17:23:55 +02:00
Taeer Bar-Yam 5fd5c37834 use rebuildable_tests branch of nixpkgs for now 2024-08-28 08:39:36 -04:00
Taeer Bar-Yam e6dde31148 don't use local nixpkgs 2024-08-28 08:38:02 -04:00
Taeer Bar-Yam 353c0a7ffa separate vm.nix files for vm-specific configuration 2024-08-28 08:35:48 -04:00
Taeer Bar-Yam 366a67e112 update readme 2024-07-25 07:49:22 -04:00
Taeer Bar-Yam 941d3bf2a9 fix CSP check 2024-07-25 07:45:57 -04:00
Taeer Bar-Yam bddfd95ee4 cleanup 2024-07-25 06:06:02 -04:00
Taeer Bar-Yam acc4a1a2ef better error messages 2024-07-23 09:43:18 -04:00
Taeer Bar-Yam 0f8972a8f0 for now, we have to stop using vmVariant so the test works 2024-07-18 08:22:47 -04:00
Taeer Bar-Yam dab12bc2b8 interactive test is working 2024-07-18 06:44:13 -04:00
Taeer Bar-Yam 693e21b1a8 first stab at a nixos test
for now, had to get rid of vmVariant. we can figure out how to add it
back when we understand how we should actually distinguish between
real machines and VMs
2024-06-25 06:39:04 -04:00
Taeer Bar-Yam 4e719da9d9 address roberth comments
SEE
https://git.fediversity.eu/taeer/simple-nixos-fediverse/compare/main...roberth:review
2024-06-06 07:10:19 -04:00
28 changed files with 1956 additions and 605 deletions

5
.gitignore vendored
View file

@@ -1,4 +1,9 @@
nixos.qcow2
result*
.direnv
.nixos-test-history
*screenshot.png
output
todo
/.pre-commit-config.yaml

README.md
View file

@@ -27,8 +27,11 @@ With the VM running, you can then access the apps on your local machine's web br
NOTE: it sometimes takes a while for the services to start up, and in the meantime you will get 502 Bad Gateway.
- Mastodon: <http://mastodon.localhost:55001>
- You can also create accounts on the machine itself by running `mastodon-tootctl accounts create test --email test@test.com --confirmed --approve`
- Mastodon: through the reverse proxy at <https://mastodon.localhost:8443> and directly at <http://mastodon.localhost:55001>
- You can create accounts on the machine itself by running `mastodon-tootctl accounts create test --email test@test.com --confirmed --approve`
- Account-related activities (logging in/out; preferences) can only be done on the insecure direct page <http://mastodon.localhost:55001>
- After you've logged in, you can go back to the secure page and you will remain logged in
- some operations may remove the port number from the URL. You'll have to add that back in manually
- PeerTube: <http://peertube.localhost:9000>
- The root account can be accessed with username "root". The password can be obtained by running the following command on the VM:
@@ -43,6 +46,26 @@ NOTE: it sometimes takes a while for the services to start up, and in the meanti
```bash
pixelfed-manage user:create --name=test --username=test --email=test@test.com --password=testtest --confirm_email=1
```
# Building an installer image
Build an installer image for the desired configuration, e.g. for `peertube`:
```bash
nix build .#installers.peertube
```
Upload the image in `./result` to Proxmox when creating a VM.
Booting the image will format the disk and install NixOS with the desired configuration.
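The `installers` attribute is generated by `mkInstaller` (defined in `installer.nix`, shown further down in this diff), which also accepts an optional `hostKeys` set for pre-installing SSH host keys on the target machine. A minimal sketch of calling it directly, where the output name and the `./secrets/*` paths are hypothetical placeholders:
```nix
# Sketch only: an extra flake output that builds an installer with pre-installed
# SSH host keys. Paths referenced like this end up in the world-readable Nix store.
installers.peertube-with-keys = import ./installer.nix {
  inherit nixpkgs;
  hostKeys.ed25519 = {
    private = ./secrets/ssh_host_ed25519_key;
    public = ./secrets/ssh_host_ed25519_key.pub;
  };
} self.nixosConfigurations.peertube;
```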
# Deploying an updated machine configuration
> TODO: There is currently no way to specify an actual target machine by name.
Assuming you have an SSH configuration entry for a machine called e.g. `peertube` that gives access to its `root` user, deploy the configuration of the same name:
```bash
nix run .#deploy.peertube
```
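For reference, a matching entry in `~/.ssh/config` could look like the following (host name, address and key path are placeholders):
```
Host peertube
    HostName 203.0.113.10
    User root
    IdentityFile ~/.ssh/id_ed25519
```
Both the remote build (`--store ssh-ng://peertube` in `deploy.nix`) and the final `ssh peertube` step should resolve the host through this entry.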
## debugging notes
@@ -51,6 +74,27 @@ NOTE: it sometimes takes a while for the services to start up, and in the meanti
- mastodon-web.service
- peertube.service
- the `garage` CLI command gives information about garage storage, but cannot be used to actually inspect the contents. Use `mc` (the MinIO client) for that; see the example after this list
- run `mc alias set garage http://s3.garage.localhost:3900 --api s3v4 --path off $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY`
- in the Chromium devtools, you can go to the Network tab and change things like response headers in a way that persists through reloads. This gives much faster iteration if that's what you need to experiment with.
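For example, once the `garage` alias above is set, bucket contents can be inspected along these lines (bucket and object names are illustrative):
```bash
mc ls garage/mastodon                  # list objects in the mastodon bucket
mc find garage --regex original        # locate uploaded media across buckets
mc cat garage/mastodon/<some-object> > /tmp/object.webp   # download one object
```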
## NixOS Tests
Tests live in the aptly named `tests/` directory, and can be accessed at the flake URI `.#checks.<system>.<test-name>` e.g. `nix build .#checks.x86_64-linux.mastodon-garage`.
They can also be run interactively with
```
nix build .#checks.<system>.<test>.driverInteractive
./result/bin/nixos-test-driver 2>output
```
You can then run `less output` and press `F` from a different terminal to follow along.
These tests are also equipped with the same port forwarding as the VMs, so when running interactively you should be able to access services through a browser running on your machine.
While running interactively, `rebuildableTests` allows you to modify the test nodes and then redeploy without restarting the test and waiting for the VMs to start up again. To do this, you must start the jumphost by running `redeploy_jumphost.start()` inside the driver. Then, from the command line:
```
nix build .#checks.<system>.<test>.driverInteractive
./result/bin/rebuild
```
# questions
@@ -76,10 +120,8 @@ NOTE: it sometimes takes a while for the services to start up, and in the meanti
When mastodon is running in production mode, we have a few problems:
- you have to click "accept the security risk"
- it takes a while for the webpage to come online. Until then you see "502 Bad Gateway"
- reverse proxy should produce a user friendly page regardless
- might be needed for upgrade downtime too?
- don't send users over until it's up
- email sent from the mastodon instance (e.g. for account confirmation) should be accessible at <https://mastodon.localhost:55001/letter_opener>, but it's not working.
- maybe the admin account should be managed entirely by fediversity anyway?
- mastodon is trying to fetch `missing.png` without SSL (`http://`). This isn't allowed, and I'm not sure why it's doing it.
- mastodon is trying to fetch `custom.css` from https://mastodon.localhost (no port), which is not the configured `LOCAL_DOMAIN`; it's unclear why.
NixOS tests do not take the configuration from `virtualisation.vmVariant`. This seems like an oversight since people don't tend to mix normal NixOS configurations with the ones they're using for tests. This should be pretty easy to rectify upstream.

View file

@@ -1,49 +0,0 @@
{ pkgs, ... }: {
# Customize nixos-rebuild build-vm to be a bit more convenient
virtualisation.vmVariant = {
# let us log in
users.mutableUsers = false;
users.users.root.hashedPassword = "";
services.openssh = {
enable = true;
settings = {
PermitRootLogin = "yes";
PermitEmptyPasswords = "yes";
UsePAM = "no";
};
};
# automatically log in
services.getty.autologinUser = "root";
services.getty.helpLine = ''
Type `C-a c` to access the qemu console
Type `C-a x` to quit
'';
# access to convenient things
environment.systemPackages = with pkgs; [
w3m
python3
xterm # for `resize`
];
environment.loginShellInit = ''
eval "$(resize)"
'';
nix.extraOptions = ''
extra-experimental-features = nix-command flakes
'';
# no graphics. see nixos-shell
virtualisation = {
graphics = false;
qemu.consoles = [ "tty0" "hvc0" ];
qemu.options = [
"-serial null"
"-device virtio-serial"
"-chardev stdio,mux=on,id=char0,signal=off"
"-mon chardev=char0,mode=readline"
"-device virtconsole,chardev=char0,nr=0"
];
};
};
}

13
deploy.nix Normal file
View file

@@ -0,0 +1,13 @@
{ writeShellApplication }:
name: _config:
writeShellApplication {
name = "deploy";
text = ''
result="$(nix build --print-out-paths ${./.}#nixosConfigurations#${name} --eval-store auto --store ssh-ng://${name})"
# shellcheck disable=SC2087
ssh ${name} << EOF
nix-env -p /nix/var/nix/profiles/system --set "$result"
"$result"/bin/switch-to-configuration switch
EOF
'';
}

36
disk-layout.nix Normal file
View file

@@ -0,0 +1,36 @@
{ ... }:
{
disko.devices.disk.main = {
device = "/dev/sda";
type = "disk";
content = {
type = "gpt";
partitions = {
MBR = {
priority = 0;
size = "1M";
type = "EF02";
};
ESP = {
priority = 1;
size = "500M";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
};
};
root = {
priority = 2;
size = "100%";
content = {
type = "filesystem";
format = "ext4";
mountpoint = "/";
};
};
};
};
};
}

137
fediversity/default.nix Normal file
View file

@@ -0,0 +1,137 @@
{ lib, config, ... }:
let
inherit (builtins) toString;
inherit (lib) mkOption mkEnableOption mkForce;
inherit (lib.types) types;
in
{
imports = [
./garage.nix
./mastodon.nix
./pixelfed.nix
./peertube.nix
];
options = {
fediversity = {
enable = mkEnableOption "the collection of services bundled under Fediversity";
domain = mkOption {
type = types.str;
description = ''
root domain for the Fediversity services
For instance, if this option is set to `foo.example.com`, then
Pixelfed might be under `pixelfed.foo.example.com`.
'';
};
mastodon.enable = mkEnableOption "default Fediversity Mastodon configuration";
pixelfed.enable = mkEnableOption "default Fediversity Pixelfed configuration";
peertube.enable = mkEnableOption "default Fediversity PeerTube configuration";
temp = mkOption {
description = "options that are only used while developing; should be removed eventually";
default = { };
type = types.submodule {
options = {
cores = mkOption {
description = "number of cores; should be obtained from NixOps4";
type = types.int;
};
peertubeSecretsFile = mkOption {
description = "should it be provided by NixOps4? or maybe we should just ask for a main secret from which to derive all the others?";
type = types.path;
};
};
};
};
internal = mkOption {
description = "options that are only meant to be used internally; change at your own risk";
default = { };
type = types.submodule {
options = {
garage = {
api = {
domain = mkOption {
type = types.str;
default = "s3.garage.${config.fediversity.domain}";
};
port = mkOption {
type = types.int;
default = 3900;
};
url = mkOption {
type = types.str;
default = "http://${config.fediversity.internal.garage.api.domain}:${toString config.fediversity.internal.garage.api.port}";
};
};
rpc = {
port = mkOption {
type = types.int;
default = 3901;
};
};
web = {
rootDomain = mkOption {
type = types.str;
default = "web.garage.${config.fediversity.domain}";
};
internalPort = mkOption {
type = types.int;
default = 3902;
};
domainForBucket = mkOption {
type = types.functionTo types.str;
default = bucket: "${bucket}.${config.fediversity.internal.garage.web.rootDomain}";
};
urlForBucket = mkOption {
type = types.functionTo types.str;
default = bucket: "http://${config.fediversity.internal.garage.web.domainForBucket bucket}";
};
};
};
## REVIEW: Do we want to recreate options under
## `fediversity.internal` or would we rather use the options from
## the respective services? See Taeer's comment:
## https://git.fediversity.eu/taeer/simple-nixos-fediverse/pulls/22#issuecomment-124
pixelfed.domain = mkOption {
type = types.str;
default = "pixelfed.${config.fediversity.domain}";
};
mastodon.domain = mkOption {
type = types.str;
default = "mastodon.${config.fediversity.domain}";
};
peertube.domain = mkOption {
type = types.str;
default = "peertube.${config.fediversity.domain}";
};
};
};
};
};
};
config = {
## FIXME: This should clearly go somewhere else; and we should have a
## `staging` vs. `production` setting somewhere.
security.acme = {
acceptTerms = true;
defaults.email = "nicolas.jeannerod+fediversity@moduscreate.com";
# defaults.server = "https://acme-staging-v02.api.letsencrypt.org/directory";
};
## NOTE: For a one-machine deployment, this removes the need to provide an
## `s3.garage.<domain>` domain. However, this will quickly stop working once
## we go to multi-machines deployment.
fediversity.internal.garage.api.domain = mkForce "s3.garage.localhost";
};
}

277
fediversity/garage.nix Normal file
View file

@@ -0,0 +1,277 @@
let
# generate one using openssl (somehow)
# XXX: when importing, garage tells you importing is only meant for keys previously generated by garage. is it okay to generate them using openssl? it seems to work fine
snakeoil_key = {
id = "GK22a15201acacbd51cd43e327";
secret = "82b2b4cbef27bf8917b350d5b10a87c92fa9c8b13a415aeeea49726cf335d74e";
};
in
# TODO: expand to a multi-machine setup
{
config,
lib,
pkgs,
...
}:
let
inherit (builtins) toString;
inherit (lib)
types
mkOption
mkEnableOption
optionalString
concatStringsSep
;
inherit (lib.strings) escapeShellArg;
inherit (lib.attrsets) filterAttrs mapAttrs';
cfg = config.services.garage;
fedicfg = config.fediversity.internal.garage;
concatMapAttrs = scriptFn: attrset: concatStringsSep "\n" (lib.mapAttrsToList scriptFn attrset);
ensureBucketScriptFn =
bucket:
{
website,
aliases,
corsRules,
}:
let
bucketArg = escapeShellArg bucket;
corsRulesJSON = escapeShellArg (
builtins.toJSON {
CORSRules = [
{
AllowedHeaders = corsRules.allowedHeaders;
AllowedMethods = corsRules.allowedMethods;
AllowedOrigins = corsRules.allowedOrigins;
}
];
}
);
in
''
# garage bucket info tells us if the bucket already exists
garage bucket info ${bucketArg} || garage bucket create ${bucketArg}
# TODO: should this --deny the website if `website` is false?
${optionalString website ''
garage bucket website --allow ${bucketArg}
''}
${concatStringsSep "\n" (
map (alias: ''
garage bucket alias ${bucketArg} ${escapeShellArg alias}
'') aliases
)}
${optionalString corsRules.enable ''
garage bucket allow --read --write --owner ${bucketArg} --key tmp
# TODO: endpoint-url should not be hard-coded
aws --region ${cfg.settings.s3_api.s3_region} --endpoint-url ${fedicfg.api.url} s3api put-bucket-cors --bucket ${bucketArg} --cors-configuration ${corsRulesJSON}
garage bucket deny --read --write --owner ${bucketArg} --key tmp
''}
'';
ensureBucketsScript = concatMapAttrs ensureBucketScriptFn cfg.ensureBuckets;
ensureAccessScriptFn =
key: bucket:
{
read,
write,
owner,
}:
''
garage bucket allow ${optionalString read "--read"} ${optionalString write "--write"} ${optionalString owner "--owner"} \
${escapeShellArg bucket} --key ${escapeShellArg key}
'';
ensureKeyScriptFn =
key:
{
id,
secret,
ensureAccess,
}:
''
## FIXME: Check whether the key exist and skip this step if that is the case. Get rid of this `|| :`
garage key import --yes -n ${escapeShellArg key} ${escapeShellArg id} ${escapeShellArg secret} || :
${concatMapAttrs (ensureAccessScriptFn key) ensureAccess}
'';
ensureKeysScript = concatMapAttrs ensureKeyScriptFn cfg.ensureKeys;
in
{
# add in options to ensure creation of buckets and keys
options = {
services.garage = {
ensureBuckets = mkOption {
type = types.attrsOf (
types.submodule {
options = {
website = mkOption {
type = types.bool;
default = false;
};
# I think setting corsRules should allow another website to show images from your bucket
corsRules = {
enable = mkEnableOption "CORS Rules";
allowedHeaders = mkOption {
type = types.listOf types.str;
default = [ ];
};
allowedMethods = mkOption {
type = types.listOf types.str;
default = [ ];
};
allowedOrigins = mkOption {
type = types.listOf types.str;
default = [ ];
};
};
aliases = mkOption {
type = types.listOf types.str;
default = [ ];
};
};
}
);
default = { };
};
ensureKeys = mkOption {
type = types.attrsOf (
types.submodule {
# TODO: these should be managed as secrets, not in the nix store
options = {
id = mkOption {
type = types.str;
};
secret = mkOption {
type = types.str;
};
# TODO: assert at least one of these is true
# NOTE: this currently needs to be done at the top level module
ensureAccess = mkOption {
type = types.attrsOf (
types.submodule {
options = {
read = mkOption {
type = types.bool;
default = false;
};
write = mkOption {
type = types.bool;
default = false;
};
owner = mkOption {
type = types.bool;
default = false;
};
};
}
);
default = [ ];
};
};
}
);
default = { };
};
};
};
config = lib.mkIf config.fediversity.enable {
environment.systemPackages = [
pkgs.minio-client
pkgs.awscli
];
networking.firewall.allowedTCPPorts = [
fedicfg.rpc.port
];
services.garage = {
enable = true;
package = pkgs.garage_0_9;
settings = {
replication_mode = "none";
# TODO: use a secret file
rpc_secret = "d576c4478cc7d0d94cfc127138cbb82018b0155c037d1c827dfb6c36be5f6625";
# TODO: why does this have to be set? is there not a sensible default?
rpc_bind_addr = "[::]:${toString fedicfg.rpc.port}";
rpc_public_addr = "[::1]:${toString fedicfg.rpc.port}";
s3_api.api_bind_addr = "[::]:${toString fedicfg.api.port}";
s3_web.bind_addr = "[::]:${toString fedicfg.web.internalPort}";
s3_web.root_domain = ".${fedicfg.web.rootDomain}";
index = "index.html";
s3_api.s3_region = "garage";
s3_api.root_domain = ".${fedicfg.api.domain}";
};
};
## Create a proxy from <bucket>.web.garage.<domain> to localhost:3902 for
## each bucket that has `website = true`.
services.nginx.virtualHosts =
let
value = {
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://localhost:3902";
extraConfig = ''
## copied from https://garagehq.deuxfleurs.fr/documentation/cookbook/reverse-proxy/
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Disable buffering to a temporary file.
proxy_max_temp_file_size 0;
'';
};
};
in
mapAttrs' (bucket: _: {
name = fedicfg.web.domainForBucket bucket;
inherit value;
}) (filterAttrs (_: { website, ... }: website) cfg.ensureBuckets);
systemd.services.ensure-garage = {
after = [ "garage.service" ];
wantedBy = [ "garage.service" ];
serviceConfig = {
Type = "oneshot";
};
path = [
cfg.package
pkgs.perl
pkgs.awscli
];
script = ''
set -xeuo pipefail
# Give Garage time to start up by waiting until something speaks HTTP
# behind Garage's API URL.
until ${pkgs.curl}/bin/curl -sio /dev/null ${fedicfg.api.url}; do sleep 1; done
# XXX: this is very sensitive to being a single instance
# (doing the bare minimum to get garage up and running)
# also, it's crazy that we have to parse command output like this
# TODO: talk to garage maintainer about making this nicer to work with in Nix
# before I do that though, I should figure out how setting it up across multiple machines will work
GARAGE_ID=$(garage node id 2>/dev/null | perl -ne '/(.*)@.*/ && print $1')
garage layout assign -z g1 -c 1G $GARAGE_ID
LAYOUT_VER=$(garage layout show | perl -ne '/Current cluster layout version: (\d*)/ && print $1')
garage layout apply --version $((LAYOUT_VER + 1))
# XXX: this is a hack because we want to write to the buckets here but we're not guaranteed any access keys
# TODO: generate this key here rather than using a well-known key
# TODO: if the key already exists, we get an error; hacked with this `|| :` which needs to be removed
garage key import --yes -n tmp ${snakeoil_key.id} ${snakeoil_key.secret} || :
export AWS_ACCESS_KEY_ID=${snakeoil_key.id};
export AWS_SECRET_ACCESS_KEY=${snakeoil_key.secret};
${ensureBucketsScript}
${ensureKeysScript}
# garage doesn't like re-adding keys that once existed, so we can't delete / recreate it every time
# garage key delete ${snakeoil_key.id} --yes
'';
};
};
}

96
fediversity/mastodon.nix Normal file
View file

@@ -0,0 +1,96 @@
let
snakeoil_key = {
id = "GK3515373e4c851ebaad366558";
secret = "7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34";
};
in
{
config,
lib,
...
}:
lib.mkIf (config.fediversity.enable && config.fediversity.mastodon.enable) {
#### garage setup
services.garage = {
ensureBuckets = {
mastodon = {
website = true;
corsRules = {
enable = true;
allowedHeaders = [ "*" ];
allowedMethods = [ "GET" ];
allowedOrigins = [ "*" ];
};
};
};
ensureKeys = {
mastodon = {
inherit (snakeoil_key) id secret;
ensureAccess = {
mastodon = {
read = true;
write = true;
owner = true;
};
};
};
};
};
services.mastodon = {
extraConfig = rec {
S3_ENABLED = "true";
# TODO: this shouldn't be hard-coded, it should come from the garage configuration
S3_ENDPOINT = config.fediversity.internal.garage.api.url;
S3_REGION = "garage";
S3_BUCKET = "mastodon";
# use <S3_BUCKET>.<S3_ENDPOINT>
S3_OVERRIDE_PATH_STLE = "true";
AWS_ACCESS_KEY_ID = snakeoil_key.id;
AWS_SECRET_ACCESS_KEY = snakeoil_key.secret;
S3_PROTOCOL = "http";
S3_HOSTNAME = config.fediversity.internal.garage.web.rootDomain;
# by default it tries to use "<S3_HOSTNAME>/<S3_BUCKET>"
S3_ALIAS_HOST = "${S3_BUCKET}.${S3_HOSTNAME}";
# SEE: the last section in https://docs.joinmastodon.org/admin/optional/object-storage/
# TODO: can we set up ACLs with garage?
S3_PERMISSION = "";
};
};
#### mastodon setup
# open up access to the mastodon web interface. 80 is necessary if only for ACME
networking.firewall.allowedTCPPorts = [
80
443
];
services.mastodon = {
enable = true;
localDomain = config.fediversity.internal.mastodon.domain;
configureNginx = true;
# from the documentation: recommended is the amount of your CPU cores minus
# one. but it also must be a positive integer
streamingProcesses = lib.max 1 (config.fediversity.temp.cores - 1);
# TODO: configure a mailserver so this works
smtp = {
fromAddress = "noreply@${config.fediversity.internal.mastodon.domain}";
createLocally = false;
};
# TODO: this is hardware-dependent. let's figure it out when we have hardware
# streamingProcesses = 1;
};
security.acme = {
acceptTerms = true;
preliminarySelfsigned = true;
# TODO: configure a mailserver so we can set up acme
# defaults.email = "test@example.com";
};
}

View file

@@ -4,8 +4,18 @@ let
secret = "7295c4201966a02c2c3d25b5cea4a5ff782966a2415e3a196f91924631191395";
};
in
{ config, lib, pkgs, ... }: {
networking.firewall.allowedTCPPorts = [ 80 9000 ];
{
config,
lib,
...
}:
lib.mkIf (config.fediversity.enable && config.fediversity.peertube.enable) {
networking.firewall.allowedTCPPorts = [
80
443
];
services.garage = {
ensureBuckets = {
@@ -19,7 +29,7 @@
allowedOrigins = [ "*" ];
};
};
# TODO: these are too broad, after getting everything works narrow it down to the domain we actually want
# TODO: these are too broad, after getting everything works narrow it down to the domain we actually want
peertube-playlists = {
website = true;
corsRules = {
@@ -50,30 +60,39 @@
};
services.peertube = {
enable = true;
localDomain = config.fediversity.internal.peertube.domain;
# TODO: in most of nixpkgs, these are true by default. upstream that unless there's a good reason not to.
redis.createLocally = true;
database.createLocally = true;
secrets.secretsFile = config.fediversity.temp.peertubeSecretsFile;
settings = {
object_storage = {
enabled = true;
endpoint = "http://s3.garage.localhost:3900";
endpoint = config.fediversity.internal.garage.api.url;
region = "garage";
# not supported by garage
# not supported by garage
# SEE: https://garagehq.deuxfleurs.fr/documentation/connect/apps/#peertube
proxy.proxyify_private_files = false;
web_videos = {
web_videos = rec {
bucket_name = "peertube-videos";
prefix = "";
base_url = "http://peertube-videos.web.garage.localhost:3902";
base_url = config.fediversity.internal.garage.web.urlForBucket bucket_name;
};
videos = {
videos = rec {
bucket_name = "peertube-videos";
prefix = "";
base_url = "http://peertube-videos.web.garage.localhost:3902";
base_url = config.fediversity.internal.garage.web.urlForBucket bucket_name;
};
streaming_playlists = {
streaming_playlists = rec {
bucket_name = "peertube-playlists";
prefix = "";
base_url = "http://peertube-playlists.web.garage.localhost:3902";
base_url = config.fediversity.internal.garage.web.urlForBucket bucket_name;
};
};
};
@@ -84,33 +103,11 @@
AWS_SECRET_ACCESS_KEY=${snakeoil_key.secret}
'';
virtualisation.vmVariant = { config, ... }: {
services.peertube = {
enable = true;
# redirects to localhost, but allows it to have a proper domain name
localDomain = "peertube.localhost";
enableWebHttps = false;
settings = {
listen.hostname = "0.0.0.0";
instance.name = "PeerTube Test VM";
};
# TODO: use agenix
secrets.secretsFile = pkgs.writeText "secret" ''
574e093907d1157ac0f8e760a6deb1035402003af5763135bae9cbd6abe32b24
'';
## Proxying through Nginx
# TODO: in most of nixpkgs, these are true by default. upstream that unless there's a good reason not to.
redis.createLocally = true;
database.createLocally = true;
configureNginx = true;
};
virtualisation.forwardPorts = [
{
from = "host";
host.port = 9000;
guest.port = 9000;
}
];
services.peertube.configureNginx = true;
services.nginx.virtualHosts.${config.services.peertube.localDomain} = {
forceSSL = true;
enableACME = true;
};
}

fediversity/pixelfed-group-permissions.patch Normal file
View file

@@ -0,0 +1,18 @@
diff --git a/config/filesystems.php b/config/filesystems.php
index 00254e93..fc1a58f3 100644
--- a/config/filesystems.php
+++ b/config/filesystems.php
@@ -49,11 +49,11 @@ return [
'permissions' => [
'file' => [
'public' => 0644,
- 'private' => 0600,
+ 'private' => 0640,
],
'dir' => [
'public' => 0755,
- 'private' => 0700,
+ 'private' => 0750,
],
],
],

92
fediversity/pixelfed.nix Normal file
View file

@ -0,0 +1,92 @@
let
snakeoil_key = {
id = "GKb5615457d44214411e673b7b";
secret = "5be6799a88ca9b9d813d1a806b64f15efa49482dbe15339ddfaf7f19cf434987";
};
in
{
config,
lib,
pkgs,
...
}:
lib.mkIf (config.fediversity.enable && config.fediversity.pixelfed.enable) {
services.garage = {
ensureBuckets = {
pixelfed = {
website = true;
# TODO: these are too broad, after getting everything works narrow it down to the domain we actually want
corsRules = {
enable = true;
allowedHeaders = [ "*" ];
allowedMethods = [ "GET" ];
allowedOrigins = [ "*" ];
};
};
};
ensureKeys = {
pixelfed = {
inherit (snakeoil_key) id secret;
ensureAccess = {
pixelfed = {
read = true;
write = true;
owner = true;
};
};
};
};
};
services.pixelfed = {
enable = true;
domain = config.fediversity.internal.pixelfed.domain;
# TODO: secrets management!!!
secretFile = pkgs.writeText "secrets.env" ''
APP_KEY=adKK9EcY8Hcj3PLU7rzG9rJ6KKTOtYfA
'';
## Taeer feels like this way of configuring Nginx is odd; there should
## instead be a `services.pixefed.nginx.enable` option and the actual Nginx
## configuration should be in `services.nginx`. See eg. `pretix`.
##
## TODO: If that indeed makes sense, upstream.
nginx = {
forceSSL = true;
enableACME = true;
# locations."/public/".proxyPass = "${config.fediversity.internal.garage.web.urlForBucket "pixelfed"}/public/";
};
};
services.pixelfed.settings = {
## NOTE: This depends on the targets, eg. universities might want control
## over who has an account. We probably want a universal
## `fediversity.openRegistration` option.
OPEN_REGISTRATION = true;
# DANGEROUSLY_SET_FILESYSTEM_DRIVER = "s3";
FILESYSTEM_CLOUD = "s3";
PF_ENABLE_CLOUD = true;
AWS_ACCESS_KEY_ID = snakeoil_key.id;
AWS_SECRET_ACCESS_KEY = snakeoil_key.secret;
AWS_DEFAULT_REGION = "garage";
AWS_URL = config.fediversity.internal.garage.web.urlForBucket "pixelfed";
AWS_BUCKET = "pixelfed";
AWS_ENDPOINT = config.fediversity.internal.garage.api.url;
AWS_USE_PATH_STYLE_ENDPOINT = false;
};
## Only ever run `pixelfed-data-setup` after `ensure-garage` has done its job.
## Otherwise, everything crashes dramatically.
systemd.services.pixelfed-data-setup = {
after = [ "ensure-garage.service" ];
};
networking.firewall.allowedTCPPorts = [
80
443
];
}

flake.lock
View file

@@ -1,24 +1,184 @@
{
"nodes": {
"disko": {
"inputs": {
"nixpkgs": "nixpkgs"
},
"locked": {
"lastModified": 1727347829,
"narHash": "sha256-y7cW6TjJKy+tu7efxeWI6lyg4VVx/9whx+OmrhmRShU=",
"owner": "nix-community",
"repo": "disko",
"rev": "1879e48907c14a70302ff5d0539c3b9b6f97feaa",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "disko",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1696426674,
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"git-hooks": {
"inputs": {
"flake-compat": "flake-compat",
"gitignore": "gitignore",
"nixpkgs": "nixpkgs_2",
"nixpkgs-stable": "nixpkgs-stable"
},
"locked": {
"lastModified": 1730814269,
"narHash": "sha256-fWPHyhYE6xvMI1eGY3pwBTq85wcy1YXqdzTZF+06nOg=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "d70155fdc00df4628446352fc58adc640cd705c2",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"type": "github"
}
},
"gitignore": {
"inputs": {
"nixpkgs": [
"git-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1708475490,
"narHash": "sha256-g1v0TsWBQPX97ziznfJdWhgMyMGtoBFs102xSYO4syU=",
"lastModified": 1725194671,
"narHash": "sha256-tLGCFEFTB5TaOKkpfw3iYT9dnk4awTP/q4w+ROpMfuw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b833ff01a0d694b910daca6e2ff4a3f26dee478c",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-latest": {
"locked": {
"lastModified": 1727220152,
"narHash": "sha256-6ezRTVBZT25lQkvaPrfJSxYLwqcbNWm6feD/vG1FO0o=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "0e74ca98a74bc7270d28838369593635a5db3260",
"rev": "24959f933187217890b206788a85bfa73ba75949",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-stable": {
"locked": {
"lastModified": 1730741070,
"narHash": "sha256-edm8WG19kWozJ/GqyYx2VjW99EdhjKwbY3ZwdlPAAlo=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "d063c1dd113c91ab27959ba540c0d9753409edf3",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-24.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1730768919,
"narHash": "sha256-8AKquNnnSaJRXZxc5YmF/WfmxiHX6MMZZasRP6RRQkE=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a04d33c0c3f1a59a2c1cb0c6e34cd24500e5a1dc",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_3": {
"locked": {
"lastModified": 1730137230,
"narHash": "sha256-0kW6v0alzWIc/Dc/DoVZ7A9qNScv77bj/zYTKI67HZM=",
"owner": "radvendii",
"repo": "nixpkgs",
"rev": "df815998652a1d00ce7c059a1e5ef7d7c0548c90",
"type": "github"
},
"original": {
"owner": "radvendii",
"ref": "nixos_rebuild_tests",
"repo": "nixpkgs",
"type": "github"
}
},
"pixelfed": {
"flake": false,
"locked": {
"lastModified": 1719823820,
"narHash": "sha256-CKjqnxp7p2z/13zfp4HQ1OAmaoUtqBKS6HFm6TV8Jwg=",
"owner": "pixelfed",
"repo": "pixelfed",
"rev": "4c245cf429330d01fcb8ebeb9aa8c84a9574a645",
"type": "github"
},
"original": {
"owner": "pixelfed",
"ref": "v0.12.3",
"repo": "pixelfed",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
"disko": "disko",
"git-hooks": "git-hooks",
"nixpkgs": "nixpkgs_3",
"nixpkgs-latest": "nixpkgs-latest",
"pixelfed": "pixelfed"
}
}
},

157
flake.nix
View file

@@ -1,42 +1,143 @@
{
description = "Testing mastodon configurations";
inputs = {
nixpkgs.url = "github:nixos/nixpkgs?ref=nixos-unstable";
nixpkgs.url = "github:radvendii/nixpkgs/nixos_rebuild_tests";
nixpkgs-latest.url = "github:nixos/nixpkgs";
git-hooks.url = "github:cachix/git-hooks.nix";
pixelfed = {
url = "github:pixelfed/pixelfed?ref=v0.12.3";
flake = false;
};
disko.url = "github:nix-community/disko";
};
outputs = { self, nixpkgs }:
let
system = "x86_64-linux";
pkgs = nixpkgs.legacyPackages.${system};
in {
outputs =
{
self,
nixpkgs,
nixpkgs-latest,
git-hooks,
pixelfed,
disko,
}:
let
system = "x86_64-linux";
lib = nixpkgs.lib;
pkgs = nixpkgs.legacyPackages.${system};
pkgsLatest = nixpkgs-latest.legacyPackages.${system};
bleedingFediverseOverlay = (
_: _: {
pixelfed = pkgsLatest.pixelfed.overrideAttrs (old: {
src = pixelfed;
patches = (old.patches or [ ]) ++ [ ./fediversity/pixelfed-group-permissions.patch ];
});
## TODO: give mastodon, peertube the same treatment
}
);
in
{
nixosModules = {
## Bleeding-edge fediverse packages
bleedingFediverse = {
nixpkgs.overlays = [ bleedingFediverseOverlay ];
};
## Fediversity modules
fediversity = import ./fediversity;
nixosConfigurations = {
mastodon = nixpkgs.lib.nixosSystem {
inherit system;
modules = [ ./common.nix ./mastodon.nix ./garage.nix ];
## VM-specific modules
interactive-vm = import ./vm/interactive-vm.nix;
garage-vm = import ./vm/garage-vm.nix;
mastodon-vm = import ./vm/mastodon-vm.nix;
peertube-vm = import ./vm/peertube-vm.nix;
pixelfed-vm = import ./vm/pixelfed-vm.nix;
disk-layout = import ./disk-layout.nix;
};
peertube = nixpkgs.lib.nixosSystem {
inherit system;
modules = [ ./common.nix ./peertube.nix ./garage.nix ];
nixosConfigurations = {
mastodon = nixpkgs.lib.nixosSystem {
inherit system;
modules = with self.nixosModules; [
disko.nixosModules.default
disk-layout
bleedingFediverse
fediversity
interactive-vm
garage-vm
mastodon-vm
];
};
peertube = nixpkgs.lib.nixosSystem {
inherit system;
modules = with self.nixosModules; [
disko.nixosModules.default
disk-layout
bleedingFediverse
fediversity
interactive-vm
garage-vm
peertube-vm
];
};
pixelfed = nixpkgs.lib.nixosSystem {
inherit system;
modules = with self.nixosModules; [
disko.nixosModules.default
disk-layout
bleedingFediverse
fediversity
interactive-vm
garage-vm
pixelfed-vm
];
};
all = nixpkgs.lib.nixosSystem {
inherit system;
modules = with self.nixosModules; [
disko.nixosModules.default
disk-layout
bleedingFediverse
fediversity
interactive-vm
garage-vm
peertube-vm
pixelfed-vm
mastodon-vm
];
};
};
pixelfed = nixpkgs.lib.nixosSystem {
inherit system;
modules = [ ./common.nix ./pixelfed.nix ./garage.nix ];
## Fully-featured ISO installer
mkInstaller = import ./installer.nix;
installers = lib.mapAttrs (_: config: self.mkInstaller nixpkgs config) self.nixosConfigurations;
deploy =
let
deployCommand = (pkgs.callPackage ./deploy.nix { });
in
lib.mapAttrs (name: config: deployCommand name config) self.nixosConfigurations;
checks.${system} = {
mastodon-garage = import ./tests/mastodon-garage.nix { inherit pkgs self; };
pixelfed-garage = import ./tests/pixelfed-garage.nix { inherit pkgs self; };
pre-commit = git-hooks.lib.${system}.run {
src = ./.;
hooks = {
nixfmt-rfc-style.enable = true;
deadnix.enable = true;
};
};
};
all = nixpkgs.lib.nixosSystem {
inherit system;
modules = [ ./common.nix ./mastodon.nix ./peertube.nix ./pixelfed.nix ./garage.nix ];
devShells.${system}.default = pkgs.mkShell {
inputs = with pkgs; [
nil
];
shellHook = self.checks.${system}.pre-commit.shellHook;
};
};
devShells.${system}.default = pkgs.mkShell {
inputs = with pkgs; [
nil
];
};
};
}

garage.nix
View file

@@ -1,201 +0,0 @@
let
# generate one using openssl (somehow)
# XXX: when importing, garage tells you importing is only meant for keys previously generated by garage. is it okay to generate them using openssl? it seems to work fine
snakeoil_key = {
id = "GK22a15201acacbd51cd43e327";
secret = "82b2b4cbef27bf8917b350d5b10a87c92fa9c8b13a415aeeea49726cf335d74e";
};
in
# TODO: expand to a multi-machine setup
{ config, lib, pkgs, ... }: {
# add in options to ensure creation of buckets and keys
options =
let
inherit (lib) types mkOption mkEnableOption;
in {
services.garage = {
ensureBuckets = mkOption {
type = types.attrsOf (types.submodule {
options = {
website = mkOption {
type = types.bool;
default = false;
};
# I think setting corsRules should allow another website to show images from your bucket
corsRules = {
enable = mkEnableOption "CORS Rules";
allowedHeaders = mkOption {
type = types.listOf types.str;
default = [];
};
allowedMethods = mkOption {
type = types.listOf types.str;
default = [];
};
allowedOrigins = mkOption {
type = types.listOf types.str;
default = [];
};
};
aliases = mkOption {
type = types.listOf types.str;
default = [];
};
};
});
};
ensureKeys = mkOption {
type = types.attrsOf (types.submodule {
# TODO: these should be managed as secrets, not in the nix store
options = {
id = mkOption {
type = types.str;
};
secret = mkOption {
type = types.str;
};
# TODO: assert at least one of these is true
# currently, needs to be done in the top level module
ensureAccess = mkOption {
type = types.attrsOf (types.submodule {
options = {
read = mkOption {
type = types.bool;
default = false;
};
write = mkOption {
type = types.bool;
default = false;
};
owner = mkOption {
type = types.bool;
default = false;
};
};
});
default = [];
};
};
});
};
};
};
config = {
virtualisation.vmVariant = {
virtualisation.diskSize = 2048;
virtualisation.forwardPorts = [
{
from = "host";
host.port = 3901;
guest.port = 3901;
}
{
from = "host";
host.port = 3902;
guest.port = 3902;
}
];
};
environment.systemPackages = [ pkgs.minio-client pkgs.awscli ];
networking.firewall.allowedTCPPorts = [ 3901 3902 ];
services.garage = {
enable = true;
package = pkgs.garage_0_9;
settings = {
replication_mode = "none";
# TODO: use a secret file
# I'd like to have a NixOS module that declares the need for a secret file
# that way, the need can be met by any secrets solution (agenix, sops-nix, colmena, a nixops4 module, ...)
rpc_secret = "d576c4478cc7d0d94cfc127138cbb82018b0155c037d1c827dfb6c36be5f6625";
# TODO: why does this have to be set? is there not a sensible default?
rpc_bind_addr = "[::]:3901";
rpc_public_addr = "[::1]:3901";
s3_api.api_bind_addr = "[::]:3900";
s3_web.bind_addr = "[::]:3902";
s3_web.root_domain = ".web.garage.localhost";
index = "index.html";
s3_api.s3_region = "garage";
s3_api.root_domain = ".s3.garage.localhost";
};
};
systemd.services.ensure-garage = {
after = [ "garage.service" ];
wantedBy = [ "garage.service" ];
path = [ config.services.garage.package pkgs.perl pkgs.awscli ];
script = ''
set -xeuo pipefail
# give garage time to start up
sleep 3
# XXX: this is very sensitive to being a single instance
# (doing the bare minimum to get garage up and running)
# also, it's crazy that we have to parse command output like this
# TODO: talk to garage maintainer about making this nicer to work with in Nix
# before I do that though, I should figure out how setting it up across multiple machines will work
# You could ask for a change or `--json` flag anyway, and maybe tell them what you're working on.
GARAGE_ID=$(garage node id 2>/dev/null | perl -ne '/(.*)@.*/ && print $1')
garage layout assign -z g1 -c 1G $GARAGE_ID
LAYOUT_VER=$(garage layout show | perl -ne '/Current cluster layout version: (\d*)/ && print $1')
garage layout apply --version $((LAYOUT_VER + 1))
# XXX: this is a hack because we want to write to the buckets here but we're not guaranteed any access keys
# TODO: generate this key here rather than using a well-known key
garage key import --yes -n tmp ${snakeoil_key.id} ${snakeoil_key.secret}
export AWS_ACCESS_KEY_ID=${snakeoil_key.id};
export AWS_SECRET_ACCESS_KEY=${snakeoil_key.secret};
${
lib.concatStringsSep "\n" (lib.mapAttrsToList (bucket: { website, aliases, corsRules }: ''
# garage bucket info tells us if the bucket already exists
garage bucket info ${bucket} || garage bucket create ${bucket}
# TODO: should this --deny the website if `website` is false?
${lib.optionalString website ''
garage bucket website --allow ${/* more robust: */ lib.strings.escapeShellArg bucket}
''}
${lib.concatStringsSep "\n" (map (alias: ''
garage bucket alias ${bucket} ${alias}
'') aliases)}
${lib.optionalString corsRules.enable ''
# TODO: can i turn this whole thing into one builtins.toJSON?
# why not :D
# we also have `lib.strings.escapeShellArg` for the quoting
export CORS=${lib.concatStrings [
"'"
''{"CORSRules":[{''
''"AllowedHeaders":${builtins.toJSON corsRules.allowedHeaders},''
''"AllowedMethods":${builtins.toJSON corsRules.allowedMethods},''
''"AllowedOrigins":${builtins.toJSON corsRules.allowedOrigins}''
''}]}''
"'"
]}
garage bucket allow --read --write --owner ${bucket} --key tmp
aws --endpoint http://s3.garage.localhost:3900 s3api put-bucket-cors --bucket ${bucket} --cors-configuration $CORS
garage bucket deny --read --write --owner ${bucket} --key tmp
''}
'') config.services.garage.ensureBuckets)
# probably nice to factor this out into a function
}
${
lib.concatStringsSep "\n" (lib.mapAttrsToList (key: {id, secret, ensureAccess}: ''
garage key import --yes -n ${key} ${id} ${secret}
${
lib.concatStringsSep "\n" (lib.mapAttrsToList (bucket: { read, write, owner }: ''
garage bucket allow ${lib.optionalString read "--read"} ${lib.optionalString write "--write"} ${lib.optionalString owner "--owner"} ${bucket} --key ${key}
'') ensureAccess)
}
'') config.services.garage.ensureKeys)
}
garage key delete ${snakeoil_key.id} --yes
'';
};
};
}

61
installer.nix Normal file
View file

@@ -0,0 +1,61 @@
/**
Convert a NixOS configuration to one for a minimal installer ISO
WARNING: Running this installer will format the target disk!
*/
{
nixpkgs,
hostKeys ? { },
}:
machine:
let
inherit (builtins) concatStringsSep attrValues mapAttrs;
installer =
{
config,
pkgs,
lib,
...
}:
let
bootstrap = pkgs.writeShellApplication {
name = "bootstrap";
runtimeInputs = with pkgs; [ nixos-install-tools ];
text = ''
${machine.config.system.build.diskoScript}
nixos-install --no-root-password --no-channel-copy --system ${machine.config.system.build.toplevel}
${concatStringsSep "\n" (
attrValues (
mapAttrs (kind: keys: ''
cp ${keys.private} /mnt/etc/ssh/ssh_host_${kind}_key
chmod 600 /mnt/etc/ssh/ssh_host_${kind}_key
cp ${keys.public} /mnt/etc/ssh/ssh_host_${kind}_key.pub
chmod 644 /mnt/etc/ssh/ssh_host_${kind}_key.pub
'') hostKeys
)
)}
poweroff
'';
};
in
{
imports = [
"${nixpkgs}/nixos/modules/installer/cd-dvd/installation-cd-minimal.nix"
];
nixpkgs.hostPlatform = "x86_64-linux";
services.getty.autologinUser = lib.mkForce "root";
programs.bash.loginShellInit = nixpkgs.lib.getExe bootstrap;
isoImage = {
compressImage = false;
squashfsCompression = "lz4";
isoName = lib.mkForce "installer.iso";
## ^^ FIXME: Use a more interesting name or keep the default name and
## use `isoImage.isoName` in the tests.
};
};
in
(nixpkgs.lib.nixosSystem { modules = [ installer ]; }).config.system.build.isoImage

mastodon.nix
View file

@@ -1,190 +0,0 @@
let
snakeoil_key = {
id = "GK3515373e4c851ebaad366558";
secret = "7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34";
};
in
{ config, lib, pkgs, ... }: lib.mkMerge [
{ # garage setup
services.garage = {
ensureBuckets = {
mastodon = {
website = true;
# corsRules = {
# enable = true;
# allowedHeaders = [ "*" ];
# allowedMethods = [ "GET" ];
# allowedOrigins = [ "*" ];
# };
};
};
ensureKeys = {
mastodon = {
inherit (snakeoil_key) id secret;
ensureAccess = {
mastodon = {
read = true;
write = true;
owner = true;
};
};
};
};
};
services.mastodon = {
extraConfig = {
S3_ENABLED = "true";
S3_ENDPOINT = "http://s3.garage.localhost:3900";
S3_REGION = "garage";
S3_BUCKET = "mastodon";
# use <S3_BUCKET>.<S3_ENDPOINT>
S3_OVERRIDE_PATH_STLE = "true";
AWS_ACCESS_KEY_ID = snakeoil_key.id;
AWS_SECRET_ACCESS_KEY = snakeoil_key.secret;
S3_PROTOCOL = "http";
S3_HOSTNAME = "web.garage.localhost:3902";
# by default it tries to use "<S3_HOSTNAME>/<S3_BUCKET>"
# but we want "<S3_BUCKET>.<S3_HOSTNAME>"
S3_ALIAS_HOST = "mastodon.web.garage.localhost:3902";
# XXX: I think we need to set up a proper CDN host
CDN_HOST = "mastodon.web.garage.localhost:3902";
# SEE: the last section in https://docs.joinmastodon.org/admin/optional/object-storage/
# TODO: can we set up ACLs with garage?
S3_PERMISSION = "";
};
};
}
# mastodon setup
{
# open up access to the mastodon web interface
networking.firewall.allowedTCPPorts = [ 443 ];
services.mastodon = {
enable = true;
# TODO: set up a domain name, and a DNS service so that this can run not in a vm
# localDomain = "domain.social";
configureNginx = true;
# TODO: configure a mailserver so this works
# smtp.fromAddress = "mastodon@mastodon.localhost";
# TODO: this is hardware-dependent. let's figure it out when we have hardware
# streamingProcesses = 1;
};
security.acme = {
acceptTerms = true;
preliminarySelfsigned = true;
# TODO: configure a mailserver so we can set up acme
# defaults.email = "test@example.com";
};
}
# VM setup
{
# these configurations only apply when producing a VM (e.g. nixos-rebuild build-vm)
virtualisation.vmVariant = { config, ... }: {
services.mastodon = {
# redirects to localhost, but allows it to have a proper domain name
localDomain = "mastodon.localhost";
smtp = {
fromAddress = "mastodon@mastodon.localhost";
createLocally = false;
};
extraConfig = {
EMAIL_DOMAIN_ALLOWLIST = "example.com";
};
# from the documentation: recommended is the amount of your CPU cores minus one.
# but it also must be a positive integer
streamingProcesses = let
ncores = config.virtualisation.cores;
in
lib.max 1 (ncores - 1);
};
security.acme = {
defaults = {
# invalid server; the systemd service will fail, and we won't get properly signed certificates
# but let's not spam the letsencrypt servers (and we don't own this domain anyways)
server = "https://127.0.0.1";
email = "none";
};
};
virtualisation.memorySize = 2048;
virtualisation.forwardPorts = [
{
from = "host";
host.port = 44443;
guest.port = 443;
}
];
};
}
# mastodon development environment
{
networking.firewall.allowedTCPPorts = [ 55001 ];
virtualisation.vmVariant = { config, ... }: {
services.mastodon = {
# needed so we can directly access mastodon at port 55001
# otherwise, mastodon has to be accessed *from* port 443, which we can't do via port forwarding
enableUnixSocket = false;
extraConfig = {
RAILS_ENV = "development";
# to be accessible from outside the VM
BIND = "0.0.0.0";
# for letter_opener (still doesn't work though)
REMOTE_DEV = "true";
};
};
services.postgresql = {
enable = true;
ensureUsers = [
{
name = config.services.mastodon.database.user;
ensureClauses.createdb = true;
# ensurePermissions doesn't work anymore
# ensurePermissions = {
# "mastodon_development.*" = "ALL PRIVILEGES";
# "mastodon_test.*" = "ALL PRIVILEGES";
# }
}
];
# ensureDatabases = [ "mastodon_development_test" "mastodon_test" ];
};
# run rails db:seed so that mastodon sets up the databases for us
# iirc the postgresql module can also do this kind of thing
systemd.services.mastodon-init-db.script = lib.mkForce ''
# This conditional freaks me out
# Maybe configure psql to output in a more machine-readable format?
if [ `psql -c \
"select count(*) from pg_class c \
join pg_namespace s on s.oid = c.relnamespace \
where s.nspname not in ('pg_catalog', 'pg_toast', 'information_schema') \
and s.nspname not like 'pg_temp%';" | sed -n 3p` -eq 0 ]; then
echo "Seeding database"
rails db:setup
# SAFETY_ASSURED=1 rails db:schema:load
rails db:seed
else
echo "Migrating database (this might be a noop)"
# TODO: this breaks for some reason
# rails db:migrate
fi
'';
virtualisation.forwardPorts = [
{
from = "host";
host.port = 55001;
guest.port = 55001;
}
];
};
}
]

pixelfed.nix
View file

@@ -1,75 +0,0 @@
let
snakeoil_key = {
id = "GKb5615457d44214411e673b7b";
secret = "5be6799a88ca9b9d813d1a806b64f15efa49482dbe15339ddfaf7f19cf434987";
};
in
{ config, lib, pkgs, ... }: {
services.garage = {
ensureBuckets = {
pixelfed = {
website = true;
# TODO: these are too broad, after getting everything works narrow it down to the domain we actually want
corsRules = {
enable = true;
allowedHeaders = [ "*" ];
allowedMethods = [ "GET" ];
allowedOrigins = [ "*" ];
};
};
};
ensureKeys = {
pixelfed = {
inherit (snakeoil_key) id secret;
ensureAccess = {
pixelfed = {
read = true;
write = true;
owner = true;
};
};
};
};
};
# TODO: factor these out so we're only defining e.g. s3.garage.localhost and port 3900 in one place
services.pixelfed.settings = {
FILESYSTEM_CLOUD = "s3";
PF_ENABLE_CLOUD = true;
AWS_ACCESS_KEY_ID = snakeoil_key.id;
AWS_SECRET_ACCESS_KEY = snakeoil_key.secret;
AWS_DEFAULT_REGION = "garage";
AWS_URL = "http://pixelfed.s3.garage.localhost:3900";
AWS_BUCKET = "pixelfed";
AWS_ENDPOINT = "http://s3.garage.localhost:3900";
AWS_USE_PATH_STYLE_ENDPOINT = false;
};
virtualisation.vmVariant = {
networking.firewall.allowedTCPPorts = [ 80 ];
services.pixelfed = {
enable = true;
domain = "pixelfed.localhost";
# TODO: secrets management!
secretFile = pkgs.writeText "secrets.env" ''
APP_KEY=adKK9EcY8Hcj3PLU7rzG9rJ6KKTOtYfA
'';
settings = {
OPEN_REGISTRATION = true;
FORCE_HTTPS_URLS = false;
};
# I feel like this should have an `enable` option and be configured via `services.nginx` rather than mirroring those options here
# TODO: If that indeed makes sense, upstream it.
nginx = {};
};
virtualisation.memorySize = 2048;
virtualisation.forwardPorts = [
{
from = "host";
host.port = 8000;
guest.port = 80;
}
];
};
}

BIN
tests/fediversity.png Normal file (5.8 KiB)

Binary file not shown.

BIN
tests/green.png Normal file (692 B)

Binary file not shown.

153
tests/mastodon-garage.nix Normal file
View file

@@ -0,0 +1,153 @@
{ pkgs, self }:
let
lib = pkgs.lib;
## FIXME: this binding was not used, but maybe we want a side-effect or something?
# rebuildableTest = import ./rebuildableTest.nix pkgs;
seleniumScript =
pkgs.writers.writePython3Bin "selenium-script"
{
libraries = with pkgs.python3Packages; [ selenium ];
}
''
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
print(1)
options = Options()
options.add_argument("--headless")
# devtools don't show up in headless screenshots
# options.add_argument("-devtools")
service = webdriver.FirefoxService(executable_path="${lib.getExe pkgs.geckodriver}") # noqa: E501
driver = webdriver.Firefox(options=options, service=service)
driver.get("http://mastodon.localhost:55001/public/local")
# wait until the statuses load
WebDriverWait(driver, 90).until(
lambda x: x.find_element(By.CLASS_NAME, "status"))
driver.save_screenshot("/mastodon-screenshot.png")
driver.close()
'';
in
pkgs.nixosTest {
name = "test-mastodon-garage";
nodes = {
server =
{ config, ... }:
{
virtualisation.memorySize = lib.mkVMOverride 4096;
imports = with self.nixosModules; [
bleedingFediverse
fediversity
garage-vm
mastodon-vm
];
# TODO: pare down
environment.systemPackages = with pkgs; [
python3
firefox-unwrapped
geckodriver
toot
xh
seleniumScript
helix
imagemagick
];
environment.variables = {
POST_MEDIA = ./green.png;
AWS_ACCESS_KEY_ID = config.services.garage.ensureKeys.mastodon.id;
AWS_SECRET_ACCESS_KEY = config.services.garage.ensureKeys.mastodon.secret;
};
};
};
testScript =
{ nodes, ... }:
''
import re
import time
server.start()
with subtest("Mastodon starts"):
server.wait_for_unit("mastodon-web.service")
# make sure mastodon is fully up and running before we interact with it
# TODO: is there a way to test for this?
time.sleep(180)
with subtest("Account creation"):
account_creation_output = server.succeed("mastodon-tootctl accounts create test --email test@test.com --confirmed --approve")
password_match = re.match('.*New password: ([^\n]*).*', account_creation_output, re.S)
if password_match is None:
raise Exception(f"account creation did not generate a password.\n{account_creation_output}")
password = password_match.group(1)
with subtest("TTY Login"):
server.wait_until_tty_matches("1", "login: ")
server.send_chars("root\n");
with subtest("Log in with toot"):
# toot doesn't provide a way to just specify our login details as arguments, so we have to pretend we're typing them in at the prompt
server.send_chars("toot login_cli --instance http://mastodon.localhost:55001 --email test@test.com\n")
server.wait_until_tty_matches("1", "Password: ")
server.send_chars(password + "\n")
server.wait_until_tty_matches("1", "Successfully logged in.")
with subtest("post text"):
server.succeed("echo 'hello mastodon' | toot post")
with subtest("post image"):
server.succeed("toot post --media $POST_MEDIA")
with subtest("access garage"):
server.succeed("mc alias set garage ${nodes.server.fediversity.internal.garage.api.url} --api s3v4 --path off $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY")
server.succeed("mc ls garage/mastodon")
with subtest("access image in garage"):
image = server.succeed("mc find garage --regex original")
image = image.rstrip()
if image == "":
raise Exception("image posted to mastodon did not get stored in garage")
server.succeed(f"mc cat {image} >/garage-image.webp")
garage_image_hash = server.succeed("identify -quiet -format '%#' /garage-image.webp")
image_hash = server.succeed("identify -quiet -format '%#' $POST_MEDIA")
if garage_image_hash != image_hash:
raise Exception("image stored in garage did not match image uploaded")
with subtest("Content security policy allows garage images"):
headers = server.succeed("xh -h http://mastodon.localhost:55001/public/local")
csp_match = None
# I can't figure out re.MULTILINE
for header in headers.split("\n"):
csp_match = re.match('^Content-Security-Policy: (.*)$', header)
if csp_match is not None:
break
if csp_match is None:
raise Exception("mastodon did not send a content security policy header")
csp = csp_match.group(1)
# the img-src content security policy should include the garage server
## TODO: use `nodes.server.fediversity.internal.garage.api.url` same as above, but beware of escaping the regex. Be careful with port 80 though.
garage_csp = re.match(".*; img-src[^;]*web\.garage\.localhost.*", csp)
if garage_csp is None:
raise Exception("Mastodon's content security policy does not include garage server. image will not be displayed properly on mastodon.")
# this could in theory give a false positive if mastodon changes its color scheme to include pure green.
with subtest("image displays"):
server.succeed("selenium-script")
server.copy_from_vm("/mastodon-screenshot.png", "")
displayed_colors = server.succeed("convert /mastodon-screenshot.png -define histogram:unique-colors=true -format %c histogram:info:")
# check that the green image is displayed somewhere on the page
green_check = re.match(".*#00FF00.*", displayed_colors, re.S)
if green_check is None:
raise Exception("cannot detect the uploaded image on mastodon page.")
'';
}
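
This test exercises the whole chain: tootctl creates an account, toot posts an image, `mc` confirms that the object landed in Garage, and the Content-Security-Policy header is inspected so the browser is actually allowed to load the media back. The CSP part can be reproduced by hand inside the VM; a small sketch using the same hostnames and port as above:

    # headers only; the img-src directive must mention the Garage web domain
    xh -h http://mastodon.localhost:55001/public/local \
      | grep -i '^content-security-policy:'
    # if web.garage.localhost is missing from img-src, uploaded media will not render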

227
tests/pixelfed-garage.nix Normal file

@ -0,0 +1,227 @@
{ pkgs, self }:
let
lib = pkgs.lib;
## FIXME: this binding was not used but maybe we want a side effect or something?
# rebuildableTest = import ./rebuildableTest.nix pkgs;
email = "test@test.com";
password = "testtest";
# FIXME: Replace all the By.XPATH by By.CSS_SELECTOR.
seleniumImports = ''
import sys
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
'';
seleniumSetup = ''
print("Create and configure driver...", file=sys.stderr)
options = Options()
# options.add_argument("--headless=new")
service = webdriver.ChromeService(executable_path="${lib.getExe pkgs.chromedriver}") # noqa: E501
driver = webdriver.Chrome(options=options, service=service)
driver.implicitly_wait(30)
driver.set_window_size(1280, 960)
'';
seleniumPixelfedLogin = ''
print("Open login page...", file=sys.stderr)
driver.get("http://pixelfed.localhost/login")
print("Enter email...", file=sys.stderr)
driver.find_element(By.ID, "email").send_keys("${email}")
print("Enter password...", file=sys.stderr)
driver.find_element(By.ID, "password").send_keys("${password}")
# FIXME: This is disgusting. Instead, find the input of type "submit" in the
# form whose action ends in "/login".
print("Click Login button...", file=sys.stderr)
driver.find_element(By.XPATH, "//button[normalize-space()='Login']").click()
'';
## NOTE: `path` must be a valid python string, either a variable or _quoted_.
seleniumTakeScreenshot = path: ''
print("Take screenshot...", file=sys.stderr)
if not driver.save_screenshot(${path}):
raise Exception("selenium could not save screenshot")
'';
seleniumQuit = ''
print("Quitting...", file=sys.stderr)
driver.quit()
'';
seleniumScriptPostPicture =
pkgs.writers.writePython3Bin "selenium-script-post-picture"
{
libraries = with pkgs.python3Packages; [ selenium ];
}
''
import os
import time
${seleniumImports}
from selenium.webdriver.support.wait import WebDriverWait
${seleniumSetup}
${seleniumPixelfedLogin}
time.sleep(3)
media_path = os.environ['POST_MEDIA']
# Find the new post form and fill it in with our picture and a caption.
print("Click on Create New Post...", file=sys.stderr)
driver.find_element(By.LINK_TEXT, "Create New Post").click()
print("Add file to input element...", file=sys.stderr)
driver.find_element(By.XPATH, "//input[@type='file']").send_keys(media_path)
print("Add a caption", file=sys.stderr)
driver.find_element(By.CSS_SELECTOR, ".media-body textarea").send_keys(
"Fediversity test of image upload to pixelfed with garage storage."
)
time.sleep(3)
print("Click on Post button...", file=sys.stderr)
driver.find_element(By.LINK_TEXT, "Post").click()
# Wait until the post loads, and in particular its picture, then take a
# screenshot of the whole page.
print("Wait for post and image to be loaded...", file=sys.stderr)
img = driver.find_element(
By.XPATH,
"//div[@class='timeline-status-component-content']//img"
)
WebDriverWait(driver, timeout=10).until(
lambda d: d.execute_script("return arguments[0].complete", img)
)
time.sleep(3)
${seleniumTakeScreenshot "\"/home/selenium/screenshot.png\""}
${seleniumQuit}'';
seleniumScriptGetSrc =
pkgs.writers.writePython3Bin "selenium-script-get-src"
{
libraries = with pkgs.python3Packages; [ selenium ];
}
''
${seleniumImports}
${seleniumSetup}
${seleniumPixelfedLogin}
img = driver.find_element(
By.XPATH,
"//div[@class='timeline-status-component-content']//img"
)
# REVIEW: Need to wait for it to be loaded?
print(img.get_attribute('src'))
${seleniumQuit}'';
in
pkgs.nixosTest {
name = "test-pixelfed-garage";
nodes = {
server =
{ config, ... }:
{
services = {
xserver = {
enable = true;
displayManager.lightdm.enable = true;
desktopManager.lxqt.enable = true;
};
displayManager.autoLogin = {
enable = true;
user = "selenium";
};
};
virtualisation.resolution = {
x = 1680;
y = 1050;
};
virtualisation = {
memorySize = lib.mkVMOverride 8192;
cores = 8;
};
imports = with self.nixosModules; [
bleedingFediverse
fediversity
garage-vm
pixelfed-vm
];
# TODO: pare down
environment.systemPackages = with pkgs; [
python3
chromium
chromedriver
xh
seleniumScriptPostPicture
seleniumScriptGetSrc
helix
imagemagick
];
environment.variables = {
POST_MEDIA = ./fediversity.png;
AWS_ACCESS_KEY_ID = config.services.garage.ensureKeys.pixelfed.id;
AWS_SECRET_ACCESS_KEY = config.services.garage.ensureKeys.pixelfed.secret;
## without this we get frivolous errors in the logs
MC_REGION = "garage";
};
# Chrome does not like being run as root
users.users.selenium = {
isNormalUser = true;
};
};
};
testScript =
{ nodes, ... }:
''
import re
server.start()
with subtest("Pixelfed starts"):
server.wait_for_unit("phpfpm-pixelfed.service")
with subtest("Account creation"):
server.succeed("pixelfed-manage user:create --name=test --username=test --email=${email} --password=${password} --confirm_email=1")
# NOTE: This could in theory give a false positive if Pixelfed changes its
# color scheme to include the exact color we look for (same problem as in
# mastodon-garage.nix).
# TODO: For instance, post a red image and check that the green pixel is NOT
# there, then post a green image and check that the green pixel IS there.
with subtest("Image displays"):
server.succeed("su - selenium -c 'selenium-script-post-picture ${email} ${password}'")
server.copy_from_vm("/home/selenium/screenshot.png", "")
displayed_colors = server.succeed("magick /home/selenium/screenshot.png -define histogram:unique-colors=true -format %c histogram:info:")
# check that a color from the uploaded image is displayed somewhere on the page
image_check = re.match(".*#FF0500.*", displayed_colors, re.S)
if image_check is None:
raise Exception("cannot detect the uploaded image on pixelfed page.")
with subtest("access garage"):
server.succeed("mc alias set garage ${nodes.server.fediversity.internal.garage.api.url} --api s3v4 --path off $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY")
server.succeed("mc ls garage/pixelfed")
with subtest("access image in garage"):
image = server.succeed("mc find garage --regex '\\.png' --ignore '*_thumb.png'")
image = image.rstrip()
if image == "":
raise Exception("image posted to Pixelfed did not get stored in garage")
server.succeed(f"mc cat {image} >/garage-image.png")
garage_image_hash = server.succeed("identify -quiet -format '%#' /garage-image.png")
image_hash = server.succeed("identify -quiet -format '%#' $POST_MEDIA")
if garage_image_hash != image_hash:
raise Exception("image stored in garage did not match image uploaded")
with subtest("Check that image comes from garage"):
src = server.succeed("su - selenium -c 'selenium-script-get-src ${email} ${password}'")
if not src.startswith("${nodes.server.fediversity.internal.garage.web.urlForBucket "pixelfed"}"):
raise Exception("image does not come from garage")
'';
}
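
The Garage round trip above is verified with ImageMagick signatures: `identify -format '%#'` prints a hash of the pixel data, so the object fetched back from Garage must produce the same signature as the file that was posted. A standalone sketch of that comparison (paths and `mc` flags as in the test):

    image="$(mc find garage --regex '\.png' --ignore '*_thumb.png' | head -n 1)"
    mc cat "$image" > /garage-image.png
    identify -quiet -format '%#\n' /garage-image.png "$POST_MEDIA"
    # the two printed signatures must be identical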

163
tests/rebuildableTest.nix Normal file

@ -0,0 +1,163 @@
pkgs: test:
let
inherit (pkgs.lib)
mapAttrsToList
concatStringsSep
genAttrs
mkIf
;
inherit (builtins) attrNames;
interactiveConfig = (
{ config, ... }:
{
# so we can run `nix shell nixpkgs#foo` on the machines
nix.extraOptions = ''
extra-experimental-features = nix-command flakes
'';
# so we can ssh in and rebuild them
services.openssh = {
enable = true;
settings = {
PermitRootLogin = "yes";
PermitEmptyPasswords = "yes";
UsePAM = false;
};
};
virtualisation = mkIf (config.networking.hostName == "jumphost") {
forwardPorts = [
{
from = "host";
host.port = 2222;
guest.port = 22;
}
];
};
}
);
sshConfig = pkgs.writeText "ssh-config" ''
Host *
User root
StrictHostKeyChecking no
BatchMode yes
ConnectTimeout 20
UserKnownHostsFile=/dev/null
LogLevel Error # no "added to known hosts"
Host jumphost
Port 2222
HostName localhost
Host * !jumphost
ProxyJump jumphost
'';
# one should first start up the interactive test driver, then start the
# machines, then update the config, and then redeploy with the `rebuildScript`
# associated with the new config.
rebuildScript = pkgs.writeShellScriptBin "rebuild" ''
# create an associative array from machine names to the path of their
# configuration in the Nix store
declare -A configPaths=(${
concatStringsSep " " (
mapAttrsToList (
n: v: ''["${n}"]="${v.system.build.toplevel}"''
) rebuildableTest.driverInteractive.nodes
)
})
rebuild_one() {
machine="$1"
echo "pushing new config to $machine"
if [ -z ''${configPaths[$machine]+x} ]; then
echo 'No machine '"$machine"' in this test.'
exit 1
fi
if ! ssh -F ${sshConfig} $machine true; then
echo 'Couldn'"'"'t connect to '"$machine"'. Make sure you'"'"'ve started it with `'"$machine"'.start()` in the test interactive driver.'
exit 1
fi
# taken from nixos-rebuild (we only want to do the activate part)
cmd=(
"systemd-run"
"-E" "LOCALE_ARCHIVE"
"--collect"
"--no-ask-password"
"--pty"
"--quiet"
"--same-dir"
"--service-type=exec"
"--unit=nixos-rebuild-switch-to-configuration"
"--wait"
"''${configPaths[$machine]}/bin/switch-to-configuration"
"test"
)
if ! ssh -F ${sshConfig} $machine "''${cmd[@]}"; then
echo "warning: error(s) occurred while switching to the new configuration"
exit 1
fi
}
if ! ssh -F ${sshConfig} jumphost true; then
echo 'Couldn'"'"'t connect to jump host. Make sure you are running driverInteractive, and that you'"'"'ve run `jumphost.start()` and `jumphost.forward_port(2222,22)`'
exit 1
fi
if [ -n "$1" ]; then
rebuild_one "$1"
else
for machine in ${concatStringsSep " " (attrNames rebuildableTest.driverInteractive.nodes)}; do
rebuild_one $machine
done
fi
'';
# NOTE: This is awkward because NixOS does not expose the module interface
# that is used to build tests. When we upstream this, we can build it into the
# system more naturally (and expose more of the interface to end users while
# we're at it)
rebuildableTest =
let
preOverride = pkgs.nixosTest (
test
// {
interactive = (test.interactive or { }) // {
# no need to // with test.interactive.nodes here, since we are iterating
# over all of them, and adding back in the config via `imports`
nodes =
genAttrs (attrNames test.nodes or { } ++ attrNames test.interactive.nodes or { } ++ [ "jumphost" ])
(n: {
imports = [
(test.interactive.${n} or { })
interactiveConfig
];
});
};
# override with test.passthru in case someone wants to overwrite us.
passthru = {
inherit rebuildScript sshConfig;
} // (test.passthru or { });
}
);
in
preOverride
// {
driverInteractive = preOverride.driverInteractive.overrideAttrs (old: {
# this comes from runCommand, not mkDerivation, so this is the only
# hook we have to override
buildCommand =
old.buildCommand
+ ''
ln -s ${sshConfig} $out/ssh-config
ln -s ${rebuildScript}/bin/rebuild $out/bin/rebuild
'';
});
};
in
rebuildableTest
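
The intended loop for rebuildableTest.nix is: run the interactive driver, start the machines (including the generated jumphost), edit the test configuration, rebuild, and push the new system closures over SSH without restarting any VM. A rough usage sketch; the `myTest` attribute path is illustrative, while `rebuild`, `ssh-config`, the jumphost and port 2222 all come from the file above:

    nix build .#myTest.driverInteractive   # hypothetical attribute name
    ./result/bin/nixos-test-driver         # then, at the Python prompt: start_all()
    # after editing the test configuration, run `nix build` again and, in another terminal:
    ./result/bin/rebuild server            # or plain `rebuild` to redeploy every node
    ssh -F ./result/ssh-config server      # reach a node through the jumphost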


@ -1,12 +0,0 @@
# `ensureBuckets`
Should be replaced by a resource that creates the bucket, so that we can manage its whole lifecycle, including updates (authorization?) and deletion; possibly a generic S3 bucket resource. We'll see.
A fine solution for now.
Perhaps also useful in a NixOS module, but it could also become tech debt if nobody uses it.
# More exploration
- Use the NixOS test framework?
- Write a test that upgrades Garage.

44
vm/garage-vm.nix Normal file

@ -0,0 +1,44 @@
{
lib,
config,
modulesPath,
...
}:
let
inherit (lib) mkVMOverride mapAttrs' filterAttrs;
cfg = config.services.garage;
fedicfg = config.fediversity.internal.garage;
in
{
imports = [ (modulesPath + "/virtualisation/qemu-vm.nix") ];
services.nginx.virtualHosts =
let
value = {
forceSSL = mkVMOverride false;
enableACME = mkVMOverride false;
};
in
mapAttrs' (bucket: _: {
name = fedicfg.web.domainForBucket bucket;
inherit value;
}) (filterAttrs (_: { website, ... }: website) cfg.ensureBuckets);
virtualisation.diskSize = 2048;
virtualisation.forwardPorts = [
{
from = "host";
host.port = fedicfg.rpc.port;
guest.port = fedicfg.rpc.port;
}
{
from = "host";
host.port = fedicfg.web.internalPort;
guest.port = fedicfg.web.internalPort;
}
];
}

75
vm/interactive-vm.nix Normal file

@ -0,0 +1,75 @@
# customize nixos-rebuild build-vm to be a bit more convenient
{ pkgs, ... }:
{
# let us log in
users.mutableUsers = false;
users.users.root.hashedPassword = "";
services.openssh = {
enable = true;
settings = {
PermitRootLogin = "yes";
PermitEmptyPasswords = "yes";
UsePAM = false;
};
};
# automatically log in
services.getty.autologinUser = "root";
services.getty.helpLine = ''
Type `C-a c` to access the qemu console
Type `C-a x` to quit
'';
# access to convenient things
environment.systemPackages = with pkgs; [
w3m
python3
xterm # for `resize`
];
environment.loginShellInit = ''
eval "$(resize)"
'';
nix.extraOptions = ''
extra-experimental-features = nix-command flakes
'';
# no graphics. see nixos-shell
virtualisation = {
graphics = false;
qemu.consoles = [
"tty0"
"hvc0"
];
qemu.options = [
"-serial null"
"-device virtio-serial"
"-chardev stdio,mux=on,id=char0,signal=off"
"-mon chardev=char0,mode=readline"
"-device virtconsole,chardev=char0,nr=0"
];
};
# we can't forward port 80 or 443, so let's run nginx on a different port
networking.firewall.allowedTCPPorts = [
8443
8080
];
services.nginx.defaultSSLListenPort = 8443;
services.nginx.defaultHTTPListenPort = 8080;
virtualisation.forwardPorts = [
{
from = "host";
host.port = 22222;
guest.port = 22;
}
{
from = "host";
host.port = 8080;
guest.port = 8080;
}
{
from = "host";
host.port = 8443;
guest.port = 8443;
}
];
}
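
With this module included, a `nixos-rebuild build-vm` image drops you into a root shell on the serial console, and the forwarded ports make the guest reachable from the host. A rough sketch (the flake attribute is an assumption; the ports are the ones declared above):

    nixos-rebuild build-vm --flake .#somehost    # attribute name is illustrative
    ./result/bin/run-*-vm                        # serial console; C-a x quits
    # from another terminal on the host:
    ssh -p 22222 -o StrictHostKeyChecking=no root@localhost   # empty root password
    curl http://localhost:8080/                  # nginx, moved off port 80 inside the VM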

120
vm/mastodon-vm.nix Normal file

@ -0,0 +1,120 @@
{
modulesPath,
lib,
config,
...
}:
{
imports = [ (modulesPath + "/virtualisation/qemu-vm.nix") ];
config = lib.mkMerge [
{
fediversity = {
enable = true;
domain = "localhost";
mastodon.enable = true;
temp.cores = config.virtualisation.cores;
};
services.mastodon = {
extraConfig = {
EMAIL_DOMAIN_ALLOWLIST = "example.com";
};
};
security.acme = lib.mkVMOverride {
defaults = {
# invalid server; the systemd service will fail and we won't get properly
# signed certificates, but let's not spam the Let's Encrypt servers (and we
# don't own this domain anyway)
server = "https://127.0.0.1";
email = "none";
};
};
virtualisation.memorySize = 2048;
virtualisation.forwardPorts = [
{
from = "host";
host.port = 44443;
guest.port = 443;
}
];
}
#### run mastodon as development environment
{
networking.firewall.allowedTCPPorts = [ 55001 ];
services.mastodon = {
# needed so we can directly access mastodon at port 55001
# otherwise, mastodon has to be accessed *from* port 443, which we can't do via port forwarding
enableUnixSocket = false;
extraConfig = {
RAILS_ENV = "development";
# to be accessible from outside the VM
BIND = "0.0.0.0";
# for letter_opener (still doesn't work though)
REMOTE_DEV = "true";
LOCAL_DOMAIN = "${config.fediversity.internal.mastodon.domain}:8443";
};
};
services.postgresql = {
enable = true;
ensureUsers = [
{
name = config.services.mastodon.database.user;
ensureClauses.createdb = true;
# ensurePermissions doesn't work anymore
# ensurePermissions = {
# "mastodon_development.*" = "ALL PRIVILEGES";
# "mastodon_test.*" = "ALL PRIVILEGES";
# }
}
];
# ensureDatabases = [ "mastodon_development_test" "mastodon_test" ];
};
# Currently, NixOS only seems to be able to create a single database per
# Postgres user. This works for the production version of Mastodon, which
# is what is packaged in nixpkgs. For development, however, we need two
# databases, mastodon_development and mastodon_test. This used to be
# possible with ensurePermissions, but that option is broken and has been
# removed. Here I copy the mastodon-init-db script from upstream nixpkgs
# and add the single line `rails db:setup`, which asks Mastodon to create
# the Postgres databases for us.
# FIXME: the commented-out lines were breaking things, but presumably they
# are necessary for something.
# TODO: see if we can fix the upstream ensurePermissions logic; see the
# commented-out lines in services.postgresql above for what that
# configuration would look like.
systemd.services.mastodon-init-db.script = lib.mkForce ''
result="$(psql -t --csv -c \
"select count(*) from pg_class c \
join pg_namespace s on s.oid = c.relnamespace \
where s.nspname not in ('pg_catalog', 'pg_toast', 'information_schema') \
and s.nspname not like 'pg_temp%';")" || error_code=$?
if [ "''${error_code:-0}" -ne 0 ]; then
echo "Failure checking if database is seeded. psql gave exit code $error_code"
exit "$error_code"
fi
if [ "$result" -eq 0 ]; then
echo "Seeding database"
rails db:setup
# SAFETY_ASSURED=1 rails db:schema:load
rails db:seed
# else
# echo "Migrating database (this might be a noop)"
# rails db:migrate
fi
'';
virtualisation.forwardPorts = [
{
from = "host";
host.port = 55001;
guest.port = 55001;
}
];
}
];
}
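
The replacement init-db script hinges on a single query: count the user-visible tables and treat an empty result as "not seeded yet". A minimal sketch of that check run by hand inside the VM, as the Mastodon database user, mirroring the script above:

    psql -t --csv -c \
      "select count(*) from pg_class c \
       join pg_namespace s on s.oid = c.relnamespace \
       where s.nspname not in ('pg_catalog', 'pg_toast', 'information_schema') \
       and s.nspname not like 'pg_temp%';"
    # prints 0 on a fresh database, in which case the unit runs
    # `rails db:setup` followed by `rails db:seed`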

22
vm/peertube-vm.nix Normal file

@ -0,0 +1,22 @@
{ modulesPath, ... }:
{
imports = [ (modulesPath + "/virtualisation/qemu-vm.nix") ];
services.peertube = {
enableWebHttps = false;
settings = {
listen.hostname = "0.0.0.0";
instance.name = "PeerTube Test VM";
};
};
virtualisation.forwardPorts = [
{
from = "host";
host.port = 9000;
guest.port = 9000;
}
];
}
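
PeerTube listens on all interfaces inside the guest and the same port is forwarded, so the test instance is reachable directly from the host once the VM variant is running; for example:

    curl http://localhost:9000/   # forwarded straight to PeerTube's HTTP listener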

39
vm/pixelfed-vm.nix Normal file

@ -0,0 +1,39 @@
{
lib,
modulesPath,
...
}:
let
inherit (lib) mkVMOverride;
in
{
imports = [ (modulesPath + "/virtualisation/qemu-vm.nix") ];
fediversity = {
enable = true;
domain = "localhost";
pixelfed.enable = true;
};
services.pixelfed = {
settings = {
FORCE_HTTPS_URLS = false;
};
nginx = {
forceSSL = mkVMOverride false;
enableACME = mkVMOverride false;
};
};
virtualisation.memorySize = 2048;
virtualisation.forwardPorts = [
{
from = "host";
host.port = 8000;
guest.port = 80;
}
];
}
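
Like the other VM variants, this one is meant to be poked at from the host through the forwarded port. A small sketch, assuming the VM is running; the Host header matters because the virtual host is pixelfed.localhost (as used in the Selenium scripts above) while the fediversity domain here is just localhost:

    # host port 8000 forwards to nginx on guest port 80 (see forwardPorts above)
    curl -H 'Host: pixelfed.localhost' http://localhost:8000/
    # or, inside the VM, create a first account:
    pixelfed-manage user:create --name=test --username=test \
      --email=test@test.com --password=testtest --confirm_email=1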