WIP: add attic cache #397

Draft
kiara wants to merge 14 commits from kiara/fediversity:attic into main
Owner

closes #92.

follow-up: https://git.fediversity.eu/Fediversity/Fediversity/compare/main...kiara:access-cache-from-runner

- [x] set up
  - test
- [x] cli
- [ ] CI - may have trouble now using CI secrets, tho #479 may help address that
- [ ] [reconcile with woodpecker](https://git.fediversity.eu/kiara/Fediversity/compare/attic...attic-woodpecker)
- [ ] [move from operator to dev](https://git.fediversity.eu/kiara/Fediversity/commits/branch/attic-infra) - needs #309
- [ ] deal with hard-coded credentials
Niols was assigned by kiara 2025-06-19 17:53:44 +02:00
@ -108,0 +110,4 @@
atticS3KeyConfig =
{ pkgs, ... }:
{
# REVIEW: how were these generated above? how do i add one?
Author
Owner

/cc @Niols hoping to better understand this 😅

Why would attic be operator facing? I’d say it’d be part of the hosting provider’s Fediversity setup. Instead of plugging new noise into PoC code that needs cleanup before anything else, let’s focus on the application data model.

I understand we need that for our CI and #92 is still valid, but the code for that would live somewhere else entirely.

Author
Owner

i think caching is what i can do for #362 (given my lack of access so far to the runner node), which we prioritized above #103.
the initial reason i started exploring having this in `operator` over `dev` was that that's where we'd used applications in garage before, while we've had better testing for operator stuff as well - tho the reason i allowed myself to explore that direction has been #370.


Yes I know, we half jokingly discussed how meta it would be if we deployed our dev infra through the operator workflow. But it would complicate the bootstrapping of our business logic because we first need the simple things to work, and arguably a CI with cache is at least medium fancy. We don’t need cool deployment to improve CI robustness and performance. Just hack it in somehow for now, as long as it’s under version control and reasonably nixed. But we can target our dev infra as a use case once we have a better grasp on the mechanics of the application.

Author
Owner

@fricklerhandwerk i feel being able to iterate on this using our test infrastructure is helping me get this to a state where it may make our CI more usable. if containers fix our problems first tho, that's great.

@ -108,0 +113,4 @@
# REVIEW: how were these generated above? how do i add one?
s3AccessKeyFile = pkgs.writeText "s3AccessKey" "GKaaaaaaaaaaaaaaaaaaaaaaaa";
s3SecretKeyFile = pkgs.writeText "s3SecretKey" "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
};
Owner

I don't actually know. I suppose they have been generated using Garage itself?

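If they did come from Garage, this is roughly the CLI flow; a hedged sketch, since the exact subcommand names vary between Garage releases (`garage key new --name` in older versions, `garage key create` in newer ones), and `attic-key` / `attic-bucket` are placeholder names, not ours:

```shell
# Hypothetical sketch of minting an S3 key pair with Garage's own CLI.
# The key ID it prints has the "GK..." shape seen in the hard-coded values.
garage key create attic-key                # prints "Key ID: GK..." and the secret key
garage key info attic-key                  # show the pair again later
garage bucket allow --read --write attic-bucket --key attic-key
```

So to add one, the answer is presumably: create it on the Garage node and grant it on the bucket, rather than inventing another literal.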
Author
Owner

CI correctly reproduced my local error, but what is this `Broken pipe` error anyway 😿
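For what it's worth, `[Errno 32]` is `EPIPE`: the test driver was still writing to a pipe whose other end had gone away - which fits the log below, where the VM kernel panics and reboots mid-test. A minimal Python reproduction of the errno itself:

```python
# "[Errno 32] Broken pipe": writing to a pipe whose read end is closed
# raises BrokenPipeError (Python ignores SIGPIPE at startup, so we get
# the exception instead of the process dying).
import errno
import os

r, w = os.pipe()
os.close(r)                        # reader goes away (like the rebooting VM)
try:
    os.write(w, b"still talking")
except BrokenPipeError as e:
    print(e.errno == errno.EPIPE)  # True
finally:
    os.close(w)
```

So the `Broken pipe` is a symptom of the VM dying, not the root cause.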
Author
Owner

issues in the log:

failed to download acl
vm-test-run-deployment-cli> deployer # building '/nix/store/l70m8z9m7m1yd96vvkl6bq2qimisvvms-users-groups.json.drv'...
vm-test-run-deployment-cli> deployer # building '/nix/store/zrrn5m3lq51riq1rmc7i6i7vxbg4l5lg-etc-hostname.drv'...
vm-test-run-deployment-cli> deployer # building '/nix/store/0442lf73plz43gym6ahgydmp752dxczk-acl-2.3.2.tar.gz.drv'...
vm-test-run-deployment-cli> deployer # building '/nix/store/7byvrfyi1f00i8jxsc1q8z997j9r10y5-attr-2.5.2.tar.gz.drv'...
vm-test-run-deployment-cli> deployer # warning: error: unable to download 'https://download.savannah.gnu.org/releases/acl/acl-2.3.2.tar.gz': Could not resolve hostname (6) Could not resolve host: download.savannah.gnu.org; retrying in 307 ms
vm-test-run-deployment-cli> deployer # warning: error: unable to download 'https://download.savannah.gnu.org/releases/attr/attr-2.5.2.tar.gz': Could not resolve hostname (6) Could not resolve host: download.savannah.gnu.org; retrying in 254 ms
vm-test-run-deployment-cli> deployer # warning: error: unable to download 'https://download.savannah.gnu.org/releases/acl/acl-2.3.2.tar.gz': Could not resolve hostname (6) Could not resolve host: download.savannah.gnu.org; retrying in 506 ms
vm-test-run-deployment-cli> deployer # warning: error: unable to download 'https://download.savannah.gnu.org/releases/attr/attr-2.5.2.tar.gz': Could not resolve hostname (6) Could not resolve host: download.savannah.gnu.org; retrying in 536 ms
vm-test-run-deployment-cli> deployer # warning: error: unable to download 'https://download.savannah.gnu.org/releases/acl/acl-2.3.2.tar.gz': Could not resolve hostname (6) Could not resolve host: download.savannah.gnu.org; retrying in 1316 ms
vm-test-run-deployment-cli> deployer # warning: error: unable to download 'https://download.savannah.gnu.org/releases/attr/attr-2.5.2.tar.gz': Could not resolve hostname (6) Could not resolve host: download.savannah.gnu.org; retrying in 1030 ms
vm-test-run-deployment-cli> deployer # warning: error: unable to download 'https://download.savannah.gnu.org/releases/attr/attr-2.5.2.tar.gz': Could not resolve hostname (6) Could not resolve host: download.savannah.gnu.org; retrying in 2592 ms
vm-test-run-deployment-cli> deployer # warning: error: unable to download 'https://download.savannah.gnu.org/releases/acl/acl-2.3.2.tar.gz': Could not resolve hostname (6) Could not resolve host: download.savannah.gnu.org; retrying in 2218 ms
vm-test-run-deployment-cli> deployer # error:
vm-test-run-deployment-cli> deployer #        … writing file '/nix/store/kfa6p4i5dmvgirqw87557qbg53xif0qh-acl-2.3.2.tar.gz'
vm-test-run-deployment-cli> deployer # 
vm-test-run-deployment-cli> deployer #        error: unable to download 'https://download.savannah.gnu.org/releases/acl/acl-2.3.2.tar.gz': Could not resolve hostname (6) Could not resolve host: download.savannah.gnu.org
vm-test-run-deployment-cli> deployer # error: builder for '/nix/store/0442lf73plz43gym6ahgydmp752dxczk-acl-2.3.2.tar.gz.drv' failed with exit code 1
vm-test-run-deployment-cli> deployer # error: 1 dependencies of derivation '/nix/store/ra2zlblxs9v86ai9adl8132hn9zyx349-acl-2.3.2.drv' failed to build
vm-test-run-deployment-cli> deployer # building '/nix/store/i2pfgljh1az3v0xga8s4kvn7l1kb1nj4-autoconf-2.72.tar.xz.drv'...
vm-test-run-deployment-cli> deployer # building '/nix/store/3yar2pnvz7ll79z3jlzx09qnhrsi7zj5-automake-1.16.5.tar.xz.drv'...
vm-test-run-deployment-cli> deployer # error: 1 dependencies of derivation '/nix/store/s4wc8bxsrq7wh4j0pkr6fx2vclyh7vq8-libarchive-3.7.7.drv' failed to build
vm-test-run-deployment-cli> deployer # error: 1 dependencies of derivation '/nix/store/9g84y69zyrx6snk86y2nh7h9miwj3wmz-cmake-3.31.5.drv' failed to build
vm-test-run-deployment-cli> deployer # error: 1 dependencies of derivation '/nix/store/7sl47s9yz2a3v8gmbpz28hlzirpw2kyc-elfutils-0.192.drv' failed to build
vm-test-run-deployment-cli> deployer # error: 1 dependencies of derivation '/nix/store/8px6vl7590iy1vzy4jz9fryazfk58qqm-ghc-9.6.6.drv' failed to build
vm-test-run-deployment-cli> deployer # error: 1 dependencies of derivation '/nix/store/dvf00v3jnm5529l60hzdxmgj012qvvan-ShellCheck-0.10.0.drv' failed to build
vm-test-run-deployment-cli> deployer # building '/nix/store/vrf2w2sj0azd3f0d3i0y40bblxkmvb3x-etc-pam-environment.drv'...
vm-test-run-deployment-cli> deployer # building '/nix/store/kh2js99js9kpsfpa122w4kbvy4wfjd93-extra-hosts.drv'...
vm-test-run-deployment-cli> deployer # error: 1 dependencies of derivation '/nix/store/v11d3yyp69rh2myl6xpqappvff4rhs4p-generate-vars.drv' failed to build
vm-test-run-deployment-cli> deployer # error: 1 dependencies of derivation '/nix/store/9f973ra9mkfnfcks7dcw6g44ijs26q6i-system-path.drv' failed to build
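The `Could not resolve host` lines mean the deployer VM is trying to fetch `acl`/`attr` sources at test time, but NixOS test VMs run without network access, so anything built inside the guest must already be in its store. A hedged sketch of the usual fix, assuming a standard `nixosTest`-style setup (node name `deployer` taken from the log; whether `.src` closures or full packages need seeding depends on what actually gets built):

```nix
# Sketch: pre-seed the guest store so nothing is fetched at runtime.
nodes.deployer = { pkgs, ... }: {
  # test-framework option for extra store paths in the VM
  virtualisation.additionalPaths = [ pkgs.acl pkgs.attr ];
};
```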
OOM
vm-test-run-deployment-cli> deployer # building '/nix/store/h35f56gly8qbac1hdxjgs8viqqh4lxhl-nixos-system-peertube-test.drv'...
vm-test-run-deployment-cli> deployer # [  615.502896] nixops4-eval invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
vm-test-run-deployment-cli> deployer # [  615.504228] CPU: 1 UID: 0 PID: 1046 Comm: nixops4-eval Not tainted 6.12.21 #1-NixOS
vm-test-run-deployment-cli> deployer # [  615.504232] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
vm-test-run-deployment-cli> deployer # [  615.504235] Call Trace:
vm-test-run-deployment-cli> deployer # [  615.504237]  <TASK>
vm-test-run-deployment-cli> deployer # [  615.504239]  dump_stack_lvl+0x5d/0x90
vm-test-run-deployment-cli> deployer # [  615.504254]  dump_header+0x43/0x1c0
vm-test-run-deployment-cli> deployer # [  615.504257]  out_of_memory.cold+0x35/0x78
vm-test-run-deployment-cli> deployer # [  615.504260]  __alloc_pages_noprof+0xbb2/0x1160
vm-test-run-deployment-cli> deployer # [  615.504276]  alloc_pages_mpol_noprof+0xd7/0x1d0
vm-test-run-deployment-cli> deployer # [  615.504280]  folio_alloc_noprof+0x5b/0xb0
vm-test-run-deployment-cli> deployer # [  615.504281]  __filemap_get_folio+0x1ed/0x340
vm-test-run-deployment-cli> deployer # [  615.504284]  filemap_fault+0x20c/0xd70
vm-test-run-deployment-cli> deployer # [  615.504286]  ? __pfx_lru_add+0x10/0x10
vm-test-run-deployment-cli> deployer # [  615.504292]  __do_fault+0x33/0x180
vm-test-run-deployment-cli> deployer # [  615.504306]  do_fault+0x380/0x550
vm-test-run-deployment-cli> deployer # [  615.504308]  __handle_mm_fault+0x7d0/0xfb0
vm-test-run-deployment-cli> deployer # [  615.504311]  handle_mm_fault+0xe2/0x2d0
vm-test-run-deployment-cli> deployer # [  615.504314]  do_user_addr_fault+0x227/0x640
vm-test-run-deployment-cli> deployer # [  615.504317]  exc_page_fault+0x71/0x160
vm-test-run-deployment-cli> deployer # [  615.504322]  asm_exc_page_fault+0x26/0x30
vm-test-run-deployment-cli> deployer # [  615.504327] RIP: 0033:0x7f19aa909c90
vm-test-run-deployment-cli> deployer # [  615.504351] Code: Unable to access opcode bytes at 0x7f19aa909c66.
vm-test-run-deployment-cli> deployer # [  615.504354] RSP: 002b:00007ffdc2b108b8 EFLAGS: 00010246
vm-test-run-deployment-cli> deployer # [  615.504357] RAX: 00007ffdc2b10c98 RBX: 00007ffdc2b10c98 RCX: 0000000000000000
vm-test-run-deployment-cli> deployer # [  615.504360] RDX: 00005582e2176a28 RSI: 0000000000000000 RDI: 00005582ddfbd440
vm-test-run-deployment-cli> deployer # [  615.504361] RBP: 00007ffdc2b10ac0 R08: 00007ffdc2b10a00 R09: 0000000000bea2b3
vm-test-run-deployment-cli> deployer # [  615.504362] R10: 00005582ddfbd440 R11: 0000000000000000 R12: 00007f18b3d26d40
vm-test-run-deployment-cli> deployer # [  615.504363] R13: 00007ffdc2b10980 R14: 00005582ddfbd440 R15: 0000000000000001
vm-test-run-deployment-cli> deployer # [  615.504365]  </TASK>
vm-test-run-deployment-cli> deployer # [  615.504378] Mem-Info:
vm-test-run-deployment-cli> deployer # [  615.522672] active_anon:773682 inactive_anon:122684 isolated_anon:0
vm-test-run-deployment-cli> deployer # [  615.522672]  active_file:57 inactive_file:706 isolated_file:0
vm-test-run-deployment-cli> deployer # [  615.522672]  unevictable:0 dirty:495 writeback:0
vm-test-run-deployment-cli> deployer # [  615.522672]  slab_reclaimable:25542 slab_unreclaimable:44634
vm-test-run-deployment-cli> deployer # [  615.522672]  mapped:1 shmem:96655 pagetables:1938
vm-test-run-deployment-cli> deployer # [  615.522672]  sec_pagetables:0 bounce:0
vm-test-run-deployment-cli> deployer # [  615.522672]  kernel_misc_reclaimable:0
vm-test-run-deployment-cli> deployer # [  615.522672]  free:20640 free_pcp:3205 free_cma:0
vm-test-run-deployment-cli> deployer # [  615.527285] Node 0 active_anon:3414056kB inactive_anon:171408kB active_file:1536kB inactive_file:1492kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:4kB dirty:1980kB writeback:0kB shmem:386620kB shmem_thp:0kB shmem_pmdmapped:0kB anon_thp:0kB writeback_tmp:0kB kernel_stack:1696kB pagetables:7752kB sec_pagetables:0kB all_unreclaimable? yes
vm-test-run-deployment-cli> deployer # [  615.530948] Node 0 DMA free:14784kB boost:0kB min:256kB low:320kB high:384kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
vm-test-run-deployment-cli> deployer # [  615.534017] lowmem_reserve[]: 0 2936 3886 3886 3886
vm-test-run-deployment-cli> deployer # [  615.535077] Node 0 DMA32 free:53540kB boost:0kB min:50868kB low:63584kB high:76300kB reserved_highatomic:0KB active_anon:147888kB inactive_anon:2726544kB active_file:316kB inactive_file:0kB unevictable:0kB writepending:32kB present:3129196kB managed:3025412kB mlocked:0kB bounce:0kB free_pcp:4756kB local_pcp:4028kB free_cma:0kB
vm-test-run-deployment-cli> deployer # [  615.539323] lowmem_reserve[]: 0 0 949 949 949
vm-test-run-deployment-cli> deployer # [  615.540227] Node 0 Normal free:14236kB boost:0kB min:16452kB low:20564kB high:24676kB reserved_highatomic:0KB active_anon:98120kB inactive_anon:612912kB active_file:2332kB inactive_file:916kB unevictable:0kB writepending:1948kB present:1048576kB managed:972744kB mlocked:0kB bounce:0kB free_pcp:8064kB local_pcp:8060kB free_cma:0kB
vm-test-run-deployment-cli> deployer # [  615.544483] lowmem_reserve[]: 0 0 0 0 0
vm-test-run-deployment-cli> deployer # [  615.545311] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 1*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 0*1024kB 1*2048kB (M) 3*4096kB (M) = 14784kB
vm-test-run-deployment-cli> deployer # [  615.547159] Node 0 DMA32: 1008*4kB (UME) 503*8kB (UME) 667*16kB (UME) 340*32kB (UME) 146*64kB (UME) 66*128kB (UME) 19*256kB (UME) 0*512kB 1*1024kB (M) 0*2048kB 0*4096kB = 53288kB
vm-test-run-deployment-cli> deployer # [  615.549690] Node 0 Normal: 5*4kB (ME) 356*8kB (UME) 370*16kB (UME) 163*32kB (UME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 14004kB
vm-test-run-deployment-cli> deployer # [  615.551602] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
vm-test-run-deployment-cli> deployer # [  615.553014] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
vm-test-run-deployment-cli> deployer # [  615.554408] 97396 total pagecache pages
vm-test-run-deployment-cli> deployer # [  615.555266] 0 pages in swap cache
vm-test-run-deployment-cli> deployer # [  615.556072] Free swap  = 0kB
vm-test-run-deployment-cli> deployer # [  615.556787] Total swap = 0kB
vm-test-run-deployment-cli> deployer # [  615.557474] 1048441 pages RAM
vm-test-run-deployment-cli> deployer # [  615.558224] 0 pages HighMem/MovableOnly
vm-test-run-deployment-cli> deployer # [  615.559079] 45062 pages reserved
vm-test-run-deployment-cli> deployer # [  615.559849] 0 pages cma reserved
vm-test-run-deployment-cli> deployer # [  615.560564] 0 pages hwpoisoned
vm-test-run-deployment-cli> deployer # [  615.561653] Memory allocations:
vm-test-run-deployment-cli> deployer # [  615.562361]     3.01 GiB   789603 mm/memory.c:1062 func:folio_prealloc
vm-test-run-deployment-cli> deployer # [  615.563543]      378 MiB    96655 mm/shmem.c:1774 func:shmem_alloc_folio
vm-test-run-deployment-cli> deployer # [  615.564699]      239 MiB    35665 mm/slub.c:2433 func:alloc_slab_page
vm-test-run-deployment-cli> deployer # [  615.565808]     80.1 MiB   111635 mm/shmem.c:4764 func:shmem_alloc_inode
vm-test-run-deployment-cli> deployer # [  615.566983]     53.6 MiB        0 mm/compaction.c:1881 func:compaction_alloc
vm-test-run-deployment-cli> deployer # [  615.568230]     39.5 MiB    10108 mm/memory.c:1064 func:folio_prealloc
vm-test-run-deployment-cli> deployer # [  615.569365]     30.6 MiB     3261 mm/slub.c:2435 func:alloc_slab_page
vm-test-run-deployment-cli> deployer # [  615.570553]     20.8 MiB   113785 fs/dcache.c:1636 func:__d_alloc
vm-test-run-deployment-cli> deployer # [  615.571642]     13.6 MiB     3890 mm/execmem.c:31 func:__execmem_alloc
vm-test-run-deployment-cli> deployer # [  615.572759]     8.18 MiB   119200 security/security.c:756 func:lsm_inode_alloc
vm-test-run-deployment-cli> deployer # [  615.574012] Tasks state (memory values in pages):
vm-test-run-deployment-cli> deployer # [  615.575014] [  pid  ]   uid  tgid total_vm      rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
vm-test-run-deployment-cli> deployer # [  615.576712] [    424]     0   424     8104      246      224       20         2    81920        0          -250 systemd-journal
vm-test-run-deployment-cli> deployer # [  615.578432] [    426]   997   426     4041      224      224        0         0    69632        0          -900 systemd-oomd
vm-test-run-deployment-cli> deployer # [  615.580157] [    457]     0   457     8643      407      384       23         0    86016        0         -1000 systemd-udevd
vm-test-run-deployment-cli> deployer # [  615.581944] [    728]     0   728     1956      115       96       19         0    49152        0             0 bash
vm-test-run-deployment-cli> deployer # [  615.583537] [    732]     4   732     3372      258      224       34         0    61440        0          -900 dbus-daemon
vm-test-run-deployment-cli> deployer # [  615.585210] [    759]     0   759     4103      268      224       44         0    69632        0             0 systemd-logind
vm-test-run-deployment-cli> deployer # [  615.586978] [    797]   998   797   172232      378      352       26         0   151552        0             0 nsncd
vm-test-run-deployment-cli> deployer # [  615.588627] [    838]   999   838      894      148      104       44         0    49152        0             0 dhcpcd
vm-test-run-deployment-cli> deployer # [  615.590248] [    884]     0   884     2021       38       32        6         0    61440        0             0 agetty
vm-test-run-deployment-cli> deployer # [  615.591931] [   1041]     0  1041     1923      119       96       23         0    53248        0             0 bash
vm-test-run-deployment-cli> deployer # [  615.593495] [   1042]     0  1042     1956      147      114       33         0    49152        0             0 bash
vm-test-run-deployment-cli> deployer # [  615.595106] [   1043]     0  1043     3412      158      128       30         0    61440        0             0 base64
vm-test-run-deployment-cli> deployer # [  615.596703] [   1044]     0  1044    18500       58       32       26         0    61440        0             0 nixops4
vm-test-run-deployment-cli> deployer # [  615.598272] [   1046]     0  1046  1072515   796387   796387        0         0  6651904        0             0 nixops4-eval
vm-test-run-deployment-cli> deployer # [  615.600007] Kernel panic - not syncing: Out of memory: compulsory panic_on_oom is enabled
vm-test-run-deployment-cli> deployer # [  615.601253] CPU: 1 UID: 0 PID: 1046 Comm: nixops4-eval Not tainted 6.12.21 #1-NixOS
vm-test-run-deployment-cli> deployer # [  615.602460] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
vm-test-run-deployment-cli> deployer # [  615.604095] Call Trace:
vm-test-run-deployment-cli> deployer # [  615.604711]  <TASK>
vm-test-run-deployment-cli> deployer # [  615.605235]  dump_stack_lvl+0x5d/0x90
vm-test-run-deployment-cli> deployer # [  615.605971]  panic+0x118/0x2db
vm-test-run-deployment-cli> deployer # [  615.606628]  out_of_memory.cold+0x58/0x78
vm-test-run-deployment-cli> deployer # [  615.607344]  __alloc_pages_noprof+0xbb2/0x1160
vm-test-run-deployment-cli> deployer # [  615.608172]  alloc_pages_mpol_noprof+0xd7/0x1d0
vm-test-run-deployment-cli> deployer # [  615.609023]  folio_alloc_noprof+0x5b/0xb0
vm-test-run-deployment-cli> deployer # [  615.609746]  __filemap_get_folio+0x1ed/0x340
vm-test-run-deployment-cli> deployer # [  615.610441]  filemap_fault+0x20c/0xd70
vm-test-run-deployment-cli> deployer # [  615.611148]  ? __pfx_lru_add+0x10/0x10
vm-test-run-deployment-cli> deployer # [  615.611906]  __do_fault+0x33/0x180
vm-test-run-deployment-cli> deployer # [  615.612535]  do_fault+0x380/0x550
vm-test-run-deployment-cli> deployer # [  615.613202]  __handle_mm_fault+0x7d0/0xfb0
vm-test-run-deployment-cli> deployer # [  615.613988]  handle_mm_fault+0xe2/0x2d0
vm-test-run-deployment-cli> deployer # [  615.614687]  do_user_addr_fault+0x227/0x640
vm-test-run-deployment-cli> deployer # [  615.615425]  exc_page_fault+0x71/0x160
vm-test-run-deployment-cli> deployer # [  615.616164]  asm_exc_page_fault+0x26/0x30
vm-test-run-deployment-cli> deployer # [  615.616972] RIP: 0033:0x7f19aa909c90
vm-test-run-deployment-cli> deployer # [  615.617650] Code: Unable to access opcode bytes at 0x7f19aa909c66.
vm-test-run-deployment-cli> deployer # [  615.618621] RSP: 002b:00007ffdc2b108b8 EFLAGS: 00010246
vm-test-run-deployment-cli> deployer # [  615.619454] RAX: 00007ffdc2b10c98 RBX: 00007ffdc2b10c98 RCX: 0000000000000000
vm-test-run-deployment-cli> deployer # [  615.620563] RDX: 00005582e2176a28 RSI: 0000000000000000 RDI: 00005582ddfbd440
vm-test-run-deployment-cli> deployer # [  615.621680] RBP: 00007ffdc2b10ac0 R08: 00007ffdc2b10a00 R09: 0000000000bea2b3
vm-test-run-deployment-cli> deployer # [  615.622759] R10: 00005582ddfbd440 R11: 0000000000000000 R12: 00007f18b3d26d40
vm-test-run-deployment-cli> deployer # [  615.623925] R13: 00007ffdc2b10980 R14: 00005582ddfbd440 R15: 0000000000000001
vm-test-run-deployment-cli> deployer # [  615.625021]  </TASK>
vm-test-run-deployment-cli> deployer # [  615.625760] Kernel Offset: 0x32400000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
vm-test-run-deployment-cli> deployer # [  615.627207] Rebooting in 1 seconds..
vm-test-run-deployment-cli> Test "Run deployment with no services enabled" failed with error: "[Errno 32] Broken pipe"
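Reading the OOM dump: `nixops4-eval` alone is at ~796k RSS pages (~3 GiB) with `Free swap = 0kB`, and the test kernel has `panic_on_oom` enabled, so the first failed allocation panics the VM. A hedged workaround, assuming this is a NixOS VM test, is to give the guest more memory (standard `virtualisation` test options; the 8192 figure is a guess, not a measured requirement):

```nix
# Sketch: raise the deployer VM's RAM so nixops4-eval can finish evaluating.
nodes.deployer = { ... }: {
  virtualisation.memorySize = 8192; # MiB
  virtualisation.cores = 2;
};
```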
1041] 0 1041 1923 119 96 23 0 53248 0 0 bash vm-test-run-deployment-cli> deployer # [ 615.593495] [ 1042] 0 1042 1956 147 114 33 0 49152 0 0 bash vm-test-run-deployment-cli> deployer # [ 615.595106] [ 1043] 0 1043 3412 158 128 30 0 61440 0 0 base64 vm-test-run-deployment-cli> deployer # [ 615.596703] [ 1044] 0 1044 18500 58 32 26 0 61440 0 0 nixops4 vm-test-run-deployment-cli> deployer # [ 615.598272] [ 1046] 0 1046 1072515 796387 796387 0 0 6651904 0 0 nixops4-eval vm-test-run-deployment-cli> deployer # [ 615.600007] Kernel panic - not syncing: Out of memory: compulsory panic_on_oom is enabled vm-test-run-deployment-cli> deployer # [ 615.601253] CPU: 1 UID: 0 PID: 1046 Comm: nixops4-eval Not tainted 6.12.21 #1-NixOS vm-test-run-deployment-cli> deployer # [ 615.602460] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014 vm-test-run-deployment-cli> deployer # [ 615.604095] Call Trace: vm-test-run-deployment-cli> deployer # [ 615.604711] <TASK> vm-test-run-deployment-cli> deployer # [ 615.605235] dump_stack_lvl+0x5d/0x90 vm-test-run-deployment-cli> deployer # [ 615.605971] panic+0x118/0x2db vm-test-run-deployment-cli> deployer # [ 615.606628] out_of_memory.cold+0x58/0x78 vm-test-run-deployment-cli> deployer # [ 615.607344] __alloc_pages_noprof+0xbb2/0x1160 vm-test-run-deployment-cli> deployer # [ 615.608172] alloc_pages_mpol_noprof+0xd7/0x1d0 vm-test-run-deployment-cli> deployer # [ 615.609023] folio_alloc_noprof+0x5b/0xb0 vm-test-run-deployment-cli> deployer # [ 615.609746] __filemap_get_folio+0x1ed/0x340 vm-test-run-deployment-cli> deployer # [ 615.610441] filemap_fault+0x20c/0xd70 vm-test-run-deployment-cli> deployer # [ 615.611148] ? 
__pfx_lru_add+0x10/0x10 vm-test-run-deployment-cli> deployer # [ 615.611906] __do_fault+0x33/0x180 vm-test-run-deployment-cli> deployer # [ 615.612535] do_fault+0x380/0x550 vm-test-run-deployment-cli> deployer # [ 615.613202] __handle_mm_fault+0x7d0/0xfb0 vm-test-run-deployment-cli> deployer # [ 615.613988] handle_mm_fault+0xe2/0x2d0 vm-test-run-deployment-cli> deployer # [ 615.614687] do_user_addr_fault+0x227/0x640 vm-test-run-deployment-cli> deployer # [ 615.615425] exc_page_fault+0x71/0x160 vm-test-run-deployment-cli> deployer # [ 615.616164] asm_exc_page_fault+0x26/0x30 vm-test-run-deployment-cli> deployer # [ 615.616972] RIP: 0033:0x7f19aa909c90 vm-test-run-deployment-cli> deployer # [ 615.617650] Code: Unable to access opcode bytes at 0x7f19aa909c66. vm-test-run-deployment-cli> deployer # [ 615.618621] RSP: 002b:00007ffdc2b108b8 EFLAGS: 00010246 vm-test-run-deployment-cli> deployer # [ 615.619454] RAX: 00007ffdc2b10c98 RBX: 00007ffdc2b10c98 RCX: 0000000000000000 vm-test-run-deployment-cli> deployer # [ 615.620563] RDX: 00005582e2176a28 RSI: 0000000000000000 RDI: 00005582ddfbd440 vm-test-run-deployment-cli> deployer # [ 615.621680] RBP: 00007ffdc2b10ac0 R08: 00007ffdc2b10a00 R09: 0000000000bea2b3 vm-test-run-deployment-cli> deployer # [ 615.622759] R10: 00005582ddfbd440 R11: 0000000000000000 R12: 00007f18b3d26d40 vm-test-run-deployment-cli> deployer # [ 615.623925] R13: 00007ffdc2b10980 R14: 00005582ddfbd440 R15: 0000000000000001 vm-test-run-deployment-cli> deployer # [ 615.625021] </TASK> vm-test-run-deployment-cli> deployer # [ 615.625760] Kernel Offset: 0x32400000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff) vm-test-run-deployment-cli> deployer # [ 615.627207] Rebooting in 1 seconds.. vm-test-run-deployment-cli> Test "Run deployment with no services enabled" failed with error: "[Errno 32] Broken pipe" ``` </details>
Owner

IIUC, the OOM can be fixed by changing:

```nix
virtualisation = {
  ## NOTE: The deployer machine needs more RAM and disk than the
  ## default. These values have been trimmed down to the gigabyte.
  ## Memory use is expected to be dominated by the NixOS evaluation,
  ## which happens on the deployer.
  memorySize = 4 * 1024;
  diskSize = 4 * 1024;
  cores = 2;
};
```

For the “failed to download”, that's nastier: we're back to trying to guess packages that need to be added to the derivations. A first step would be to enable the new Attic Fediversity module in `check-deployment-cli` and `check-deployment-panel` and see if that fixes it. See:

```nix
system.extraDependenciesFromModule = {
  imports = [ ../../../services/fediversity ];
  fediversity = {
    domain = "fediversity.net"; # would write `dummy` but that would not type
    garage.enable = true;
    mastodon = {
      enable = true;
      s3AccessKeyFile = dummyFile;
      s3SecretKeyFile = dummyFile;
    };
    peertube = {
      enable = true;
      secretsFile = dummyFile;
      s3AccessKeyFile = dummyFile;
      s3SecretKeyFile = dummyFile;
    };
    pixelfed = {
      enable = true;
      s3AccessKeyFile = dummyFile;
      s3SecretKeyFile = dummyFile;
    };
    temp.cores = 1;
    temp.initialUser = {
      username = "dummy";
      displayName = "dummy";
      email = "dummy";
      passwordFile = dummyFile;
    };
  };
};
```

Author
Owner

the OOM i solved.

the `acl` download error persists despite a seemingly relevant patch.
```patch
diff --git a/deployment/check/cli/nixosTest.nix b/deployment/check/cli/nixosTest.nix
index 4945d7c..5b11f69 100644
--- a/deployment/check/cli/nixosTest.nix
+++ b/deployment/check/cli/nixosTest.nix
@@ -23,6 +23,7 @@ in
         peertube.inputDerivation
         gixy
         gixy.inputDerivation
+        pkgs.acl
       ];

       system.extraDependenciesFromModule = {
diff --git a/deployment/check/common/deployerNode.nix b/deployment/check/common/deployerNode.nix
index 407aac7..575667e 100644
--- a/deployment/check/common/deployerNode.nix
+++ b/deployment/check/common/deployerNode.nix
@@ -65,6 +65,7 @@ in

         pkgs.stdenv
         pkgs.stdenvNoCC
+        pkgs.acl
       ]
       ++ (
         let
```

the log seems to continue for a while tho, making me unsure if the ACL thing is fatal, tho i don't see clear fatal errors in the log to begin with, hm.

i may have use for a second opinion here.

Owner

@kiara wrote in Fediversity/Fediversity#397 (comment):

> the OOM i solved.

yay \o

> the `acl` download error persists despite a seemingly relevant patch.

This is not what you should do. Probably, your Attic module pulls much more than just `pkgs.acl`. Try enabling it in `extraDependenciesFromModule`, as suggested in Fediversity/Fediversity#397 (comment).
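A minimal sketch of what enabling the module there could look like, modelled on the `extraDependenciesFromModule` snippet quoted earlier (the module path and the exact `attic` option name are assumptions based on this PR's `attic.enable` switch):

```nix
system.extraDependenciesFromModule = {
  imports = [ ../../../services/fediversity ];
  fediversity = {
    domain = "fediversity.net";
    # hypothetical: flip on the new Attic module so its whole closure
    # ends up in the test VM's store, instead of hand-picking packages
    attic.enable = true;
  };
};
```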

Author
Owner

my [attempt](https://git.fediversity.eu/kiara/Fediversity/compare/attic...attic-extra-dependencies) following that advice, unfortunately to no avail

Owner

My two last commits seem to fix it locally. Let's see what the CI says.

The Attic service got its own switch. I suppose we mean for it to be one of the Fediversity services that we provide? This is probably just another formulation of @fricklerhandwerk's question.

I see you add code to click on the `attic.enable` checkbox, but you haven't added the checkbox (or anything) to the panel if I'm not mistaken.

You haven't mentioned the changes to `sources.json` but they are not trivial. How stable are those dependencies? What do your forks provide that the upstream does not?

Also, in general, please stop packing together in a WIP PR changes that are unrelated to each other. This makes the diff harder to read, errors harder to debug, and it increases the chances that the PR never lands, in which case we would lose some changes that are in fact perfectly reasonable.

For instance, this PR contains:

- Random flipping of booleans in `configuration.sample.json` (568cafb2f9). If you believe this is better that way, just open a three-line PR that does this and it will get merged instantly. Or… in this case, you would see that CI rejects it because it breaks the tests, and you wouldn't have (and I wouldn't have) to debug a 500-line PR.

- A commit called “unrelated improvements” (eb63c9072a). If you recognised that this was unrelated, why put it in this PR? Why add 15 lines to an already big diff?

- Changes to `infra/common` (15f1909de3). Just put them in their own mini PR and they will be merged within a day. Now, they just make it harder to read the actual change that your PR is hoping to implement.

Author
Owner

nice - thank you @Niols!

@Niols wrote in Fediversity/Fediversity#397 (comment):

> My two last commits seem to fix it locally.

thanks! how did you think of this package, despite it not getting mentioned? was this some reverse dependencies trick?

> please stop packing together in a WIP PR changes that are unrelated to each other.

apologies, agree, I should have cleaned better before requesting feedback.

to answer a specific question there tho:

> If you believe this is better that way, just open a three lines PR that does this and it will get merged instantly

I believed it might have been needed to test this, and had been in the process of validating that, tho facing errors I hadn't gotten far enough to find the answer.

> I see you add code to click on the `attic.enable` checkbox, but you haven't added the checkbox (or anything) to the panel if I'm not mistaken.

is the GUI not just generated from `deployment/options.nix`?

> Let's see what the CI says.

CI looks good!

> The Attic service got its own switch. I suppose we mean for it to be one of the Fediversity services that we provide? This is probably just another formulation of @fricklerhandwerk's question.

i just cared about getting this usable to us, and being able to use the garage and testing code i considered to help get closer to that - tho for our purposes, technically it might serve us even if deployed 'as an operator' for now. i'd agree tho that isn't preferable, #370 experiments aside.

> You haven't mentioned the changes to `sources.json` but they are not trivial.

right, i hadn't gotten the PR out of a wip experiment stage yet

> How stable are those dependencies?

> What do your forks provide that the upstream does not?

[`vars`](https://github.com/Lassulus/vars) is a library that clan [distilled](https://clan.lol/blog/vars/) from their mono-repo after a period of internal use. they [intend to upstream, and are gathering feedback on that](https://github.com/NixOS/nixpkgs/pull/370444).

my fork was an attempt to [add templating](https://github.com/Lassulus/vars/compare/main...KiaraGrouwstra:vars:templates) to allow interpolating sensitive values in string templates, see https://github.com/Lassulus/vars/issues/1.

[`nix-templating`](https://github.com/Lassulus/nix-templating) is the follow-up effort between lassulus and myself to loosen templating coupling from vars, tho i [hadn't gotten it to work](https://github.com/Lassulus/nix-templating/issues/5) here so far - then forgot to clean it up while testing my fork of vars instead. i considered this an ugly compromise tho, and wanted to find a way to use the upstream versions of both.

for what it's worth, my fork of `nix-templating` consists of a simple [outstanding PR](https://github.com/Lassulus/nix-templating/pull/3) to facilitate use without using flakes.

on stability, i would consider that library an (unmaintained) proof-of-concept.

---

let me see if i can use your progress to deploy this now...

Owner

@kiara wrote in Fediversity/Fediversity#397 (comment):

> > My two last commits seem to fix it locally.
>
> thanks! how did you think of this package, despite it not getting mentioned? was this some reverse dependencies trick?

The download error was about `attr` and `acl`, I believe, but then there is a stream of “A dependency of `<package>` failed” errors, and they get higher and higher level. At the end of it was ShellCheck, which makes sense, because ShellCheck would be in the build closure of many things, but not in the runtime closure!

> > please stop packing together in a WIP PR changes that are unrelated to each other.
>
> apologies, agree, I should have cleaned better before requesting feedback.

It is probably my fault for looking at a WIP PR. Sorry about that.

> > If you believe this is better that way, just open a three lines PR that does this and it will get merged instantly
>
> I believed it might have been needed to test this, and had been in the process of validating that, tho facing errors I hadn't gotten far enough to find the answer.

Fair enough!

> > I see you add code to click on the `attic.enable` checkbox, but you haven't added the checkbox (or anything) to the panel if I'm not mistaken.
>
> is the GUI not just generated from `deployment/options.nix`?

Oh yes, I suppose it is. Absolute magic, I love it!

> > The Attic service got its own switch. I suppose we mean for it to be one of the Fediversity services that we provide? This is probably just another formulation of @fricklerhandwerk's question.
>
> i just cared about getting this usable to us, and being able to use the garage and testing code i considered to help get closer to that - tho for our purposes, technically it might serve us even if deployed 'as an operator' for now. i'd agree tho that isn't preferable, #370 experiments aside.

I think that's perfectly reasonable and deserves to be merged as-is. We can always clean those things up later on.

> > How stable are those dependencies?
>
> > What do your forks provide that the upstream does not?
>
> `vars` is a library that clan distilled from their mono-repo after a period of internal use. they intend to upstream, and are gathering feedback on that.
>
> my fork was an attempt to add templating to allow interpolating sensitive values in string templates, see https://github.com/Lassulus/vars/issues/1.
>
> `nix-templating` is the follow-up effort between lassulus and myself to loosen templating coupling from vars, tho i hadn't gotten it to work here so far - then forgot to clean it up while testing my fork of vars instead. i considered this an ugly compromise tho, and wanted to find a way to use the upstream versions of both.
>
> for what it's worth, my fork of `nix-templating` consists of a simple outstanding PR to facilitate use without using flakes.
>
> on stability, i would consider that library an (unmaintained) proof-of-concept.

Sounds good, thanks for giving details. So we are hoping that the `nix-templating` PR gets merged and that we can go back to following upstream? Would you also qualify upstream as an (unmaintained) proof-of-concept? Either way, for our current work, that sounds reasonable!

In general, the current state of the PR looks good. What is there that you want to do before merging this?

Author
Owner

> In general, the current state of the PR looks good. What is there that you want to do before merging this?

actually try it out 😅, on the road to that i'd now bumped into #431 so far.

> So we are hoping that the `nix-templating` PR gets merged and that we can go back to following upstream?

ideally yes

> Would you also qualify upstream as an (unmaintained) proof-of-concept? Either way, for our current work, that sounds reasonable!

honestly tho, i've seemed no less active on that repo myself, with so far no further contributors.
as such, a third option could be to just pull the repo into our org, particularly given the code is limited enough such that expected maintenance seems within reason.

> It is probably my fault for looking at a WIP PR

nah, i'd been asking for your advice here on multiple occasions already 😅

Author
Owner

deployment seems working.

server side:

1. rebase on #432 + #434
2. enable `attic` in `deployment/configuration.sample.json`
3. `nixops4 apply test`
4. `ssh test12.abundos.eu`:

   ```bash
   sudo generate-vars
   config_path=$(systemctl show atticd | grep ExecStartEx | grep -o '/nix/store/[0-9a-z\-]*-checked-attic-server.toml')
   (set -a; source /etc/vars/secret/templates/attic.env && nix-shell -p attic-server --run "atticadm -f $config_path make-token --sub 'ci' --validity '2y' --pull 'demo' --push 'demo' --create-cache 'demo'")
   ```

client side:

```bash
nix-shell -p attic-client
attic login fediversity https://attic.fediversity.net <token>
# cat ~/.config/attic/config.toml
attic cache create demo
attic use demo
# use `attic push` in CI
```

(note i get complaints about not being a trusted user unless running these as root, tho otherwise this works.)
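On the trusted-user complaints: if I understand the client's behaviour correctly, `attic use` wants to register the cache as a substituter, which the Nix daemon only accepts from trusted users. A sketch of the NixOS-side workaround (`youruser` is a placeholder; adjust to the actual deploying user):

```nix
# assumption: let the deploying user pass substituter settings to the
# Nix daemon, so `attic use` works without root
nix.settings.trusted-users = [
  "root"
  "youruser"
];
```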

Author
Owner

progress as per CI: `Error: No servers are available` - maybe that also needs to use the `login` / `use` commands still (or directly configure the `~/.config/attic/config.toml` that the former sets up).
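For CI, rather than running `attic login` interactively, the client configuration it writes could be provisioned directly. A sketch of `~/.config/attic/config.toml` as I understand the client's format (server name, endpoint and token are placeholders to be filled from CI secrets):

```toml
# assumed shape of the file `attic login` generates
default-server = "fediversity"

[servers.fediversity]
endpoint = "https://attic.fediversity.net/"
token = "<token>"
```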

error:
       … while calling the 'import' builtin
         at «string»:1:2:
            1| (import <nixpkgs> {}).bashInteractive
             |  ^
       … while realising the context of a path
       … while calling the 'findFile' builtin
         at «string»:1:9:
            1| (import <nixpkgs> {}).bashInteractive
             |         ^
       error: file 'nixpkgs' was not found in the Nix search path (add it using $NIX_PATH or -I)
will use bash from your environment
warning: error: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7); retrying in 280 ms
warning: error: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7); retrying in 594 ms
warning: error: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7); retrying in 1047 ms
warning: error: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7); retrying in 2585 ms
warning: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7)
this derivation will be built:
  /nix/store/mvmsvv7vykpbb8hk0ibfk6p945xv33pd-test-loop.drv
building '/nix/store/mvmsvv7vykpbb8hk0ibfk6p945xv33pd-test-loop.drv'...
git-hooks.nix: updating /var/lib/private/gitea-runner/default/.cache/act/188924eb19292a38/hostexecutor repo
/var/lib/private/gitea-runner/default/.cache/act/188924eb19292a38/hostexecutor/.pre-commit-config.yaml
pre-commit installed at .git/hooks/pre-commit
pre-commit installed at .git/hooks/pre-push
evaluation warning: The user 'root' has multiple of the options
                    `initialHashedPassword`, `hashedPassword`, `initialPassword`, `password`
                    & `hashedPasswordFile` set to a non-null value.
                    If multiple of these password options are set at the same time then a
                    specific order of precedence is followed, which can lead to surprising
                    results. The order of precedence differs depending on whether the
                    {option}`users.mutableUsers` option is set.
                    If the option {option}`users.mutableUsers` is
                    `false`, then the order of precedence is as shown
                    below, where values on the left are overridden by values on the right:
                    {option}`initialHashedPassword` -> {option}`hashedPassword` -> {option}`initialPassword` -> {option}`password` -> {option}`hashedPasswordFile`
                    The values of these options are:
                    * users.users."root".hashedPassword: ""
                    * users.users."root".hashedPasswordFile: "/nix/store/054qjs91d842yjp0b66bwxq94a9jc0i4-hashed-password.root"
                    * users.users."root".password: null
Error: No servers are available.

fyi attic can be unlisted by clearing the substituters / trusted-public-keys / netrc-file entries it adds to ~/.config/nix/nix.conf, as well as cleaning up its added ~/.config/nix/netrc.

<details> <summary> progress as per CI: [`Error: No servers are available`](https://github.com/zhaofengli/attic/blob/e11f630ad9379cc1e74401377a6a03a513451fce/client/src/config.rs#L175) - maybe that also needs to use the `login` / `use` commands still (or directly configure the `~/.config/attic/config.toml` that the former sets up). </summary> ``` error: … while calling the 'import' builtin at «string»:1:2: 1| (import <nixpkgs> {}).bashInteractive | ^ … while realising the context of a path … while calling the 'findFile' builtin at «string»:1:9: 1| (import <nixpkgs> {}).bashInteractive | ^ error: file 'nixpkgs' was not found in the Nix search path (add it using $NIX_PATH or -I) will use bash from your environment warning: error: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7); retrying in 280 ms warning: error: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7); retrying in 594 ms warning: error: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7); retrying in 1047 ms warning: error: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7); retrying in 2585 ms warning: unable to download 'https://attic.fediversity.net/demo/nix-cache-info': Could not connect to server (7) this derivation will be built: /nix/store/mvmsvv7vykpbb8hk0ibfk6p945xv33pd-test-loop.drv building '/nix/store/mvmsvv7vykpbb8hk0ibfk6p945xv33pd-test-loop.drv'... 
git-hooks.nix: updating /var/lib/private/gitea-runner/default/.cache/act/188924eb19292a38/hostexecutor repo /var/lib/private/gitea-runner/default/.cache/act/188924eb19292a38/hostexecutor/.pre-commit-config.yaml pre-commit installed at .git/hooks/pre-commit pre-commit installed at .git/hooks/pre-push evaluation warning: The user 'root' has multiple of the options `initialHashedPassword`, `hashedPassword`, `initialPassword`, `password` & `hashedPasswordFile` set to a non-null value. If multiple of these password options are set at the same time then a specific order of precedence is followed, which can lead to surprising results. The order of precedence differs depending on whether the {option}`users.mutableUsers` option is set. If the option {option}`users.mutableUsers` is `false`, then the order of precedence is as shown below, where values on the left are overridden by values on the right: {option}`initialHashedPassword` -> {option}`hashedPassword` -> {option}`initialPassword` -> {option}`password` -> {option}`hashedPasswordFile` The values of these options are: * users.users."root".hashedPassword: "" * users.users."root".hashedPasswordFile: "/nix/store/054qjs91d842yjp0b66bwxq94a9jc0i4-hashed-password.root" * users.users."root".password: null Error: No servers are available. ``` </details> fyi attic can be unlisted by clearing the `substituters` / `trusted-public-keys` / `netrc-file` entries it adds to `~/.config/nix/nix.conf`, as well as cleaning up its added `~/.config/nix/netrc`.
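that cleanup can be scripted; a rough sketch run against a throwaway copy rather than the real `~/.config/nix/nix.conf` (the file contents, cache name `demo`, and key below are illustrative stand-ins for what `attic use` writes):

```shell
# Work on a throwaway copy; these entries mimic what `attic use demo` adds.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
substituters = https://cache.nixos.org/ https://attic.fediversity.net/demo
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= demo:MADEUPKEY=
netrc-file = /home/user/.config/nix/netrc
EOF
# Drop attic's substituter and public key from the lists,
# and remove the netrc-file line entirely.
sed -i \
  -e 's| https://attic.fediversity.net/demo||' \
  -e 's| demo:MADEUPKEY=||' \
  -e '/^netrc-file/d' \
  "$conf"
cat "$conf"
rm -f "$conf"
```

after this, `nix.conf` is back to the stock cache-only settings; `rm ~/.config/nix/netrc` (not shown against the real file here) completes the cleanup.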
Niols was unassigned by kiara 2025-08-07 19:45:23 +02:00
automatically generate secrets
57eaae6bed
use branches that include license
ac842d320a
Author
Owner

despite using [`trusted-users`](https://search.nixos.org/options?channel=unstable&show=nix.settings.trusted-users&query=trusted-users) on my local setup, attic's generated user config does still give me warnings on nix builds:

<details>
<summary>

warning: Ignoring the client-specified setting '...', because it is a restricted setting and you are not a trusted user

</summary>

```
warning: Ignoring the client-specified setting 'netrc-file', because it is a restricted setting and you are not a trusted user
warning: Ignoring the client-specified setting 'trusted-public-keys', because it is a restricted setting and you are not a trusted user
```

</details>

edit: i think i solved that by ditching some local nix cache (?)
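for reference, the NixOS-side knob mentioned above is a one-line setting; a minimal sketch, assuming a NixOS configuration module (the username is a placeholder):

```nix
{
  # allow this user to supply extra substituters / keys / netrc-file
  # without the "restricted setting" warnings above
  nix.settings.trusted-users = [ "root" "myuser" ];
}
```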
make substituters trusted
0e90e9ce59
Signed-off-by: Kiara Grouwstra <kiara@procolix.eu>
Author
Owner

current CI failure is because the cache isn't currently deployed

This pull request has changes conflicting with the target branch.
  • .forgejo/workflows/ci.yaml
  • default.nix
  • infra/common/nixos/default.nix
  • npins/sources.json
Reference: fediversity/fediversity#397