meta/meeting-notes/2024-11-20-architecture-meetup/2024-11-20-architecture-meetup.md

BjornW 5b8f906bfb Notes from the architecture meetup 2024-11-20 (2024-11-20 16:57:37 +01:00)

TODO: find a better place to store large files such as these recordings, as a git repo is not really the best place for them.

# Architecture meetup 2024-11-20

13:30 - 15:00
## Agenda

Times are approximate; less is better ;)

* 13:30 - 13:35 Goals of the meeting (~5 min)
* 13:35 - 14:15 Diagram discussion (~40 min)
* 14:15 - 14:35 Status of the project (~20 min)
* 14:35 - 14:55 Roles & responsibilities (~25 min)
* 14:55 - 15:00 FOSDEM (~5 min)
Attendees: Bjorn, Ronny, Richard, Kevin, Nicolas, Gheorghe, Valentin

## Goals

* Clarify uncertainties and "freeze" the architecture

## Notes
* Robert and Koen are missing today; both are critical to this discussion
* Management layer:
  * IdentityManagement should be clarified as Authentication, Authorization, Accounting (AAA)
  * The Nix-Panel is to be used by the operator (our "customer") with high-level parameters:
    * operator's email
    * DNS zone
    * booked storage and compute
    * etc.
  * TBD how much of this will be greenfield; see the architecture discussion from a few weeks ago
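As a side note, the high-level operator parameters listed above could be captured in a simple record. This is an illustrative sketch only; the type name `OperatorConfig` and its fields are assumptions, since the actual Nix-Panel data model is still TBD:

```python
from dataclasses import dataclass


@dataclass
class OperatorConfig:
    """Hypothetical shape of the high-level parameters an operator
    enters in the Nix-Panel; the real schema is not yet defined."""
    email: str          # operator's contact email
    dns_zone: str       # e.g. "example.org"
    storage_gb: int     # booked storage
    compute_vcpus: int  # booked compute


cfg = OperatorConfig(
    email="ops@example.org",
    dns_zone="example.org",
    storage_gb=500,
    compute_vcpus=8,
)
```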
* About the central database (possibly multiple):
  * Q: What is the data model? TBD in more detail. It covers at least:
    * State of the system per operator
    * AAA data
    * Operator-specific info
    * Provider-specific info
    * Upstream DNS config
  * Netbox: provider side (physical network layout & hardware)
* Q: What's the role of NixOps4 here?
  * Valentin: NixOps4 merely provides a mechanism. The policy is implemented by "resource providers", which are domain-specific and plugged into NixOps4 to CRUD the various data sources.
* TODO: There are no "use cases" yet to describe how the services work, e.g. setting up a service like Pixelfed. Basically, a case to describe how the components work together.
  * (Some discussion on the various representations of the system: component dependency graph, data-flow graph for how deployments come together, user stories for the various actors.)
  * Valentin proposes to focus on the component dependencies for now, as the current diagram already mostly represents those
    * User stories can be sketched on the side
* "Nix-configuration" and Proxmox are merely resource providers for NixOps4
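The mechanism/policy split described above could be sketched roughly as follows. This is not the actual NixOps4 API; the `ResourceProvider` interface, its CRUD method names, and the toy DNS provider are assumptions for illustration only:

```python
from abc import ABC, abstractmethod


class ResourceProvider(ABC):
    """Hypothetical provider interface: NixOps4 supplies the mechanism
    (invoking these hooks), while each domain-specific provider
    implements the policy (CRUD against its own data source)."""

    @abstractmethod
    def create(self, spec: dict) -> dict: ...

    @abstractmethod
    def read(self, resource_id: str) -> dict: ...

    @abstractmethod
    def update(self, resource_id: str, spec: dict) -> dict: ...

    @abstractmethod
    def delete(self, resource_id: str) -> None: ...


class InMemoryDnsProvider(ResourceProvider):
    """Toy stand-in for a DNS resource provider (e.g. one backed by deSEC)."""

    def __init__(self):
        self._records: dict[str, dict] = {}

    def create(self, spec):
        self._records[spec["name"]] = spec
        return spec

    def read(self, resource_id):
        return self._records[resource_id]

    def update(self, resource_id, spec):
        self._records[resource_id] = spec
        return spec

    def delete(self, resource_id):
        del self._records[resource_id]


dns = InMemoryDnsProvider()
dns.create({"name": "app.example.org", "type": "A", "value": "203.0.113.7"})
```

In this picture, "Nix-configuration", Proxmox, secrets stores, and DNS would each be one such provider plugged into the same generic driver.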
* TODO: Glossary for definitions (make sure we all speak the same language).
* Gheorghe proposes to annotate each box with the component type (e.g. "virtualisation provider") and [at least one, if there are multiple planned] concrete implementation (e.g. "proxmox")
* Ronny: there may be services that happen not to run under NixOS but under some other Linux distro
  * We would need another configuration system for those, e.g. Ansible
    * This would be another resource provider for NixOps4
  * We should declare this out of scope, since NixOS is the more natural thing to do, or hack it together with shell scripts if absolutely required
  * We currently don't have services that aren't - and definitely none that can't be - nixified
  * Also, centrally managed systems, such as provider-side DNS management, can be handled by the provider classically, e.g. on Debian
    * Our particular target, deSEC, would be expensive to package for NixOS, but we need it exactly once per provider and it won't be redeployed
* NixOS services:
  * The only difference between "Services" and "FediServices" is that "FediServices" have federation
    * More nuanced: Fediverse services are intended to be used by the general public, while the others are more interesting for academic institutions
    * This is a distinction per work package
  * Technically, all of them are NixOS modules with configurations that are somewhat specific to our architecture
  * Unspecified requirement: backups for all of these
  * From previous discussions, storage is one of:
    * Service data: block storage (ZFS); snapshots are pulled on the provider side
      * Essentially anything that can't be stored as blob storage
    * User data: blob storage (e.g. Garage), which could optionally be replicated to compatible services
      * Q: Is this already available in Garage or would we need to build it, and is it relevant to our mission?
  * Storage needs to be rewired for each service so the data actually lands where it should
  * Need to decide whether it's worth the time investment on a case-by-case basis
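The service-data/user-data split above amounts to a routing rule per data kind. A minimal sketch, assuming the two backends from the discussion (ZFS block storage with provider-side snapshots, and an S3-compatible blob store such as Garage); the function and backend names are hypothetical:

```python
def storage_backend(data_kind: str) -> str:
    """Hypothetical routing rule from the storage discussion:
    service state -> block storage, user uploads -> blob storage."""
    if data_kind == "service":
        # Databases, config - anything that can't live as blobs;
        # backed by ZFS, with snapshots pulled on the provider side.
        return "zfs-block"
    if data_kind == "user":
        # Media uploads, attachments - S3-compatible blob storage
        # (e.g. Garage), optionally replicated to compatible services.
        return "garage-blob"
    raise ValueError(f"unknown data kind: {data_kind}")
```

The "rewiring" mentioned above would then be per-service configuration that points each service's data directories and upload targets at the right backend.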
* "Core services":
  * Secrets management is more of a concern for deployments (as a NixOps4 resource provider)
    * Secrets are a resource & need a resource provider plugin
    * TBD: which resource providers NixOps4 needs to talk to, and in what context
  * The purpose of this block is to signify that some services are mandatory for an operator-side deployment, but they exist already and only need to be interfaced with via NixOps4
    * For example, there is a DNS management system and an email server, and a deployment merely needs to register with them
  * TODO: label this in the architecture diagram
  * TODO: move it into the "Management" block; maybe rename it to "existing" or "pre-defined" services
  * TODO: clarify the mapping between architectural components and use-case actors; refine naming
Due to other meetings we had to stop here. We still have to discuss these topics from the agenda:

* 14:15 - 14:35 Status of the project (~20 min)
* 14:35 - 14:55 Roles & responsibilities (~25 min)
* 14:55 - 15:00 FOSDEM (~5 min)