Migration data model requirements
(updated) deployment, including variables; backup creation/restore
Assumptions:
- Our deployment fully controls all versions, which sidesteps concerns about version mismatches.
- For release version 0, focus on known current needs.
- To be expanded later as each new application is added and can be transferred between providers.
- Review migration guides for the known apps with an eye to odd/unusual details that influence design choices (task for Niols? others?).
Specifically, this suggests scoping to migrating:
- managed infrastructure (rather than managed applications)
- between servers initially owned by Procolix
- the same Proxmox version
- NixOS VMs set up by us so we can guarantee identical application versions
- hosting limited to a single application (to start)
- retaining the same domain name
- migrating the applications only, rather than, say, also transferring control of domains
First, a bit of an inventory (an unstructured list for now; later we will create a structured form/schema with e.g. many-to-many links, useful for the migration code):
- clearly mark items that will not be in the first migration as planned for later or speculative
- or remove them if they would be too far in the future
- later, once we understand what is useful for the migration code, we can extract and transform it into a format suitable as data-model documentation
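To make the "structured form/schema" idea concrete, here is a minimal sketch using Python dataclasses. All names and fields are hypothetical placeholders for discussion, not a committed data model; the many-to-many links mentioned above would replace the plain lists in a real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    # e.g. "pixelfed"; the NixOS config pinning the exact version lives elsewhere
    name: str
    version: str

@dataclass
class Domain:
    fqdn: str

@dataclass
class Deployment:
    # initial scope: one VM hosting a single application
    operator: str                      # e.g. "FooUniversity"
    provider: str                      # e.g. "Procolix"
    application: Application
    domains: list[Domain] = field(default_factory=list)

pixelfed = Application(name="pixelfed", version="0.12.3")
dep = Deployment(
    operator="FooUniversity",
    provider="Procolix",
    application=pixelfed,
    domains=[Domain(fqdn="social.foo.example")],
)
print(dep.application.name)  # -> pixelfed
```

A flat structure like this would also be easy to serialize into whatever format the migration code and the data-model documentation end up needing.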
Hosting Provider provides:
- Proxmox, Git
- hardware
- filesystem storage
- DNS automation hooks (RFC 2136, optionally authenticated by TSIG (RFC 2845) or GSS-TSIG (RFC 3645))
- central/shared Garage storage, or only hardware + disk space for the Garage VMs to create storage?
- with central: more efficient but less isolated
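The RFC 2136 automation hook above can be driven by generating an update script and piping it into `nsupdate -k <tsig-keyfile>` (TSIG authentication per RFC 2845 happens via the key file). A sketch with placeholder server, zone, and record values:

```python
# sketch: generate an RFC 2136 update script for nsupdate(1);
# server, zone, and record values are hypothetical placeholders
def dns_update_script(server: str, zone: str, name: str, ip: str, ttl: int = 300) -> str:
    fqdn = f"{name}.{zone}"
    return "\n".join([
        f"server {server}",
        f"zone {zone}",
        f"update delete {fqdn} A",       # drop the old record, if any
        f"update add {fqdn} {ttl} A {ip}",
        "send",
    ]) + "\n"

script = dns_update_script("ns1.foo.example", "foo.example.", "social", "203.0.113.10")
print(script)  # pipe into: nsupdate -k /path/to/tsig.key
```

During a migration, a script like this is what would repoint the retained domain name at the new provider's VM.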
FooUniversity (Operator)
- domain(s)
  - may need to rewrite URLs to blobs automatically, depending on the underlying URL scheme, which may be per setup or per application
  - limits? per application? per user? where are these used/set/enforced?
  - TODO: what does e.g. borgmatic need to back up?
  - complications: if details such as connections change, those may need adjusting, implying application-specific reconfiguration
    - potentially propagated through by e.g. TF?
    - out of scope?: focus on actual state, disregarding reconstructable stuff
- Pixelfed
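In the simplest case, the URL-rewriting concern above amounts to a plain endpoint swap in stored content; real schemes may be per setup or per application, so this is only a sketch with hypothetical endpoint names:

```python
import re

# hypothetical blob endpoints; the real URL scheme depends on the setup/application
OLD_BLOB_BASE = "https://blobs.old-provider.example"
NEW_BLOB_BASE = "https://blobs.new-provider.example"

def rewrite_blob_urls(text: str) -> str:
    """Replace references to the old blob endpoint with the new one."""
    return re.sub(re.escape(OLD_BLOB_BASE), NEW_BLOB_BASE, text)

post = f'<img src="{OLD_BLOB_BASE}/media/abc123.jpg">'
print(rewrite_blob_urls(post))
```

Anything fancier (signed URLs, per-application path layouts) would need application-specific handling, which is exactly why the scheme question is flagged above.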
When transforming the data-model code into a deliverable version of the data model as part of the technical architecture document, document user-data storage with respect to security and the GDPR.
MVP scoping ideas
User story 1: New customer
When a new customer visits the Fediversity website, we want to show them what Fediversity is all about and what it can offer them. The site points the customer to a signup form where they enter all the details needed to get things working. Here they can also decide which applications to use (at first no more than three). Details include the admin login, domain, and applications. When the customer confirms, everything installs automagically, after which the customer is presented with (some) URLs to log in to.
User story 2: Take-out / move to another instance
At any time a customer may wish to change service providers. They can easily go to an admin screen where they can get their configuration and data packaged for transfer. This package can be handed to a new service provider, where they will be up and running again easily, with minimal downtime.
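The take-out package in user story 2 could be as simple as an archive carrying the data plus a machine-readable manifest the receiving provider can act on. A stdlib-only sketch with hypothetical manifest fields:

```python
import io
import json
import tarfile

# hypothetical manifest describing what the receiving provider needs to import
manifest = {
    "schema_version": 0,
    "operator": "FooUniversity",
    "applications": [{"name": "pixelfed", "version": "0.12.3"}],
    "domains": ["social.foo.example"],
    "data": ["database.sql", "blobs/"],  # paths inside the archive
}

# build the take-out archive in memory; a real package would add the data files too
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    payload = json.dumps(manifest, indent=2).encode()
    info = tarfile.TarInfo(name="manifest.json")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

print(f"take-out archive: {buf.tell()} bytes")
```

Pinning application versions in the manifest matters because the scoping above assumes identical application versions on both sides of the transfer.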
Proposed MVP scope:
- block storage
- blob storage (Garage)
- physical servers
- Proxmox VM management
- 1 to 3 applications packaged in Nix (Mastodon, PeerTube, Pixelfed)
- frontend / website
- working DNS; can be external, but must be automated
- takeout area
- import area
- 2 Fediversity environments to transfer between
- demonstration of User story 1
- demonstration of User story 2