Merge branch 'main' of git.fediversity.eu:hans/Fediversity

This commit is contained in:
Hans van Zijst 2025-01-15 10:22:13 +01:00
commit 1652183975
17 changed files with 709 additions and 213 deletions


@ -5,10 +5,13 @@ include_toc: true
# A complete Matrix installation
This documentation describes how to build a complete Matrix environment with
all bells and whistles. Not just the Synapse server, but (almost) every bit
you want.
This documentation isn't ready yet, and if you find errors or room for improvement,
please let me know. You can do that via Matrix, obviously (`@hans:woefdram.nl`), via
@ -29,8 +32,8 @@ conferencing
* [Consent
tracking](https://element-hq.github.io/synapse/latest/consent_tracking.html)
* Authentication via
[OpenID](https://element-hq.github.io/synapse/latest/openid.html) (later)
* Several [bridges](https://matrix.org/ecosystem/bridges/) (later)
# Overview
@ -40,15 +43,30 @@ platform, with all bells and whistles. Several components are involved and
finishing the installation of one can be necessary for the installation of the
next.
Before you start, make sure you take a look at the [checklist](checklist.md).
These are the components we're going to use:
## Synapse
This is the core component: the Matrix server itself. You should probably
install this first.
Because not every use case is the same, we'll describe two different
architectures:
* [Monolithic](synapse)
This is the default way of installing Synapse. It is suitable for scenarios
with not too many users, where, importantly, users do not join many very crowded
rooms.
* [Worker-based](synapse/workers)
For servers that get a bigger load, for example those that host users who use
many big rooms, we'll describe how to process that higher load by distributing
it over workers.
## PostgreSQL
@ -57,6 +75,10 @@ This is the database Synapse uses. This should be the first thing you install
after Synapse, and once you're done, reconfigure the default Synapse install
to use PostgreSQL.
If you have already added data to the SQLite database that Synapse installs
by default, and you don't want to lose it: [here's how to migrate from SQLite to
PostgreSQL](https://element-hq.github.io/synapse/latest/postgres.html#porting-from-sqlite).
## nginx
@ -78,7 +100,7 @@ how to [setup and configure it](element-call).
# Element Web
This is the fully-fledged web client, which is very [easy to set
up](element-web).
# TURN
@ -87,8 +109,8 @@ We may need a TURN server, and we'll use
[coturn](coturn) for that.
It's apparently also possible to use the built-in TURN server in Livekit,
which we'll use if we use [Element Call](element-call). It's either/or, so make
sure you pick the right approach.
You could possibly use both coturn and LiveKit, if you insist on being able to
use both legacy and Element Call functionality. This is not documented here
@ -99,3 +121,4 @@ yet.
With Draupnir you can do moderation. It requires a few changes to both Synapse
and nginx, here's how to [install and configure Draupnir](draupnir).

matrix/checklist.md Normal file

@ -0,0 +1,97 @@
# Checklist
Before you dive in and start installing, you should do a little planning
ahead. Ask yourself what you expect from your server.
Is it a small server, just for yourself and some friends and family, or for
your hundreds of colleagues at work? Is it for private use, or do you need
decent moderation tools? Do you need audio and videoconferencing or not?
# Requirements
It's difficult to specify hardware requirements upfront, because they don't
really depend on the number of users you have, but on their behaviour. A
server with users who don't engage in busy rooms like
[#matrix:matrix.org](https://matrix.to/#/#matrix:matrix.org) doesn't need more
than 2 CPU cores, 8GB of RAM and 50GB of diskspace.
A server with users who do join very busy rooms can easily eat 4 cores and
16GB of RAM. Or more. Or even much more. If you have a public server, where
unknown people can register new accounts, you'll probably need a bit more
oomph (and [moderation](draupnir)).
During its life, the server may need more resources, if users change
their behaviour. Or less. There's no one-size-fits-all approach.
If you have no idea, you should probably start with 2 cores, 8GB RAM and some
50GB diskspace, and follow the [monolithic setup](synapse).
If you expect a higher load (you might get there sooner than you think), you
should probably follow the [worker-based setup](synapse/workers), because
changing the architecture from monolithic to worker-based once the server is
already in use is a tricky task.
Here's a ballpark figure. Remember, your mileage will probably vary. And
remember, just adding RAM and CPU doesn't automatically scale: you'll need to
tune [PostgreSQL](postgresql/README.md#tuning) and your workers as well so
that your hardware is optimally used.
| Scenario | Architecture | CPU | RAM | Diskspace (GB) |
| :------------------------------------ | :-----------------------------: | :----: | :----: | :------------: |
| Personal, not many very busy rooms | [monolithic](synapse) | 2 | 8GB | 50 |
| Private, users join very busy rooms | [worker-based](synapse/workers) | 4 | 16GB | 100 |
| Public, many users in very busy rooms | [worker-based](synapse/workers) | 8 | 32GB | 250 |
# DNS and certificates
You'll need to configure several things in DNS, and you're going to need a
couple of TLS-certificates. Best to configure those DNS entries first, so that
you can quickly generate the certificates once you're there.
It's usually a good idea to keep the TTL of all these records very low while
installing and configuring, so that you can quickly change records without
having to wait for the TTL to expire. Setting a TTL of 300 (5 minutes) should
be fine. Once everything is in place and working, you should probably increase
it to a more production ready value, like 3600 (1 hour) or more.
What do you need? Well, first of all you need a domain. In this documentation
we'll use `example.com`, you'll need to substitute that with your own domain.
Under the top of that domain, you'll need to host 2 files under
`/.well-known`, so you'll need a webserver there, using a valid
TLS-certificate. This doesn't have to be the same machine as the one you're
installing Synapse on. In fact, it usually isn't.
Assuming you're hosting Matrix on the machine `matrix.example.com`, you need
at least an `A` record in DNS, and -if you have IPv6 support, which you
should- an `AAAA` record too. **YOU CAN NOT USE A CNAME FOR THIS RECORD!**
You'll need a valid TLS-certificate for `matrix.example.com` too.
You'll probably want the webclient too, so that users aren't forced to use an
app on their phone or install the desktop client on their PC. You should never
run the web client on the same name as the server: that opens you up to all
kinds of Cross-Site-Scripting attacks. We'll assume you use
`element.example.com` for the web client. You need a DNS entry for that. This
can be a CNAME, but make sure you have a TLS-certificate with the correct name
on it.
If you install a [TURN-server](coturn), either for legacy calls or for [Element
Call](element-call) (or both), you need a DNS entry for that too, and -again- a
TLS-certificate. We'll use `turn.example.com` for this.
If you install Element Call (and why shouldn't you?), you need a DNS entry plus
certificate for that too; let's assume you use `call.example.com`. This
can be a CNAME again. Element Call uses [LiveKit](element-call#livekit) for the
actual processing of audio and video, and that needs its own DNS entry and certificate
too. We'll use `livekit.example.com`.
| FQDN | Use | Comment |
| :-------------------- | :--------------------- | :--------------------------------------- |
| `example.com` | Hosting `.well-known` | This is the `server_name` |
| `matrix.example.com` | Synapse server | This is the `base_url`, can't be `CNAME` |
| `element.example.com` | Webclient | |
| `turn.example.com` | TURN / Element Call | Highly recommended |
| `call.example.com` | Element Call | Optional |
| `livekit.example.com` | LiveKit SFU | Optional, needed for Element Call |
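To make that a bit more concrete, here's a sketch of what the corresponding zone
file entries could look like. The addresses are placeholders, and whether you use
`A`/`AAAA` or `CNAME` for the optional names is up to you:
```
; Example records for example.com, with a low TTL during installation.
; matrix.example.com must be an A/AAAA record, not a CNAME.
matrix.example.com.    300  IN  A      111.222.111.222
matrix.example.com.    300  IN  AAAA   2001:db8::1
element.example.com.   300  IN  CNAME  matrix.example.com.
turn.example.com.      300  IN  A      111.222.111.222
call.example.com.      300  IN  CNAME  matrix.example.com.
livekit.example.com.   300  IN  CNAME  matrix.example.com.
```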


@ -5,16 +5,22 @@ include_toc: true
# TURN server
You need a TURN server to connect participants that are behind a NAT firewall.
Because IPv6 doesn't really need TURN, and Chrome can get confused if it has
to use TURN over IPv6, we'll stick to a strict IPv4-only configuration.
Also, because VoIP traffic is only UDP, we won't do TCP.
TURN-functionality can be offered by coturn and LiveKit alike: coturn is used
for legacy calls (only one-on-one, supported in Element Android), whereas
Element Call (supported by ElementX, Desktop and Web) uses LiveKit.
In our documentation we'll enable both, which is probably not the optimal
solution, but at least it results in a system that supports old and new
clients.
Here we'll describe coturn, the dedicated ICE/STUN/TURN server that needs to
be configured in Synapse; [LiveKit](../element-call#livekit) has its own page.
# Installation
@ -72,24 +78,24 @@ certbot certonly --nginx -d turn.example.com
This assumes you've already setup and started nginx (see [nginx](../nginx)).
{#fixssl}
The certificate files reside under `/etc/letsencrypt/live`, but coturn and
LiveKit don't run as root, and can't read them. Therefore we create the directory
`/etc/coturn/ssl` where we copy the files to. This script should be run after
each certificate renewal:
```
#!/bin/bash
# This script is hooked after a renewal of the certificate, so that the
# certificate files are copied and chowned, and made readable by coturn:
cd /etc/coturn/ssl
cp /etc/letsencrypt/live/turn.example.com/{fullchain,privkey}.pem .
chown turnserver:turnserver *.pem
# Make sure you only start/restart the servers that you need!
systemctl try-reload-or-restart coturn livekit-server
```
@ -101,7 +107,8 @@ renew_hook = /etc/coturn/fixssl
```
Yes, it's a bit primitive and could (should?) be polished. But for now: it
works. This will copy and chown the certificate files and restart coturn
and/or LiveKit, depending on whether they're running or not.
# Configuration {#configuration}
@ -120,9 +127,13 @@ Now that we have this, we can configure our configuration file under
`/etc/coturn/turnserver.conf`.
```
# We don't use the default ports, because LiveKit uses those
listening-port=3480
tls-listening-port=5351
# We don't need more than 10000 connections:
min-port=40000
max-port=49999
use-auth-secret
static-auth-secret=<previously created secret>
@ -132,7 +143,7 @@ user-quota=12
total-quota=1200
# Of course: substitute correct IPv4 address:
listening-ip=111.222.111.222
# VoIP traffic is only UDP
no-tcp-relay


@ -3,11 +3,17 @@
# Only IPv4, IPv6 can confuse some software
listening-ip=111.222.111.222
# Listening port for TURN (UDP and TCP):
listening-port=3480
# Listening port for TURN TLS (UDP and TCP):
tls-listening-port=5351
# Lower and upper bounds of the UDP relay endpoints:
# (default values are 49152 and 65535)
#
min-port=40000
max-port=49999
use-auth-secret
static-auth-secret=<very secure password>


@ -3,153 +3,38 @@ gitea: none
include_toc: true
---
# Element Call
# Overview
Element Call enables users to have audio and video calls with groups, while
maintaining full E2E encryption.
Element Call consists of a few parts; you don't have to host all of them
yourself. In this document, we're going to host everything ourselves, so
here's what you need:
* **lk-jwt**. This authenticates Synapse users to LiveKit.
* **LiveKit**. This is the "SFU", which actually handles the audio and video, and does TURN.
* **Element Call widget**. This is basically the web application, the user interface.
As mentioned in the [checklist](../checklist.md), you need to define these
three entries in DNS and get certificates for them:
* `turn.example.com`
* `livekit.example.com`
* `call.example.com`
You may already have DNS and TLS for `turn.example.com`, as it is also used
for [coturn](../coturn).
For more inspiration, check https://sspaeth.de/2024/11/sfu/
# LiveKit {#livekit}
The actual SFU, Selective Forwarding Unit, is LiveKit; this is the part that
handles the audio and video feeds and also does TURN (this TURN-functionality
does not support the legacy calls, you'll need [coturn](coturn) for that).
Downloading and installing is easy: download the [binary from
Github](https://github.com/livekit/livekit/releases/download/v1.8.0/livekit_1.8.0_linux_amd64.tar.gz)
to /usr/local/bin, chown it to root:root and you're done.
The quickest way to do precisely that is to run the script:
@ -159,17 +44,42 @@ curl -sSL https://get.livekit.io | bash
You can do this as a normal user, it will use sudo to do its job.
While you're at it, you might consider installing the cli tool as well, you
can use it -for example- to generate tokens so you can [test LiveKit's
connectivity](https://livekit.io/connection-test):
```
curl -sSL https://get.livekit.io/cli | bash
```
Configuring LiveKit is [documented
here](https://docs.livekit.io/home/self-hosting/deployment/). We're going to
run LiveKit under authorization of user `turnserver`, the same user we use
for [coturn](coturn). This user is created when installing coturn, so if you
haven't installed that, you should create the user yourself:
```
adduser --system turnserver
```
## Configure {#keysecret}
Start by creating a key and secret:
```
livekit-server generate-keys
```
This key and secret have to be fed to lk-jwt-service too, [see here](#jwtconfig).
Create the directory for LiveKit's configuration:
```
mkdir /etc/livekit
chown root:turnserver /etc/livekit
chmod 750 /etc/livekit
```
Create a configuration file for livekit, `/etc/livekit/livekit.yaml`:
```
port: 7880
@ -190,24 +100,53 @@ turn:
udp_port: 3478
external_tls: true
keys:
  # KEY: SECRET were generated by "livekit-server generate-keys"
  <KEY>: <SECRET>
```
Being a bit paranoid: make sure LiveKit can only read this file, not write it:
```
chown root:turnserver /etc/livekit/livekit.yaml
chmod 640 /etc/livekit/livekit.yaml
```
Port `7880` is forwarded by nginx: authentication is also done there, and that
bit has to be forwarded to `lk-jwt-service` on port `8080`. Therefore, we
listen only on localhost.
The TURN ports are the normal, default ones. If you also use coturn, make sure
it doesn't use the same ports as LiveKit. Also, make sure you open the correct
ports in the [firewall](../firewall).
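As a sketch, the nginx side of this could look something like the following.
The `/sfu/get` path for lk-jwt-service and the exact proxy headers are
assumptions here; check the lk-jwt-service and LiveKit documentation for the
details:
```
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name livekit.example.com;

    ssl_certificate /etc/letsencrypt/live/livekit.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/livekit.example.com/privkey.pem;

    # Token requests are handled by lk-jwt-service on port 8080
    location /sfu/get {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    # Everything else (the websocket signalling) goes to LiveKit on port 7880
    location / {
        proxy_pass http://[::1]:7880;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```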
## TLS certificate
The TLS-certificate files are not in the usual place under
`/etc/letsencrypt/live`, see [DNS and
certificate](../coturn/README.md#dnscert) under coturn why that is.
As stated before, we use the same user as for coturn. Because this user does
not have the permission to read private keys under `/etc/letsencrypt`, we copy
those files to a place where it can read them. For coturn we copy them to
`/etc/coturn/ssl`, and if you use coturn and have this directory, LiveKit can
read them there too.
If you don't have coturn installed, you should create a directory under
`/etc/livekit` and copy the files to there. Modify the `livekit.yaml` file and
the [script to copy the files](../coturn/README.md#fixssl) to use that
directory. Don't forget to update the `renew_hook` in Letsencrypt if you do.
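A variant of that script for a machine without coturn could look like this; it
assumes you created `/etc/livekit/ssl`, point `livekit.yaml` at it, and that
LiveKit's built-in TURN still uses the `turn.example.com` certificate:
```
#!/bin/bash
# Copy the renewed certificate files to a place where LiveKit (running as
# user turnserver) can read them, then restart LiveKit if it is running.
cd /etc/livekit/ssl
cp /etc/letsencrypt/live/turn.example.com/{fullchain,privkey}.pem .
chown turnserver:turnserver *.pem
systemctl try-reload-or-restart livekit-server
```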
The LiveKit API listens on localhost, IPv6, port 7880. Traffic to this port is
forwarded from port 443 by nginx, which handles TLS, so it shouldn't be reachable
from the outside world.
See [LiveKit's config documentation](https://github.com/livekit/livekit/blob/master/config-sample.yaml)
for more options.
## Systemd
Now define a systemd service file, like this:
```
@ -230,11 +169,125 @@ WantedBy=multi-user.target
Enable and start it.
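Assuming you named the unit `livekit-server.service` (the name used in the
certificate renewal script earlier), that would be:
```
systemctl enable --now livekit-server
```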
IMPORTANT!
LiveKit is configured to use its built-in TURN server, using the same ports as
[coturn](../coturn). Obviously, LiveKit and coturn are mutually exclusive in
this setup. Shut down and disable coturn if you use LiveKit's TURN server.
Clients don't know about LiveKit yet, you'll have to give them the information
via the `.well-known/matrix/client`: add this bit to it to point them at the
SFU:
```
"org.matrix.msc4143.rtc_foci": [
  {
    "type": "livekit",
    "livekit_service_url": "https://livekit.example.com"
  }
]
```
Make sure it is served as `application/json`, just like the other .well-known
files.
# lk-jwt-service {#lkjwt}
lk-jwt-service is a small Go program that handles authorization tokens for use with LiveKit.
You'll need a Go compiler, but the one Debian provides is too old (at the time
of writing this, at least), so we'll install the latest one manually. Check
[the Go website](https://go.dev/dl/) to see which version is the latest; at
the time of writing it's 1.23.3, so we'll install that:
```
wget https://go.dev/dl/go1.23.3.linux-amd64.tar.gz
tar xvfz go1.23.3.linux-amd64.tar.gz
cd go/bin
export PATH=`pwd`:$PATH
cd
```
This means you now have the latest Go compiler in your path, but it's not
installed system-wide. If you want that, copy the whole `go` directory to
`/usr/local` and add `/usr/local/go/bin` to everybody's $PATH.
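A system-wide install would look something like this (as root); this mirrors
the upstream Go installation instructions, adjust the version as needed:
```
rm -rf /usr/local/go
tar -C /usr/local -xzf go1.23.3.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' > /etc/profile.d/go.sh
```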
Get the latest lk-jwt-service source code and compile it (preferably *NOT* as root):
```
git clone https://github.com/element-hq/lk-jwt-service.git
cd lk-jwt-service
go build -o lk-jwt-service
```
Copy and chown the binary to `/usr/local/sbin` (yes: as root):
```
cp ~user/lk-jwt-service/lk-jwt-service /usr/local/sbin
chown root:root /usr/local/sbin/lk-jwt-service
```
## Systemd
Create a service file for systemd, something like this:
```
# This thing does authorization for Element Call
[Unit]
Description=LiveKit JWT Service
After=network.target
[Service]
Restart=always
User=www-data
Group=www-data
WorkingDirectory=/etc/lk-jwt-service
EnvironmentFile=/etc/lk-jwt-service/config
ExecStart=/usr/local/sbin/lk-jwt-service
[Install]
WantedBy=multi-user.target
```
## Configuration {#jwtconfig}
We read the options from `/etc/lk-jwt-service/config`,
which we make read-only for group `www-data` and non-accessible by anyone
else.
```
mkdir /etc/lk-jwt-service
vi /etc/lk-jwt-service/config
chown -R root:www-data /etc/lk-jwt-service
chmod 750 /etc/lk-jwt-service
```
This is what you should put into that config file,
`/etc/lk-jwt-service/config`. The `LIVEKIT_SECRET` and `LIVEKIT_KEY` are the
ones you created while [configuring LiveKit](#keysecret).
```
LIVEKIT_URL=wss://livekit.example.com
LIVEKIT_SECRET=xxx
LIVEKIT_KEY=xxx
LK_JWT_PORT=8080
```
Change the permission accordingly:
```
chown root:www-data /etc/lk-jwt-service/config
chmod 640 /etc/lk-jwt-service/config
```
Now enable and start this thing:
```
systemctl enable --now lk-jwt-service
```
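To check that it started properly and picked up the configuration, you can
look at its status and logs:
```
systemctl status lk-jwt-service
journalctl -u lk-jwt-service
```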
# Element Call widget {#widget}
@ -263,6 +316,9 @@ sudo apt install yarnpkg
/usr/share/nodejs/yarn/bin/yarn install
```
Yes, this whole Node.js, yarn and npm thing is a mess. Better documentation
could be written, but for now this will have to do.
Now clone the Element Call repository and "compile" stuff (again: not as
root):
@ -273,8 +329,12 @@ cd element-call
/usr/share/nodejs/yarn/bin/yarn build
```
After that, you can find the whole shebang under "dist". Copy that to
`/var/www/element-call` and point nginx to it ([see nginx](../nginx#callwidget)).
If it successfully compiles (warnings are more or less ok, errors aren't), you will
find the whole shebang under "dist". Copy that to `/var/www/element-call` and point
nginx to it ([see nginx](../nginx#callwidget)).
## Configuring
It needs a tiny bit of configuring. The default configuration under `config/config.sample.json`
is a good place to start; copy it to `/etc/element-call` and change where
@ -300,3 +360,16 @@ necessary:
"eula": "https://www.example.com/online-EULA.pdf"
}
```
Now tell the clients about this widget. Create
`.well-known/element/element.json`, which is opened by Element Web, Element Desktop
and ElementX to find the Element Call widget. It should look like this:
```
{
"call": {
"widget_url": "https://call.example.com"
}
}
```
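How you serve the `.well-known` files on `example.com` is up to you; with
nginx, a minimal sketch could look like this, assuming the files live under
`/var/www/example.com` (the `default_type` makes sure the extension-less
`matrix/client` and `matrix/server` files are served as JSON, and the CORS
header lets web clients fetch them):
```
location /.well-known/ {
    root /var/www/example.com;
    default_type application/json;
    add_header Access-Control-Allow-Origin "*";
}
```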


@ -0,0 +1,6 @@
{
"call":
{
"widget_url": "https://call.example.com"
}
}


@ -1,21 +1,25 @@
# Firewall
Several ports need to be opened in the firewall; this is a list of all ports
that are needed by the components we describe in this document.
Those for nginx are necessary for Synapse to work, the ones for coturn and
LiveKit only need to be opened if you run those servers.
| Port(s) / range | IP version | Protocol | Application |
| :-------------: | :--------: | :------: | :--------------------- |
| 80, 443 | IPv4/IPv6 | TCP | nginx, reverse proxy |
| 8443 | IPv4/IPv6 | TCP | nginx, federation |
| 3478 | IPv4 | UDP | LiveKit TURN |
| 5349 | IPv4 | TCP | LiveKit TURN TLS |
| 7881 | IPv4/IPv6 | TCP | LiveKit RTC |
| 50000-60000 | IPv4/IPv6 | TCP/UDP | LiveKit RTC |
| 3480 | IPv4 | TCP/UDP | coturn TURN |
| 5351 | IPv4 | TCP/UDP | coturn TURN TLS |
| 40000-49999 | IPv4 | TCP/UDP | coturn RTC |
The ports necessary for TURN depend very much on the specific configuration of
[coturn](../coturn#configuration) and/or [LiveKit](../element-call#livekit).
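As a sketch, with `ufw` this could look like the following; only open the ports
of the components you actually run, and adjust the ranges if you changed them
in the configuration:
```
# nginx: reverse proxy and federation
ufw allow 80,443,8443/tcp
# LiveKit: TURN, TURN TLS and RTC
ufw allow 3478/udp
ufw allow 5349/tcp
ufw allow 7881/tcp
ufw allow 50000:60000/tcp
ufw allow 50000:60000/udp
# coturn: TURN, TURN TLS and RTC
ufw allow 3480/tcp
ufw allow 3480/udp
ufw allow 5351/tcp
ufw allow 5351/udp
ufw allow 40000:49999/tcp
ufw allow 40000:49999/udp
```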


@ -49,7 +49,7 @@ list-timers` lists `certbot.timer`.
However, renewing the certificate means you'll have to restart the software
that's using it. We have 2 or 3 pieces of software that use certificates:
[coturn](../coturn) and/or [LiveKit](../element-call#livekit), and [nginx](../nginx).
Coturn/LiveKit are special with regards to the certificate, see their
respective pages. For nginx it's pretty easy: tell Letsencrypt to restart it
@ -167,6 +167,54 @@ This is a very, very basic configuration; just enough to give us a working
service. See this [complete example](revproxy.conf) which also includes
[Draupnir](../draupnir) and a protected admin endpoint.
# Element Web
You can host the webclient on a different machine, but we'll run it on the
same one in this documentation. You do need a different FQDN, however; you
can't host it under the same name as Synapse, such as:
```
https://matrix.example.com/element-web
```
So you'll need to create an entry in DNS and get a TLS-certificate for it (as
mentioned in the [checklist](../checklist.md)).
Other than that, configuration is quite simple. We'll listen on both http and
https, and redirect http to https:
```
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/element.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/element.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/ssl/dhparams.pem;
    server_name element.example.com;
    location / {
        if ($scheme = http) {
            return 301 https://$host$request_uri;
        }
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header Content-Security-Policy "frame-ancestors 'self'";
    }
    root /usr/share/element-web;
    index index.html;
    access_log /var/log/nginx/elementweb-access.log;
    error_log /var/log/nginx/elementweb-error.log;
}
```
This assumes Element Web is installed under `/usr/share/element-web`, as done
by the Debian package provided by Element.io.
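Installing that package boils down to something like this; check [Element's own
instructions](https://packages.element.io/) for the current repository and key,
as these may change:
```
wget -O /usr/share/keyrings/element-io-archive-keyring.gpg https://packages.element.io/debian/element-io-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/element-io-archive-keyring.gpg] https://packages.element.io/debian/ default main" > /etc/apt/sources.list.d/element-io.list
apt update && apt install element-web
```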
# Synapse-admin {#synapse-admin}


@ -1,8 +1,8 @@
server {
listen 80;
listen [::]:80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /etc/letsencrypt/live/element.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/element.example.com/privkey.pem;
@ -14,7 +14,7 @@ server {
location / {
if ($scheme = http) {
return 301 https://$host$request_uri;
}
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
@ -24,6 +24,6 @@ server {
root /usr/share/element-web;
index index.html;
access_log /var/log/nginx/elementweb-access.log;
error_log /var/log/nginx/elementweb-error.log;
}


@ -214,6 +214,8 @@ upstream login {
After this definition, we can forward traffic to `login`. What traffic to
forward is decided in the `location` statements, see further.
## Synchronisation
A more complex example is the sync workers. Under [Maps](#Maps) we split sync
requests into two different types; those different types are handled by
different worker pools. In our case we have 2 workers for the initial_sync
@ -240,6 +242,39 @@ The `hash` bit is to make sure that request from one user are consistently
forwarded to the same worker. We filled the variable `$mxid_localpart` in the
maps.
## Federation
Something similar goes for the federation workers. Some requests need to go
to the same worker as all the other requests from the same IP-address, others
can go to any of these workers.
We define two upstreams with the same workers, only with different names and
with explicit IP-address-based balancing for one:
```
upstream incoming_federation {
server unix:/run/matrix-synapse/inbound_federation_reader1.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader2.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader3.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader4.sock max_fails=0;
keepalive 10;
}
upstream federation_requests {
hash $remote_addr consistent;
server unix:/run/matrix-synapse/inbound_federation_reader1.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader2.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader3.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader4.sock max_fails=0;
keepalive 10;
}
```
Same workers, different handling. See how we forward requests in the next
paragraph.
See [upstreams.conf](upstreams.conf) for a complete example.
# Locations
@ -249,6 +284,8 @@ the right traffic to the right workers. The Synapse documentation about
types](https://element-hq.github.io/synapse/latest/workers.html#available-worker-applications)
lists which endpoints a specific worker type can handle.
## Login
Let's forward login requests to our login worker. The [documentation for the
generic_worker](https://element-hq.github.io/synapse/latest/workers.html#synapseappgeneric_worker)
says these endpoints are for registration and login:
@ -272,6 +309,8 @@ location ~ ^(/_matrix/client/(api/v1|r0|v3|unstable)/login|/_matrix/client/(r0|v
}
```
## Synchronisation
The docs say that the `generic_worker` can handle these endpoints for synchronisation
requests:
@ -283,8 +322,9 @@ requests:
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
```
We forward those to our 2 worker pools, making sure the heavy initial syncs go
to the `initial_sync` pool, and the normal ones to `normal_sync`. We use the
variable `$sync` for that, which we defined in maps.conf.
```
# Normal/initial sync
@ -306,6 +346,8 @@ location ~ ^(/_matrix/client/(api/v1|r0|v3)/initialSync|/_matrix/client/(api/v1|
}
```
## Media
The media worker is slightly different: some parts are public, but a few bits
are admin stuff. We split those, and limit the admin endpoints to the trusted
addresses we defined earlier:
@ -325,3 +367,31 @@ location ~ ^/_synapse/admin/v1/(purge_)?(media(_cache)?|room|user|quarantine_med
}
```
# Federation
Federation is done by two types of workers: one pool for requests from our
server to the rest of the world, and one pool for everything coming in from the
outside world. Only the latter is relevant for nginx.
The documentation mentions two different types of federation:
* Federation requests
* Inbound federation transaction requests
The second is special, in that requests for that specific endpoint must be
balanced by IP-address. The "normal" federation requests can be sent to any
worker. We're sending all these requests to the same workers, but we make sure
to always send requests from 1 IP-address to the same worker:
```
# Federation readers
location ~ ^(/_matrix/federation/v1/event/|/_matrix/federation/v1/state/|/_matrix/federation/v1/state_ids/|/_matrix/federation/v1/backfill/|/_matrix/federation/v1/get_missing_events/|/_matrix/federation/v1/publicRooms|/_matrix/federation/v1/query/|/_matrix/federation/v1/make_join/|/_matrix/federation/v1/make_leave/|/_matrix/federation/(v1|v2)/send_join/|/_matrix/federation/(v1|v2)/send_leave/|/_matrix/federation/v1/make_knock/|/_matrix/federation/v1/send_knock/|/_matrix/federation/(v1|v2)/invite/|/_matrix/federation/v1/event_auth/|/_matrix/federation/v1/timestamp_to_event/|/_matrix/federation/v1/exchange_third_party_invite/|/_matrix/federation/v1/user/devices/|/_matrix/key/v2/query|/_matrix/federation/v1/hierarchy/) {
include snippets/proxy.conf;
proxy_pass http://incoming_federation;
}
# Inbound federation transactions
location ~ ^/_matrix/federation/v1/send/ {
include snippets/proxy.conf;
proxy_pass http://federation_requests;
}
```


@ -68,19 +68,20 @@ location ~ ^(/_matrix/client/(api/v1|r0|v3|unstable)/login|/_matrix/client/(r0|v
proxy_pass http://login;
}
# Normal/initial sync:
# To which upstream to pass the request depends on the map "$sync"
location ~ ^/_matrix/client/(r0|v3)/sync$ {
include snippets/proxy.conf;
proxy_pass http://$sync;
}
# Normal sync:
# These endpoints are used for normal syncs
location ~ ^/_matrix/client/(api/v1|r0|v3)/events$ {
include snippets/proxy.conf;
proxy_pass http://normal_sync;
}
# Initial sync:
# These endpoints are used for initial syncs
location ~ ^/_matrix/client/(api/v1|r0|v3)/initialSync$ {
include snippets/proxy.conf;
proxy_pass http://initial_sync;
@ -90,11 +91,18 @@ location ~ ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$ {
proxy_pass http://initial_sync;
}
# Federation
# All the "normal" federation stuff:
location ~ ^(/_matrix/federation/v1/event/|/_matrix/federation/v1/state/|/_matrix/federation/v1/state_ids/|/_matrix/federation/v1/backfill/|/_matrix/federation/v1/get_missing_events/|/_matrix/federation/v1/publicRooms|/_matrix/federation/v1/query/|/_matrix/federation/v1/make_join/|/_matrix/federation/v1/make_leave/|/_matrix/federation/(v1|v2)/send_join/|/_matrix/federation/(v1|v2)/send_leave/|/_matrix/federation/v1/make_knock/|/_matrix/federation/v1/send_knock/|/_matrix/federation/(v1|v2)/invite/|/_matrix/federation/v1/event_auth/|/_matrix/federation/v1/timestamp_to_event/|/_matrix/federation/v1/exchange_third_party_invite/|/_matrix/federation/v1/user/devices/|/_matrix/key/v2/query|/_matrix/federation/v1/hierarchy/) {
include snippets/proxy.conf;
proxy_pass http://incoming_federation;
}
# Inbound federation transactions:
location ~ ^/_matrix/federation/v1/send/ {
include snippets/proxy.conf;
proxy_pass http://federation_requests;
}
# Main thread for all the rest
location / {


@ -0,0 +1,116 @@
# Stream workers first, they are special. The documentation says:
# "each stream can only have a single writer"
# Account-data
upstream account_data {
server unix:/run/matrix-synapse/inbound_accountdata.sock max_fails=0;
keepalive 10;
}
# Userdir
upstream userdir {
server unix:/run/matrix-synapse/inbound_userdir.sock max_fails=0;
keepalive 10;
}
# Typing
upstream typing {
server unix:/run/matrix-synapse/inbound_typing.sock max_fails=0;
keepalive 10;
}
# To device
upstream todevice {
server unix:/run/matrix-synapse/inbound_todevice.sock max_fails=0;
keepalive 10;
}
# Receipts
upstream receipts {
server unix:/run/matrix-synapse/inbound_receipts.sock max_fails=0;
keepalive 10;
}
# Presence
upstream presence {
server unix:/run/matrix-synapse/inbound_presence.sock max_fails=0;
keepalive 10;
}
# Push rules
upstream push_rules {
server unix:/run/matrix-synapse/inbound_push_rules.sock max_fails=0;
keepalive 10;
}
# End of the stream workers, the following workers are of a "normal" type
# Media
# If more than one media worker is used, they *must* all run on the same machine
upstream media {
server unix:/run/matrix-synapse/inbound_mediaworker.sock max_fails=0;
keepalive 10;
}
# Synchronisation by clients:
# Normal sync. Not particularly heavy, but happens a lot
upstream normal_sync {
# Use the username mapper result for hash key
hash $mxid_localpart consistent;
server unix:/run/matrix-synapse/inbound_normal_sync1.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_normal_sync2.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_normal_sync3.sock max_fails=0;
keepalive 10;
}
# Initial sync
# Much heavier than a normal sync, but happens less often
upstream initial_sync {
# Use the username mapper result for hash key
hash $mxid_localpart consistent;
server unix:/run/matrix-synapse/inbound_initial_sync1.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_initial_sync2.sock max_fails=0;
keepalive 10;
}
# Login
upstream login {
server unix:/run/matrix-synapse/inbound_login.sock max_fails=0;
keepalive 10;
}
# Clients
upstream client {
hash $mxid_localpart consistent;
server unix:/run/matrix-synapse/inbound_clientworker1.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_clientworker2.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_clientworker3.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_clientworker4.sock max_fails=0;
keepalive 10;
}
# Federation
# "Normal" federation, balanced round-robin over 4 workers.
upstream incoming_federation {
server unix:/run/matrix-synapse/inbound_federation_reader1.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader2.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader3.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader4.sock max_fails=0;
keepalive 10;
}
# Inbound federation requests, need to be balanced by IP-address, but can go
# to the same pool of workers as the other federation stuff.
upstream federation_requests {
hash $remote_addr consistent;
server unix:/run/matrix-synapse/inbound_federation_reader1.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader2.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader3.sock max_fails=0;
server unix:/run/matrix-synapse/inbound_federation_reader4.sock max_fails=0;
keepalive 10;
}
# Main thread for all the rest
upstream inbound_main {
server unix:/run/matrix-synapse/inbound_main.sock max_fails=0;
keepalive 10;
}


@ -75,7 +75,7 @@ Make sure you add these lines under the one that gives access to the postgres
superuser, the first line.
# Tuning {#tuning}
This is for later, check [Tuning your PostgreSQL Server](https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server)
on the PostgreSQL wiki.
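As a very rough starting point for the middle scenario from the
[checklist](../checklist.md) (4 cores, 16GB RAM), something like this in
`postgresql.conf` is reasonable; the right values depend entirely on your
actual load:
```
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 32MB
maintenance_work_mem = 512MB
max_connections = 200
```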


@ -180,7 +180,11 @@ Pointing clients to the correct server needs this at
Very important: both names (example.com and matrix.example.com) must be A
and/or AAAA records in DNS, not CNAME.
You can also publish support data: administrator, security officer, helpdesk
page. Publish that as `.well-known/matrix/support`.
See the included files for more elaborate examples, and check
[nginx](../nginx) for details about how to publish this data.
# E-mail {#Email}


@ -0,0 +1,12 @@
{
"m.homeserver": {
"base_url": "https://matrix.example.com"
},
"org.matrix.msc4143.rtc_foci":[
{
"type": "livekit",
"livekit_service_url": "https://livekit.example.com"
}
]
}


@ -0,0 +1 @@
{"m.server": "matrix.example.com"}


@ -0,0 +1,17 @@
{
"contacts": [
{
"email_address": "admin@example.com",
"matrix_id": "@john:example.com",
"role": "m.role.admin"
},
{
"email_address": "security@example.com",
"matrix_id": "@bob:example.com",
"role": "m.role.security"
}
],
"support_page": "https://support.example.com/"
}