---
gitea: none
include_toc: true
---

# Introduction to a worker-based setup

Very busy servers are brought down because a single thread can't keep up with
the load. So you want to create several threads for different types of work.

See this [Matrix blog](https://matrix.org/blog/2020/11/03/how-we-fixed-synapse-s-scalability/)
for some background information.

The traditional Synapse setup is one monolithic piece of software that does
everything. Joining a very busy room creates a bottleneck, as the server will
spend all its cycles on synchronizing that room.

You can split the server into workers, which are basically Synapse servers
themselves. Redirect specific tasks to them and you have several different
servers doing all kinds of tasks at the same time. A busy room will no longer
freeze the rest.

Workers communicate with each other via UNIX sockets and Redis. We choose
UNIX sockets because they're much more efficient than network sockets. Of
course, if you scale to more than one machine, you will need network sockets
instead.

**Important note**

While the use of workers can drastically improve speed, the law of diminishing
returns applies. Splitting off more and more workers will not further improve
speed after a certain point. Plus: you need to understand what the most
resource-consuming tasks are before you can start to plan how many workers you
need for which tasks.

In this document we'll basically create a worker for every task, and several
workers for a few heavy tasks, as an example. Your mileage may not only vary,
it will.

Tuning the rest of the machine and network also counts, especially PostgreSQL.
A well-tuned PostgreSQL can make a really big difference and should probably
be considered even before configuring workers.

With workers, PostgreSQL's configuration should be changed accordingly: see
[Tuning PostgreSQL for a Matrix Synapse
server](https://tcpipuk.github.io/postgres/tuning/index.html) for hints and
examples.
# Redis

Workers need Redis as part of their communication, so our first step is
to install Redis.

```
apt install redis-server
```

For less overhead we use a UNIX socket instead of a network connection to
localhost. Disable the TCP listener and enable the socket in
`/etc/redis/redis.conf`:

```
port 0

unixsocket /run/redis/redis-server.sock
unixsocketperm 770
```

Our matrix user (`matrix-synapse`) has to be able to read from and write to
that socket, which is created by Redis and owned by `redis:redis`, so we add
user `matrix-synapse` to the group `redis`. You may come up with a
finer-grained permission solution, but for our example this will do.

```
adduser matrix-synapse redis
```

Restart Redis for these changes to take effect. Check the logs for error
messages, verify that port 6379 is no longer active, and check that the socket
file `/run/redis/redis-server.sock` exists.
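
A quick sanity check, assuming the `ss` and `redis-cli` tools are installed:

```
# The TCP listener should be gone:
ss -lnt | grep 6379

# The socket should answer; run this as root or as a user in the redis group:
redis-cli -s /run/redis/redis-server.sock ping
```

The first command should return nothing, the second should answer `PONG`.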

Now point Synapse at Redis in `conf.d/redis.yaml`:

```
redis:
  enabled: true
  path: /run/redis/redis-server.sock
```

Restart Synapse and check if it can connect to Redis via the socket; you should
find log entries like this:

```
synapse.replication.tcp.redis - 292 - INFO - sentinel - Connecting to redis server UNIXAddress('/run/redis/redis-server.sock')
synapse.util.httpresourcetree - 56 - INFO - sentinel - Attaching <synapse.replication.http.ReplicationRestResource object at 0x7f95f850d150> to path b'/_synapse/replication'
synapse.replication.tcp.redis - 126 - INFO - sentinel - Connected to redis
synapse.replication.tcp.redis - 138 - INFO - subscribe-replication-0 - Sending redis SUBSCRIBE for ['matrix.example.com/USER_IP', 'matrix.example.com']
synapse.replication.tcp.redis - 141 - INFO - subscribe-replication-0 - Successfully subscribed to redis stream, sending REPLICATE command
synapse.replication.tcp.redis - 146 - INFO - subscribe-replication-0 - REPLICATE successfully sent
```
# Synapse

Workers communicate with each other over sockets that are all placed in one
directory. These sockets are owned by `matrix-synapse:matrix-synapse`, so make
sure nginx can write to them: add user `www-data` to group `matrix-synapse`
and restart nginx.
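
For example, using the same approach as with Redis earlier:

```
adduser www-data matrix-synapse
systemctl restart nginx
```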

Then, make sure systemd creates the directory for the sockets as soon as
Synapse starts:

```
systemctl edit matrix-synapse
```

Now override parts of the `Service` stanza to add these two lines:

```
[Service]
RuntimeDirectory=matrix-synapse
RuntimeDirectoryPreserve=yes
```

The directory `/run/matrix-synapse` will be created as soon as Synapse starts,
and will not be removed on restart or stop, because that would create problems
for workers that suddenly lose their sockets.

Then we change Synapse from listening on `localhost:8008` to listening on a
socket. We'll do most of our worker-related work in `conf.d/listeners.yaml`, so
let's put the new listener configuration for the main process there.

Remove the `localhost:8008` stanza, and configure these two sockets:

```
listeners:
  - path: /run/matrix-synapse/inbound_main.sock
    mode: 0660
    type: http
    resources:
      - names:
          - client
          - consent
          - federation

  - path: /run/matrix-synapse/replication_main.sock
    mode: 0660
    type: http
    resources:
      - names:
          - replication
```
This means Synapse will create two sockets under `/run/matrix-synapse`: one
for incoming traffic that is forwarded by nginx (`inbound_main.sock`), and one
for communicating with all the other workers (`replication_main.sock`).

If you restart Synapse now, it won't do anything anymore, because nginx is
still forwarding its traffic to `localhost:8008`. We'll get to nginx later,
but for now you should change:

```
proxy_pass http://localhost:8008;
```

to

```
proxy_pass http://unix:/run/matrix-synapse/inbound_main.sock;
```

Once you've done this, restart Synapse and nginx, and check that the sockets
are created and have the correct permissions.
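
For example:

```
ls -l /run/matrix-synapse/
```

Both sockets should be listed, owned by `matrix-synapse:matrix-synapse` and
with the `0660` mode we configured.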

Synapse should work normally again: we've switched from network sockets to
UNIX sockets, and added Redis. Now we'll create the actual workers.
# Worker overview

Every worker is, in fact, a Synapse server, only with a limited set of tasks.
Some tasks can be handled by a number of workers, others only by one. Every
worker starts as a normal Synapse process, reading all the normal
configuration files, and then a bit of configuration for the specific worker
itself.

Workers need to communicate with each other and the main process; they do that
via the `replication` sockets under `/run/matrix-synapse` and Redis.

Most workers also need a way to be fed traffic by nginx: they have an `inbound`
socket for that, in the same directory.

Finally, all those replicating workers need to be registered in the main
process: all workers and their replication sockets are listed in the
`instance_map`.
## Types of workers

We'll make separate workers for almost every task, and several for the
heaviest task: synchronising. An overview of which endpoints should be
forwarded to a worker is in [Synapse's documentation](https://element-hq.github.io/synapse/latest/workers.html#available-worker-applications).

We'll create the following workers:

* login
* federation_sender
* mediaworker
* userdir
* pusher
* push_rules
* typing
* todevice
* accountdata
* presence
* receipts
* initial_sync: 1 and 2
* normal_sync: 1, 2 and 3

Some of them are `stream_writers`, and the [documentation about
stream_writers](https://element-hq.github.io/synapse/latest/workers.html#stream-writers)
says:

```
Note: The same worker can handle multiple streams, but unless otherwise documented, each stream can only have a single writer.
```

So, stream writers must have unique tasks: you can't have two or more workers
writing to the same stream. Stream writers have to be listed in `stream_writers`:

```
stream_writers:
  account_data:
    - accountdata
  presence:
    - presence
  receipts:
    - receipts
  to_device:
    - todevice
  typing:
    - typing
  push_rules:
    - push_rules
```

As you can see, we've given the stream workers the name of the stream they're
writing to. We could combine all those streams into one worker, which would
probably be enough for most instances.

We could also define a single worker with the name `streamwriter` and list it
under all streams, instead of a separate worker for every stream.
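
As a sketch of that variant, with a hypothetical worker named `streamwriter`
(not used elsewhere in this document), the `stream_writers` section would
become:

```
stream_writers:
  account_data:
    - streamwriter
  presence:
    - streamwriter
  receipts:
    - streamwriter
  to_device:
    - streamwriter
  typing:
    - streamwriter
  push_rules:
    - streamwriter
```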

Finally, we have to list all these workers under `instance_map`: their name
and their replication socket:

```
instance_map:
  main:
    path: "/run/matrix-synapse/replication_main.sock"
  login:
    path: "/run/matrix-synapse/replication_login.sock"
  federation_sender:
    path: "/run/matrix-synapse/replication_federation_sender.sock"
  mediaworker:
    path: "/run/matrix-synapse/replication_mediaworker.sock"
  ...
  normal_sync1:
    path: "/run/matrix-synapse/replication_normal_sync1.sock"
  normal_sync2:
    path: "/run/matrix-synapse/replication_normal_sync2.sock"
  normal_sync3:
    path: "/run/matrix-synapse/replication_normal_sync3.sock"
```
## Defining a worker

Every worker starts with the normal configuration files, and then loads its
own. We put those files under `/etc/matrix-synapse/workers`. You have to
create that directory, and make sure Synapse can read them. Being
professionally paranoid, we restrict access to that directory and the files in
it:

```
mkdir /etc/matrix-synapse/workers
chown matrix-synapse:matrix-synapse /etc/matrix-synapse/workers
chmod 750 /etc/matrix-synapse/workers
```

We'll fill this directory with `yaml` files; one for each worker.
### Generic worker

Workers look very much the same; very little configuration is needed. This is
what you need:

* name
* replication socket (not every worker needs this)
* inbound socket (not every worker needs this)
* log configuration

One worker we use handles the login actions; this is how it's configured:
```
worker_app: "synapse.app.generic_worker"
worker_name: "login"
worker_log_config: "/etc/matrix-synapse/logconf.d/login.yaml"

worker_listeners:
  - path: "/run/matrix-synapse/inbound_login.sock"
    type: http
    resources:
      - names:
          - client
          - consent
          - federation

  - path: "/run/matrix-synapse/replication_login.sock"
    type: http
    resources:
      - names: [replication]
```

The first line defines the type of worker. In the past there were quite a few
different types, but most of them have been phased out in favour of one
generic worker.

The `worker_log_config` defines how and where the worker logs. Of course you'll
need to configure that too; see below.

The first `listener` is the inbound socket, which nginx uses to forward all
login-related traffic to this worker. You have to configure nginx to do that;
we'll get to that later. Make sure nginx can write to this socket. The
`resources` vary between workers.

The second `listener` is used for communication with the other workers and the
main process. The only `resource` it needs is `replication`. This socket needs
to be listed in the `instance_map` in the main process.
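
The other workers in our list follow the same pattern. As an illustration, a
sync worker such as `normal_sync1` could look like the sketch below; the
inbound socket and log file names aren't prescribed anywhere, they simply
follow the naming scheme we've been using, and `client` is the resource the
sync endpoints fall under:

```
worker_app: "synapse.app.generic_worker"
worker_name: "normal_sync1"
worker_log_config: "/etc/matrix-synapse/logconf.d/normal_sync1.yaml"

worker_listeners:
  - path: "/run/matrix-synapse/inbound_normal_sync1.sock"
    type: http
    resources:
      - names:
          - client

  - path: "/run/matrix-synapse/replication_normal_sync1.sock"
    type: http
    resources:
      - names: [replication]
```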

Of course, if you need to scale up to the point where you need more than one
machine, these listeners can no longer use UNIX sockets, but will have to use
the network. This creates extra overhead, so you want to use sockets whenever
possible.
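
In that case a replication listener gets a `port` instead of a `path`, and the
matching `instance_map` entry on the main process uses `host` and `port`. A
rough sketch, with placeholder host name and port number:

```
worker_listeners:
  - type: http
    port: 9091
    resources:
      - names: [replication]
```

And on the main process:

```
instance_map:
  normal_sync1:
    host: "worker-machine.example.com"
    port: 9091
```
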
### Media worker

The media worker is slightly different from the generic one. It doesn't use
`synapse.app.generic_worker`, but a specialised app: `synapse.app.media_repository`.
To prevent the main process from handling media itself, you have to explicitly
tell it to leave that to the worker, by adding this to the configuration (in
our setup `conf.d/listeners.yaml`):

```
enable_media_repo: false
media_instance_running_background_jobs: mediaworker
```

The worker `mediaworker` looks like this:
```
worker_app: "synapse.app.media_repository"
worker_name: "mediaworker"
worker_log_config: "/etc/matrix-synapse/logconf.d/media.yaml"

worker_listeners:
  - path: "/run/matrix-synapse/inbound_mediaworker.sock"
    type: http
    resources:
      - names:
          - media
          - federation

  - path: "/run/matrix-synapse/replication_mediaworker.sock"
    type: http
    resources:
      - names: [replication]
```

If you use more than one mediaworker, know that they must all run on the same
machine; scaling it over more than one machine will not work.
## Worker logging

As stated before, you configure the logging of workers in a separate yaml
file. As with the definitions of the workers themselves, you need a directory
for that. We'll use `/etc/matrix-synapse/logconf.d`; make it and fix the
permissions.
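
For example, mirroring the workers directory we created earlier:

```
mkdir /etc/matrix-synapse/logconf.d
chown matrix-synapse:matrix-synapse /etc/matrix-synapse/logconf.d
chmod 750 /etc/matrix-synapse/logconf.d
```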

There's a lot you can configure for logging, but for now we'll give every
worker the same layout. Here's the configuration for the `login` worker:
```
version: 1
formatters:
  precise:
    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
handlers:
  file:
    class: logging.handlers.TimedRotatingFileHandler
    formatter: precise
    filename: /var/log/matrix-synapse/login.log
    when: midnight
    backupCount: 3
    encoding: utf8

  buffer:
    class: synapse.logging.handlers.PeriodicallyFlushingMemoryHandler
    target: file
    capacity: 10
    flushLevel: 30
    period: 5

loggers:
  synapse.metrics:
    level: WARN
    handlers: [buffer]
  synapse.replication.tcp:
    level: WARN
    handlers: [buffer]
  synapse.util.caches.lrucache:
    level: WARN
    handlers: [buffer]
  twisted:
    level: WARN
    handlers: [buffer]
  synapse:
    level: INFO
    handlers: [buffer]

root:
  level: INFO
  handlers: [buffer]
```

The only thing you need to change is the filename to which the logs are
written. You could create only one configuration and use that in every worker,
but that would mean all logs end up in the same file, which may not be what
you want.
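
One way to do that, assuming you use the `login` configuration above as a
template, is a small shell loop that only swaps the log filename:

```
cd /etc/matrix-synapse/logconf.d
# The list of workers here is just an example; repeat for the others you define.
for worker in federation_sender userdir pusher typing; do
  sed "s/login\.log/${worker}.log/" login.yaml > "${worker}.yaml"
done
```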

See the [Python
documentation](https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema)
for all the ins and outs of logging.
# Systemd

You want Synapse and its workers managed by systemd. First of all we define a
`target`: a group of services that belong together.

```
systemctl edit --force --full matrix-synapse.target
```

Feed it with this bit:

```
[Unit]
Description=Matrix Synapse with all its workers
After=network.target

[Install]
WantedBy=multi-user.target
```
First add `matrix-synapse.service` to this target by overriding the `WantedBy`
in the unit file (`systemctl edit matrix-synapse.service`):

```
[Install]
WantedBy=matrix-synapse.target
```

The same `WantedBy` needs to go in the unit files for every worker. For the
workers we're using a template instead of separate unit files for every single
one. Create the template:

```
systemctl edit --full --force matrix-synapse-worker@
```
Fill it with this content:

```
[Unit]
Description=Synapse %i
AssertPathExists=/etc/matrix-synapse/workers/%i.yaml

# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target
ReloadPropagatedFrom=matrix-synapse.target

# If this is started at the same time as the main process, let the main
# process start first, to initialise the database schema.
After=matrix-synapse.service

[Service]
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=-/etc/default/matrix-synapse
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.generic_worker --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --config-path=/etc/matrix-synapse/workers/%i.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
SyslogIdentifier=matrix-synapse-%i

[Install]
WantedBy=matrix-synapse.target
```
Every worker needs to be enabled and started individually. The quickest way to
do that is to run a loop over the directory, stripping the `.yaml` extension so
that the instance name matches the config file the template expects:

```
cd /etc/matrix-synapse/workers
for worker in *.yaml; do systemctl enable --now matrix-synapse-worker@${worker%.yaml}; done
```

After a reboot, Synapse and all its workers should be started.
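
For that to happen, the target itself must be enabled as well, in case that
didn't already happen when you created it. And because the workers are
`PartOf` the target, you can also use it to restart them all in one go:

```
systemctl enable matrix-synapse.target
systemctl restart matrix-synapse.target
```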