---
gitea: none
include_toc: true
---

# Introduction to a worker-based setup

Very busy servers are brought down because a single thread can't keep up with
the load. So you want to create several threads for different types of work.

See this [Matrix blog](https://matrix.org/blog/2020/11/03/how-we-fixed-synapse-s-scalability/)
for some background information.

The traditional Synapse setup is one monolithic piece of software that does
everything. Joining a very busy room creates a bottleneck, as the server will
spend all its cycles on synchronizing that room.

You can split the server into workers, which are basically Synapse servers
themselves. Redirect specific tasks to them and you have several different
servers doing all kinds of tasks at the same time. A busy room will no longer
freeze the rest.

Workers communicate with each other via socket files and Redis.

**Important note**

While the use of workers can drastically improve speed, the law of diminishing
returns applies. Splitting off more and more workers will not further improve
speed after a certain point. Plus: you need to understand what the most
resource-consuming tasks are before you can start to plan how many workers you
need for which tasks.

In this document we'll basically create a worker for every task, and several
workers for a few heavy tasks, as an example. Your mileage may not only vary, it
will.

Tuning the rest of the machine and network also counts, especially PostgreSQL.
A well-tuned PostgreSQL can make a really big difference and should probably
be considered even before configuring workers.

With workers, PostgreSQL's configuration should be changed accordingly: see
[Tuning PostgreSQL for a Matrix Synapse
server](https://tcpipuk.github.io/postgres/tuning/index.html) for hints and
examples.
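
Purely as an illustration of the kind of settings involved (the linked guide is the place for real numbers), tuning usually centres on a handful of `postgresql.conf` values; the figures below are placeholders assuming roughly 8 GB of RAM dedicated to PostgreSQL, not recommendations:

```
# Placeholder values, assuming ~8 GB RAM reserved for PostgreSQL;
# measure and adjust for your own hardware and workload.
shared_buffers = 2GB             # roughly a quarter of RAM
effective_cache_size = 6GB       # what the OS is expected to cache
work_mem = 32MB                  # per sort/hash operation, so keep it modest
max_connections = 500            # Synapse workers each keep their own connections
```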

# Redis

First step is to install Redis.

```
apt install redis-server
```

For less overhead we use a UNIX socket instead of a network connection to
localhost. Disable the TCP listener and enable the socket in
`/etc/redis/redis.conf`:

```
port 0

unixsocket /run/redis/redis-server.sock
unixsocketperm 770
```

Our matrix user (`matrix-synapse`) has to be able to read from and write to
that socket, which is created by Redis and owned by `redis:redis`, so we add
user `matrix-synapse` to the group `redis`.

```
adduser matrix-synapse redis
```

Restart Redis for these changes to take effect. Check if port 6379 is no
longer active, and if the socketfile `/run/redis/redis-server.sock` exists.
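
One way to check both (assuming `ss` from iproute2 and the default socket path configured above):

```
ss -tlnp | grep 6379                    # should print nothing
ls -l /run/redis/redis-server.sock      # should exist, owned by redis:redis
```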

# Synapse

Workers communicate with each other over sockets, which are all placed in one
directory. To make sure only the users that need access will have it, we
create a new group and add the users to it.

Then create the directory where all the worker socket files will live, and
give it the correct user, group and permissions:

```
groupadd --system clubmatrix
adduser matrix-synapse clubmatrix
adduser www-data clubmatrix
mkdir /run/matrix-synapse
dpkg-statoverride --add --update matrix-synapse clubmatrix 2770 /run/matrix-synapse
```
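
A quick sanity check that this took effect; the directory should show mode `2770`, owner `matrix-synapse` and group `clubmatrix`:

```
ls -ld /run/matrix-synapse
```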

First we change Synapse from listening on `localhost:8008` to listening on a
socket. We'll do most of our worker-related work in `conf.d/listeners.yaml`, so
let's put the new configuration for the main process there.

Add the new listeners:

```
listeners:
  - path: /run/matrix-synapse/inbound_main.sock
    mode: 0660
    type: http
    resources:
      - names:
          - client
          - consent
          - federation

  - path: /run/matrix-synapse/replication_main.sock
    mode: 0660
    type: http
    resources:
      - names:
          - replication
```

This means Synapse will create two sockets under `/run/matrix-synapse`: one
for incoming traffic that is forwarded by nginx (`inbound_main.sock`), and one
for communicating with all the other workers (`replication_main.sock`).

If you restart Synapse now, it won't do anything anymore, because nginx is
still forwarding its traffic to `localhost:8008`. We'll get to nginx later,
but you'd have to change

```
proxy_pass http://localhost:8008;
```

to

```
proxy_pass http://unix:/run/matrix-synapse/inbound_main.sock;
```
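
In a typical nginx virtual host that directive sits inside a `location` block; a minimal sketch (the complete nginx configuration is covered later) might look like this:

```
location /_matrix {
    proxy_pass http://unix:/run/matrix-synapse/inbound_main.sock;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```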

If you've done this, restart Synapse, check if the socket is created and has
the correct permissions. Now point Synapse at Redis in `conf.d/redis.yaml`:

```
redis:
  enabled: true
  path: /run/redis/redis-server.sock
```

Check if Synapse can connect to Redis via the socket; you should find log
entries like this:

```
synapse.replication.tcp.redis - 292 - INFO - sentinel - Connecting to redis server UNIXAddress('/run/redis/redis-server.sock')
synapse.util.httpresourcetree - 56 - INFO - sentinel - Attaching <synapse.replication.http.ReplicationRestResource object at 0x7f95f850d150> to path b'/_synapse/replication'
synapse.replication.tcp.redis - 126 - INFO - sentinel - Connected to redis
synapse.replication.tcp.redis - 138 - INFO - subscribe-replication-0 - Sending redis SUBSCRIBE for ['matrix.example.com/USER_IP', 'matrix.example.com']
synapse.replication.tcp.redis - 141 - INFO - subscribe-replication-0 - Successfully subscribed to redis stream, sending REPLICATE command
synapse.replication.tcp.redis - 146 - INFO - subscribe-replication-0 - REPLICATE successfully sent
```

# Worker overview

Every worker is, in fact, a Synapse server, only with a limited set of tasks.
Some tasks can be handled by a number of workers, others only by one. Every
worker starts as a normal Synapse process, reading all the normal
configuration files, and then a bit of configuration for the specific worker
itself.

Workers need to communicate with each other and the main process; they do that
via the `replication` sockets under `/run/matrix-synapse`.

Most workers also need a way to be fed traffic by nginx; they have an `inbound`
socket for that, in the same directory.

Finally, all those replicating workers need to be registered in the main
process: all workers and their replication sockets are listed in the `instance_map`.

Every worker has its own configuration file; we'll put those under
`/etc/matrix-synapse/workers`. We'll create that directory, and one systemd
service file for all workers, further down.

## Types of workers

We'll make separate workers for almost every task, and several for the
heaviest task: synchronising. An overview of which endpoints are to be
forwarded to a worker is in [Synapse's documentation](https://element-hq.github.io/synapse/latest/workers.html#available-worker-applications).

We'll create the following workers:

* login
* federation_sender
* mediaworker
* userdir
* pusher
* push_rules
* typing
* todevice
* accountdata
* presence
* receipts
* initial_sync: 1 and 2
* normal_sync: 1, 2 and 3

Some of them are `stream_writers`, and the [documentation about
stream_writers](https://element-hq.github.io/synapse/latest/workers.html#stream-writers)
says:

```
Note: The same worker can handle multiple streams, but unless otherwise documented, each stream can only have a single writer.
```

So, stream writers must have unique tasks: you can't have two or more workers
writing to the same stream. Stream writers have to be listed in `stream_writers`:

```
stream_writers:
  account_data:
    - accountdata
  presence:
    - presence
  receipts:
    - receipts
  to_device:
    - todevice
  typing:
    - typing
  push_rules:
    - push_rules
```

As you can see, we've given the stream workers the name of the stream they're
writing to. We could also combine all those streams into one worker, which
would probably be enough for most instances: define a single worker with a
name like `streamwriter` and list it under every stream instead of a separate
worker per stream.
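
In that case the `stream_writers` block would point every stream at that one worker, along these lines:

```
stream_writers:
  account_data:
    - streamwriter
  presence:
    - streamwriter
  receipts:
    - streamwriter
  to_device:
    - streamwriter
  typing:
    - streamwriter
  push_rules:
    - streamwriter
```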

Finally, we have to list all these workers under `instance_map`: their name
and their replication socket:

```
instance_map:
  main:
    path: "/run/matrix-synapse/replication_main.sock"
  login:
    path: "/run/matrix-synapse/replication_login.sock"
  federation_sender:
    path: "/run/matrix-synapse/replication_federation_sender.sock"
  mediaworker:
    path: "/run/matrix-synapse/replication_mediaworker.sock"
  ...
  normal_sync1:
    path: "/run/matrix-synapse/replication_normal_sync1.sock"
  normal_sync2:
    path: "/run/matrix-synapse/replication_normal_sync2.sock"
  normal_sync3:
    path: "/run/matrix-synapse/replication_normal_sync3.sock"
```

## Defining a worker

Every worker starts with the normal configuration files, and then loads its
own. We put those files under `/etc/matrix-synapse/workers`. You have to
create that directory, and make sure Synapse can read them. Being
professionally paranoid, we restrict access to that directory and the files in
it:

```
mkdir /etc/matrix-synapse/workers
chown matrix-synapse:matrix-synapse /etc/matrix-synapse/workers
chmod 750 /etc/matrix-synapse/workers
```

### Generic worker

Workers look very much the same; very little configuration is needed. This is
what you need:

* name
* replication socket (not every worker needs this)
* inbound socket (not every worker needs this)
* log configuration

One worker we use handles the login actions; this is how it's configured:

```
worker_app: "synapse.app.generic_worker"
worker_name: "login"
worker_log_config: "/etc/matrix-synapse/logconf.d/login.yaml"

worker_listeners:
  - path: "/run/matrix-synapse/inbound_login.sock"
    type: http
    resources:
      - names:
          - client
          - consent
          - federation

  - path: "/run/matrix-synapse/replication_login.sock"
    type: http
    resources:
      - names: [replication]
```

The first line defines the type of worker. In the past there were quite a few
different types, but most of them have been phased out in favour of one
generic worker.

The `worker_log_config` defines how and where the worker logs. Of course you'll
need to configure that too, see further on.

The first `listener` is the inbound socket, which nginx uses to forward
login-related traffic to the worker. You have to configure nginx to do that;
we'll get to that later. Make sure nginx can write to this socket. The
`resources` vary between workers.

The second `listener` is used for communication with the other workers and the
main thread. The only `resource` it needs is `replication`. This socket needs
to be listed in the `instance_map` in the main thread.

Of course, if you need to scale up to the point where you need more than one
machine, these listeners can no longer use UNIX sockets, but will have to use
the network. This creates extra overhead, so you want to use sockets whenever
possible.
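
For the record, a replication listener over the network would look roughly like this (the address and port are just examples); the matching `instance_map` entry on the main process would then use `host: 10.0.0.2` and `port: 9093` instead of `path`:

```
worker_listeners:
  - type: http
    port: 9093
    bind_addresses: ['10.0.0.2']
    resources:
      - names: [replication]
```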

### Media worker

The media worker is slightly different from the generic one. It doesn't use
`synapse.app.generic_worker`, but a specialised app: `synapse.app.media_repository`.
To prevent the main process from handling media itself, you have to explicitly
tell it to leave that to the worker, by adding this to the configuration (in
our setup `conf.d/listeners.yaml`):

```
enable_media_repo: false
media_instance_running_background_jobs: mediaworker
```

The worker `mediaworker` looks like this:

```
worker_app: "synapse.app.media_repository"
worker_name: "mediaworker"
worker_log_config: "/etc/matrix-synapse/logconf.d/media.yaml"

worker_listeners:
  - path: "/run/matrix-synapse/inbound_mediaworker.sock"
    type: http
    resources:
      - names:
          - media
          - federation

  - path: "/run/matrix-synapse/replication_mediaworker.sock"
    type: http
    resources:
      - names: [replication]
```

If you use more than one mediaworker, know that they must all run on the same
machine; scaling it over more than one machine will not work.

## Worker logging

As stated before, you configure the logging of workers in a separate yaml
file. As with the definitions of the workers themselves, you need a directory
for those files. We'll use `/etc/matrix-synapse/logconf.d`; make it and fix
the permissions.

There's a lot you can configure for logging, but for now we'll give every
worker the same layout. Here's the configuration for the `login` worker:

```
version: 1

formatters:
  precise:
    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'

handlers:
  file:
    class: logging.handlers.TimedRotatingFileHandler
    formatter: precise
    filename: /var/log/matrix-synapse/login.log
    when: midnight
    backupCount: 3
    encoding: utf8

  buffer:
    class: synapse.logging.handlers.PeriodicallyFlushingMemoryHandler
    target: file
    capacity: 10
    flushLevel: 30
    period: 5

loggers:
  synapse.metrics:
    level: WARN
    handlers: [buffer]
  synapse.replication.tcp:
    level: WARN
    handlers: [buffer]
  synapse.util.caches.lrucache:
    level: WARN
    handlers: [buffer]
  twisted:
    level: WARN
    handlers: [buffer]
  synapse:
    level: INFO
    handlers: [buffer]

root:
  level: INFO
  handlers: [buffer]
```

The only thing you need to change is the filename to which the logs are
written. You could create only one configuration and use that in every worker,
but that would mean all logs end up in the same file, which may not be
what you want.
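
A small shell loop can stamp out copies of the `login` configuration for the other workers, since only the filename differs (the worker names here are just examples):

```
cd /etc/matrix-synapse/logconf.d
for worker in federation_sender mediaworker userdir pusher; do
  sed "s/login/${worker}/g" login.yaml > "${worker}.yaml"
done
```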

See the [Python
documentation](https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema)
for all the ins and outs of logging.

# Systemd

You want Synapse and its workers managed by systemd. First of all we define a
`target`: a group of services that belong together.

```
systemctl edit --force --full matrix-synapse.target
```

Feed it with this bit:

```
[Unit]
Description=Matrix Synapse with all its workers
After=network.target

[Install]
WantedBy=multi-user.target
```

First add `matrix-synapse.service` to this target by overriding the `WantedBy`
in the unit file (`systemctl edit matrix-synapse.service`):

```
[Install]
WantedBy=matrix-synapse.target
```

The same `WantedBy` needs to go in the unit files for every worker. For the
workers we're using a template instead of separate unit files for every single
one. Create the template:

```
systemctl edit --full --force matrix-synapse-worker@
```

Fill it with this content:

```
[Unit]
Description=Synapse %i
AssertPathExists=/etc/matrix-synapse/workers/%i.yaml

# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target
ReloadPropagatedFrom=matrix-synapse.target

# if this is started at the same time as the main, let the main process start
# first, to initialise the database schema.
After=matrix-synapse.service

[Service]
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=-/etc/default/matrix-synapse
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.generic_worker --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --config-path=/etc/matrix-synapse/workers/%i.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
SyslogIdentifier=matrix-synapse-%i

[Install]
WantedBy=matrix-synapse.target
```

Every worker needs to be enabled and started individually. The quickest way to
do that is to run a loop over the configuration files, stripping the `.yaml`
extension so the instance name matches what the template expects:

```
cd /etc/matrix-synapse/workers
for worker in *.yaml; do systemctl enable --now matrix-synapse-worker@${worker%.yaml}; done
```

After a reboot, Synapse and all its workers should be started.
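
A quick way to verify that everything came up (the exact instance names depend on your worker files):

```
systemctl status matrix-synapse.target
systemctl list-units 'matrix-synapse-worker@*'
```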