---
gitea: none
include_toc: true
---
# Reverse proxy for Synapse with workers
Changing nginx from a reverse proxy for a normal, monolithic Synapse to a
reverse proxy for a Synapse that uses workers requires quite a few changes.

As mentioned in [Synapse with workers](../../synapse/workers/README.md#synapse),
we're changing the "backend" from network sockets to UNIX sockets.

Because we're going to have to forward a lot of specific requests to all kinds
of workers, we'll split the configuration into a few bits:
* all `proxy_pass` settings
* all `location` definitions
* maps that define variables
* upstreams that point to the correct socket(s) with the correct settings
* settings for private access
* connection optimizations

Some of these go into `/etc/nginx/conf.d` because they are part of the
configuration of nginx itself, others go into `/etc/nginx/snippets` because we
need to include them several times in different places.
# Optimizations
In the quest for speed, we are going to tweak several settings in nginx. To
keep things manageable, most of those tweaks go into separate configuration
files that are either included automatically (those under `/etc/nginx/conf.d`)
or explicitly where we need them (those under `/etc/nginx/snippets`).

Let's start with a few settings that affect nginx as a whole. Edit these
options in `/etc/nginx/nginx.conf`:
```
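# main context (top level of nginx.conf)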
pcre_jit on;
worker_rlimit_nofile 8192;
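# inside the 'events' block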
worker_connections 4096;
multi_accept off;
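# inside the 'http' block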
gzip_comp_level 2;
gzip_types application/javascript application/json application/x-javascript application/xml application/xml+rss image/svg+xml text/css text/javascript text/plain text/xml;
gzip_min_length 1000;
gzip_disable "MSIE [1-6]\.";
```
We're going to use lots of regular expressions in our config, and `pcre_jit on`
speeds those up considerably. Each worker process may open 8192 files and
handle 4096 simultaneous connections instead of the default 768. With
`multi_accept off` a worker accepts only one new connection at a time; almost
every connection is then passed on with `proxy_pass`. We change
`gzip_comp_level` from 6 to 2, expand the list of content types to be gzipped,
and don't zip anything shorter than 1000 bytes, instead of the default 20.
MSIE can take a hike...
These are tweaks for the connections; save them in `/etc/nginx/conf.d/conn_optimize.conf`:
```
client_body_buffer_size 32m;
client_header_buffer_size 32k;
client_max_body_size 1g;
http2_max_concurrent_streams 128;
keepalive_timeout 65;
keepalive_requests 100;
large_client_header_buffers 4 16k;
server_names_hash_bucket_size 128;
tcp_nodelay on;
server_tokens off;
```
For every `proxy_pass` we want to configure several settings, and because we
don't want to repeat the same list of settings every time, we put all of them
in one snippet that we can include wherever we need it.

Create `/etc/nginx/snippets/proxy.conf` and put this in it:
```
proxy_connect_timeout 2s;
proxy_buffering off;
proxy_http_version 1.1;
proxy_read_timeout 3600s;
proxy_redirect off;
proxy_send_timeout 120s;
proxy_socket_keepalive on;
proxy_ssl_verify off;
proxy_set_header Accept-Encoding "";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
client_max_body_size 50M;
```
Every time we use a `proxy_pass`, we include this snippet.
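For example, a `location` that forwards login traffic to a `login` upstream
(upstreams are defined below) could include the snippet like this; this is only
a sketch, the real location blocks follow later in this document:
```
location ~ ^/_matrix/client/(api/v1|r0|v3)/login$ {
    include /etc/nginx/snippets/proxy.conf;
    proxy_pass http://login;
}
```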
# Maps
A map sets a variable based on, usually, another variable. One case where we
use this is determining the type of sync a client is doing. A normal sync,
which simply updates an existing session, is a rather lightweight operation. An
initial sync, meaning a full sync because the session is brand new, is not so
lightweight.

A normal sync can be recognised by the `since` parameter in the request: it
tells the server when the client's last sync was. If there is no `since`, we're
dealing with an initial sync.

We want to forward requests for normal syncs to the `normal_sync` workers, and
initial syncs to the `initial_sync` workers.

We decide which type of worker to forward the sync request to by looking at the
presence or absence of `since`: if it's there, it's a normal sync and we set the
variable `$sync` to `normal_sync`; if it's not, we set `$sync` to
`initial_sync`. The content of `since` is irrelevant to nginx.
This is what the map looks like:
```
map $arg_since $sync {
    default  normal_sync;
    ''       initial_sync;
}
```
We evaluate `$arg_since` to set `$sync`: `$arg_since` is nginx's built-in
`$arg_` prefix followed by `since`, the name of the query argument we're
interested in. See [the index of variables in
nginx](https://nginx.org/en/docs/varindex.html) for more variables we can use.

By default we set `$sync` to `normal_sync`, unless the argument `since` is
empty or absent; then we set it to `initial_sync`.
After this mapping, we forward the request to the correct worker like this:
```
proxy_pass http://$sync;
```
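Putting the map and the `proxy_pass` together, a sync `location` could look
roughly like this; this sketch only covers the `/sync` endpoint, the complete
location definitions follow in the Locations section:
```
location ~ ^/_matrix/client/(r0|v3)/sync$ {
    include /etc/nginx/snippets/proxy.conf;
    proxy_pass http://$sync;
}
```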
# Upstreams
In our configuration, nginx is not only a reverse proxy, it's also a load
balancer. Just like `haproxy`, it can forward requests to "servers" behind it.
Such a server is the inbound UNIX socket of a worker, and there can be several
of them in one group.

Let's start with a simple one: the `login` worker, which handles the login
process for clients.
```
login worker goes here...
```
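A minimal sketch of what that upstream could look like, assuming a single
login worker listening on an inbound UNIX socket (the socket path here is only
an example, following the naming scheme used below):
```
upstream login {
    # example socket path: use whatever the login worker actually listens on
    server unix:/run/matrix-synapse/inbound_login.sock max_fails=0;
    keepalive 10;
}
```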
Two of these upstreams are the sync workers: `normal_sync` and `initial_sync`,
both consisting of several "servers":
```
upstream initial_sync {
    hash $mxid_localpart consistent;
    server unix:/run/matrix-synapse/inbound_initial_sync1.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_initial_sync2.sock max_fails=0;
    keepalive 10;
}

upstream normal_sync {
    hash $mxid_localpart consistent;
    server unix:/run/matrix-synapse/inbound_normal_sync1.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_normal_sync2.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_normal_sync3.sock max_fails=0;
    keepalive 10;
}
```
The `hash` directive, keyed on `$mxid_localpart` (the local part of the user's
Matrix ID), makes sure requests from the same user are always forwarded to the
same worker.
# Locations
Now that we have defined the workers and worker pools, we have to forward the
right traffic to the right workers. The Synapse documentation about
[available worker
types](https://element-hq.github.io/synapse/latest/workers.html#available-worker-applications)
lists which endpoints a specific worker type can handle.

The docs say that the `generic_worker` can handle these synchronisation
endpoints:
```
# Sync requests
^/_matrix/client/(r0|v3)/sync$
^/_matrix/client/(api/v1|r0|v3)/events$
^/_matrix/client/(api/v1|r0|v3)/initialSync$
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
```
Now, if we had only one worker pool for synchronisation, named `syncworkers`,
without splitting those requests into normal and initial syncs, we would direct
all sync requests to that pool with this `location`:
```
location ~ ^(/_matrix/client/(r0|v3)/sync|/_matrix/client/(api/v1|r0|v3)/events|/_matrix/client/(api/v1|r0|v3)/initialSync|/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync)$ {
    proxy_pass http://syncworkers;
}