Corrected a configuration error in the handling of worker pools.

Hans van Zijst 2025-01-08 19:02:22 +01:00
parent 1c361a8092
commit 4f7b1b5468
Signed by: hans
GPG key ID: 43DBCC37BFDEFD72
3 changed files with 203 additions and 9 deletions


@@ -214,6 +214,8 @@ upstream login {
After this definition, we can forward traffic to `login`. What traffic to
forward is decided in the `location` statements, see further.
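For reference, this is the definition in question, as it appears in
[upstreams.conf](upstreams.conf):

```
upstream login {
    server unix:/run/matrix-synapse/inbound_login.sock max_fails=0;
    keepalive 10;
}
```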
## Synchronisation
A more complex example is the sync workers. Under [Maps](#Maps) we split sync
requests into two different types; those different types are handled by
different worker pools. In our case we have 2 workers for the initial_sync
@@ -240,6 +242,39 @@ The `hash` bit is to make sure that requests from one user are consistently
forwarded to the same worker. We filled the variable `$mxid_localpart` in the
maps.
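The maps themselves live outside this hunk. Purely as an illustration, a
minimal sketch of what they could look like, assuming an initial sync is
recognised by a missing `since` parameter and that `$mxid_localpart` is
derived from the `Authorization` header (both are assumptions, not part of
this commit):

```
# Hypothetical sketch of maps.conf, for illustration only.
# A sync request without a "since" parameter is an initial sync:
map $arg_since $sync {
    default normal_sync;
    ''      initial_sync;
}
# Derive a stable per-user hash key from the Bearer access token;
# fall back to the client address when there is no token:
map $http_authorization $mxid_localpart {
    default                         $remote_addr;
    "~Bearer syt_(?<user>[^_]+)_"   $user;
}
```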
## Federation
Something similar goes for the federation workers. Some requests need to go
to the same worker as all the other requests from the same IP-address, others
can go to any of these workers.
We define two upstreams with the same workers, only with different names and,
for one of them, explicit balancing by the client's IP-address:
```
upstream incoming_federation {
    server unix:/run/matrix-synapse/inbound_federation_reader1.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader2.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader3.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader4.sock max_fails=0;
    keepalive 10;
}

upstream federation_requests {
    hash $remote_addr consistent;
    server unix:/run/matrix-synapse/inbound_federation_reader1.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader2.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader3.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader4.sock max_fails=0;
    keepalive 10;
}
```
Same workers, different handling. See how we forward requests in the next
paragraph.
See [upstreams.conf](upstreams.conf) for a complete example.
# Locations
@@ -249,6 +284,8 @@ the right traffic to the right workers. The Synapse documentation about
types](https://element-hq.github.io/synapse/latest/workers.html#available-worker-applications)
lists which endpoints a specific worker type can handle.
## Login
Let's forward login requests to our login worker. The [documentation for the
generic_worker](https://element-hq.github.io/synapse/latest/workers.html#synapseappgeneric_worker)
says these endpoints are for registration and login:
@@ -272,6 +309,8 @@ location ~ ^(/_matrix/client/(api/v1|r0|v3|unstable)/login|/_matrix/client/(r0|v
}
```
## Synchronisation
The docs say that the `generic_worker` can handle these synchronisation
requests:
@@ -283,8 +322,9 @@ requests:
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
```
We forward those to our 2 worker pools, making sure the heavy initial syncs go
to the `initial_sync` pool and the normal ones to `normal_sync`. We use the
variable `$sync` for that, which we defined in maps.conf.
``` ```
# Normal/initial sync
@@ -306,6 +346,8 @@ location ~ ^(/_matrix/client/(api/v1|r0|v3)/initialSync|/_matrix/client/(api/v1|
}
```
## Media
The media worker is slightly different: some parts are public, but a few bits
are admin stuff. We split those, and limit the admin endpoints to the trusted
addresses we defined earlier:
@@ -325,3 +367,31 @@ location ~ ^/_synapse/admin/v1/(purge_)?(media(_cache)?|room|user|quarantine_med
}
```
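The public half of that split falls outside this hunk. A sketch of what it
might look like, assuming the usual public media prefixes and the `media`
upstream from upstreams.conf (the exact regex here is an assumption, not part
of this commit):

```
# Public media endpoints (illustrative sketch)
location ~ ^/_matrix/(media|client/v1/media|federation/v1/media)/ {
    include snippets/proxy.conf;
    proxy_pass http://media;
}
```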
# Federation
Federation is done by two types of workers: one pool for requests from our
server to the rest of the world, and one pool for everything coming in from the
outside world. Only the latter is relevant for nginx.
The documentation mentions two different types of federation:
* Federation requests
* Inbound federation transaction requests
The second is special, in that requests for that specific endpoint must be
balanced by IP-address. The "normal" federation requests can be sent to any
worker. We're sending all these requests to the same workers, but we make sure
to always send requests from one IP-address to the same worker:
```
# Federation readers
location ~ ^(/_matrix/federation/v1/event/|/_matrix/federation/v1/state/|/_matrix/federation/v1/state_ids/|/_matrix/federation/v1/backfill/|/_matrix/federation/v1/get_missing_events/|/_matrix/federation/v1/publicRooms|/_matrix/federation/v1/query/|/_matrix/federation/v1/make_join/|/_matrix/federation/v1/make_leave/|/_matrix/federation/(v1|v2)/send_join/|/_matrix/federation/(v1|v2)/send_leave/|/_matrix/federation/v1/make_knock/|/_matrix/federation/v1/send_knock/|/_matrix/federation/(v1|v2)/invite/|/_matrix/federation/v1/event_auth/|/_matrix/federation/v1/timestamp_to_event/|/_matrix/federation/v1/exchange_third_party_invite/|/_matrix/federation/v1/user/devices/|/_matrix/key/v2/query|/_matrix/federation/v1/hierarchy/) {
    include snippets/proxy.conf;
    proxy_pass http://incoming_federation;
}
# Inbound federation transactions
location ~ ^/_matrix/federation/v1/send/ {
    include snippets/proxy.conf;
    proxy_pass http://federation_requests;
}
```


@@ -68,19 +68,20 @@ location ~ ^(/_matrix/client/(api/v1|r0|v3|unstable)/login|/_matrix/client/(r0|v
    proxy_pass http://login;
}
# Normal/initial sync:
# Which upstream the request is passed to depends on the map "$sync"
location ~ ^/_matrix/client/(r0|v3)/sync$ {
    include snippets/proxy.conf;
    proxy_pass http://$sync;
}
# Normal sync:
# These endpoints are used for normal syncs
location ~ ^/_matrix/client/(api/v1|r0|v3)/events$ {
    include snippets/proxy.conf;
    proxy_pass http://normal_sync;
}
# Initial sync:
# These endpoints are used for initial syncs
location ~ ^/_matrix/client/(api/v1|r0|v3)/initialSync$ {
    include snippets/proxy.conf;
    proxy_pass http://initial_sync;
@@ -90,11 +91,18 @@ location ~ ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$ {
    proxy_pass http://initial_sync;
}
# Federation
# All the "normal" federation stuff:
location ~ ^(/_matrix/federation/v1/event/|/_matrix/federation/v1/state/|/_matrix/federation/v1/state_ids/|/_matrix/federation/v1/backfill/|/_matrix/federation/v1/get_missing_events/|/_matrix/federation/v1/publicRooms|/_matrix/federation/v1/query/|/_matrix/federation/v1/make_join/|/_matrix/federation/v1/make_leave/|/_matrix/federation/(v1|v2)/send_join/|/_matrix/federation/(v1|v2)/send_leave/|/_matrix/federation/v1/make_knock/|/_matrix/federation/v1/send_knock/|/_matrix/federation/(v1|v2)/invite/|/_matrix/federation/v1/event_auth/|/_matrix/federation/v1/timestamp_to_event/|/_matrix/federation/v1/exchange_third_party_invite/|/_matrix/federation/v1/user/devices/|/_matrix/key/v2/query|/_matrix/federation/v1/hierarchy/) {
    include snippets/proxy.conf;
    proxy_pass http://incoming_federation;
}
# Inbound federation transactions:
location ~ ^/_matrix/federation/v1/send/ {
    include snippets/proxy.conf;
    proxy_pass http://federation_requests;
}
# Main thread for all the rest
location / {


@@ -0,0 +1,116 @@
# Stream workers first, they are special. The documentation says:
# "each stream can only have a single writer"

# Account-data
upstream account_data {
    server unix:/run/matrix-synapse/inbound_accountdata.sock max_fails=0;
    keepalive 10;
}

# Userdir
upstream userdir {
    server unix:/run/matrix-synapse/inbound_userdir.sock max_fails=0;
    keepalive 10;
}

# Typing
upstream typing {
    server unix:/run/matrix-synapse/inbound_typing.sock max_fails=0;
    keepalive 10;
}

# To device
upstream todevice {
    server unix:/run/matrix-synapse/inbound_todevice.sock max_fails=0;
    keepalive 10;
}

# Receipts
upstream receipts {
    server unix:/run/matrix-synapse/inbound_receipts.sock max_fails=0;
    keepalive 10;
}

# Presence
upstream presence {
    server unix:/run/matrix-synapse/inbound_presence.sock max_fails=0;
    keepalive 10;
}

# Push rules
upstream push_rules {
    server unix:/run/matrix-synapse/inbound_push_rules.sock max_fails=0;
    keepalive 10;
}

# End of the stream workers, the following workers are of a "normal" type

# Media
# If more than one media worker is used, they *must* all run on the same machine
upstream media {
    server unix:/run/matrix-synapse/inbound_mediaworker.sock max_fails=0;
    keepalive 10;
}

# Synchronisation by clients:

# Normal sync. Not particularly heavy, but happens a lot
upstream normal_sync {
    # Use the username mapper result for hash key
    hash $mxid_localpart consistent;
    server unix:/run/matrix-synapse/inbound_normal_sync1.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_normal_sync2.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_normal_sync3.sock max_fails=0;
    keepalive 10;
}

# Initial sync
# Much heavier than a normal sync, but happens less often
upstream initial_sync {
    # Use the username mapper result for hash key
    hash $mxid_localpart consistent;
    server unix:/run/matrix-synapse/inbound_initial_sync1.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_initial_sync2.sock max_fails=0;
    keepalive 10;
}

# Login
upstream login {
    server unix:/run/matrix-synapse/inbound_login.sock max_fails=0;
    keepalive 10;
}

# Clients
upstream client {
    hash $mxid_localpart consistent;
    server unix:/run/matrix-synapse/inbound_clientworker1.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_clientworker2.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_clientworker3.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_clientworker4.sock max_fails=0;
    keepalive 10;
}

# Federation
# "Normal" federation, balanced round-robin over 4 workers.
upstream incoming_federation {
    server unix:/run/matrix-synapse/inbound_federation_reader1.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader2.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader3.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader4.sock max_fails=0;
    keepalive 10;
}

# Inbound federation requests need to be balanced by IP-address, but they can
# go to the same pool of workers as the other federation stuff.
upstream federation_requests {
    hash $remote_addr consistent;
    server unix:/run/matrix-synapse/inbound_federation_reader1.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader2.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader3.sock max_fails=0;
    server unix:/run/matrix-synapse/inbound_federation_reader4.sock max_fails=0;
    keepalive 10;
}

# Main thread for all the rest
upstream inbound_main {
    server unix:/run/matrix-synapse/inbound_main.sock max_fails=0;
    keepalive 10;
}