Mostly completed nginx documentation.

parent 712590af69
commit 47b21fb388

@@ -106,7 +106,36 @@ proxy_set_header Upgrade $http_upgrade;
client_max_body_size 50M;
```

Every time we use `proxy_pass`, we include this snippet. There are two more
things we might set: trusted locations that can use the admin endpoints, and a
dedicated DNS recursor. We include `snippets/private.conf` in the forwards to
admin endpoints, so that the entire Internet can't play with them.

The dedicated nameserver is something you really want, because synchronising a
large room can easily result in 100,000+ DNS requests. You'll hit flood
protection on most servers if you do that.

List the addresses from which you want to allow admin access in
`snippets/private.conf`:

```
allow 127.0.0.1;
allow ::1;
allow 12.23.45.78;
allow 87.65.43.21;
allow dead:beef::/48;
allow 2a10:1234:abcd::1;
deny all;
satisfy all;
```
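
Just to show how this snippet gets used (the media section below has a real
instance): an admin `location` pulls it in next to the proxy settings. This is
only a sketch; `main_process` is a hypothetical upstream name, not something
defined in this documentation.

```
# Sketch: restrict an admin endpoint to the addresses allowed in
# snippets/private.conf. "main_process" is a placeholder upstream.
location ~ ^/_synapse/admin/ {
    include snippets/private.conf;  # trusted addresses only
    include snippets/proxy.conf;    # the usual proxy headers
    proxy_pass http://main_process;
}
```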

Of course, substitute these random addresses with the ones you trust. The
dedicated nameserver (if you have one) should be configured in
`conf.d/resolver.conf`:

```
resolver [::1] 127.0.0.1 valid=60;
resolver_timeout 10s;
```

# Maps {#maps}

@@ -209,6 +238,30 @@ the right traffic to the right workers. The Synapse documentation about
types](https://element-hq.github.io/synapse/latest/workers.html#available-worker-applications)
lists which endpoints a specific worker type can handle.

Let's forward login requests to our login worker. The [documentation for the
generic_worker](https://element-hq.github.io/synapse/latest/workers.html#synapseappgeneric_worker)
says these endpoints are for registration and login:

```
# Registration/login requests
^/_matrix/client/(api/v1|r0|v3|unstable)/login$
^/_matrix/client/(r0|v3|unstable)/register$
^/_matrix/client/(r0|v3|unstable)/register/available$
^/_matrix/client/v1/register/m.login.registration_token/validity$
^/_matrix/client/(r0|v3|unstable)/password_policy$
```

We forward those to our worker with this `location` definition, using the
proxy settings from `snippets/proxy.conf` we defined earlier:

```
location ~ ^(/_matrix/client/(api/v1|r0|v3|unstable)/login|/_matrix/client/(r0|v3|unstable)/register|/_matrix/client/(r0|v3|unstable)/register/available|/_matrix/client/v1/register/m.login.registration_token/validity|/_matrix/client/(r0|v3|unstable)/password_policy)$ {
    include snippets/proxy.conf;
    proxy_pass http://login;
}
```
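
For this to work, `login` must exist as an upstream pool that nginx can pass
to. How that pool looks depends on how you run your workers; a minimal sketch,
assuming a single login worker listening on a hypothetical local port 8083,
might be:

```
# Hypothetical upstream for the login worker; point it at wherever your
# generic_worker actually listens (TCP port or UNIX socket).
upstream login {
    server 127.0.0.1:8083;
}
```

The other pools referenced below (`normal_sync`, `initial_sync` and `media`)
are defined the same way.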

The docs say that the `generic_worker` can handle these synchronisation
requests:

@@ -220,12 +273,45 @@ requests:
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
```

We forward those to our two worker pools, `normal_sync` and `initial_sync`,
using the `$sync` variable we defined in `maps.conf`:

```
# Normal/initial sync
location ~ ^/_matrix/client/(r0|v3)/sync$ {
    include snippets/proxy.conf;
    proxy_pass http://$sync;
}

# Normal sync
location ~ ^/_matrix/client/(api/v1|r0|v3)/events$ {
    include snippets/proxy.conf;
    proxy_pass http://normal_sync;
}

# Initial sync
location ~ ^(/_matrix/client/(api/v1|r0|v3)/initialSync|/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync)$ {
    include snippets/proxy.conf;
    proxy_pass http://initial_sync;
}
```
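
The `$sync` variable is what routes a plain `/sync` request to either the
initial or the normal pool. Its definition lives in `maps.conf` (see
[Maps](#maps)); as a rough sketch of the idea, assuming the decision is made on
the `since` query parameter (absent on an initial sync), it could look like
this:

```
# Sketch only: pick a sync pool per request. A /sync call without a
# "since" parameter is an initial sync; everything else is incremental.
# Check your own maps.conf for the authoritative definition.
map $arg_since $sync {
    default normal_sync;
    ''      initial_sync;
}
```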

The media worker is slightly different: some parts are public, but a few bits
are admin stuff. We split those, and limit the admin endpoints to the trusted
addresses we defined earlier:

```
# Media, public
location ~* ^(/_matrix/((client|federation)/[^/]+/)media/|/_matrix/media/v3/upload/) {
    include snippets/proxy.conf;
    proxy_pass http://media;
}

# Media, admin
location ~ ^/_synapse/admin/v1/(purge_)?(media(_cache)?|room|user|quarantine_media|users)/[\s\S]+|media$ {
    include snippets/private.conf;
    include snippets/proxy.conf;
    proxy_pass http://media;
}
```