According to RUSTSEC-2023-0052 we need to upgrade rustls to 0.21
to get a fix for the issue. It may or may not affect Veloren,
since it is about client certificates, but in the absence of a
PoC it seems like a good idea to upgrade anyway, just to be sure.
webpki, which rustls 0.20 depends on, has gone unmaintained;
starting with 0.21, rustls depends on rustls-webpki instead,
which contains a fix for the issue. Since quinn 0.8 and 0.9 also
depend on rustls 0.20, we needed to upgrade it to 0.10 so that it
pulls in the rustls 0.21 we now use.
Because of that we decided to set the recv buffer to 512 kB via Linux kernel parameters.
We didn't have any problems on the official server, so we wanted to bake that into the code.
There is a calculator at https://www.switch.ch/network/tools/tcp_throughput/?mss=1460&rtt=80&loss=1e-06&bw=10&rtt2=500&win=1024&Calculate=Calculate
which says that at a 10 Mbit/s link speed with a 16 kB recv buffer and 500 ms RTT, we can only expect an effective data rate of about 0.24 Mbit/s.
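As a sanity check, effective throughput is capped by the bandwidth-delay product, roughly window / RTT:

    16 kB / 0.5 s  = 32 kB/s   ≈ 0.26 Mbit/s
    512 kB / 0.5 s = 1024 kB/s ≈ 8.4 Mbit/s

So a 16 kB buffer can never come close to 10 Mbit/s at 500 ms RTT (the calculator's 0.24 Mbit/s additionally accounts for loss and header overhead), while 512 kB can.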
Nothing mentioned increasing the send buffer, but it probably can't hurt on the server side either.
cat /proc/sys/net/core/wmem_default showed 212992
cat /proc/sys/net/core/rmem_default showed 212992
sysctl -w net.ipv4.tcp_wmem="4096 524288 4194304" was being used
(the default is net.ipv4.tcp_wmem="4096 16384 4194304").
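A minimal sketch of how baking this into code could look with socket2 (illustrative only, not our actual code; the kernel may still clamp these values to net.core.rmem_max/wmem_max):

```rust
use socket2::{Domain, Protocol, Socket, Type};

// Set the buffer sizes on the socket itself instead of relying on
// sysctl defaults.
fn tcp_socket_with_buffers() -> std::io::Result<Socket> {
    let socket = Socket::new(Domain::IPV4, Type::STREAM, Some(Protocol::TCP))?;
    socket.set_recv_buffer_size(512 * 1024)?; // 512 kB, as used on the official server
    socket.set_send_buffer_size(512 * 1024)?; // probably can't hurt on the server side
    Ok(socket)
}
```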
First we thought about hashing it together with the username, but we don't have the username. We hope that removing 50% of the information is enough to identify two addresses as equal or different without knowing the exact address.
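A rough illustration of the idea (the helper is hypothetical, not our actual code): hash only the first half of the address bytes, so equal addresses still hash equal but the full address is unrecoverable.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::net::IpAddr;

// Drop 50% of the address information before hashing: two connections
// from the same source still compare equal, but the exact address
// cannot be reconstructed from the hash input.
fn anonymized_hash(addr: IpAddr) -> u64 {
    let mut hasher = DefaultHasher::new();
    match addr {
        IpAddr::V4(v4) => v4.octets()[..2].hash(&mut hasher),
        IpAddr::V6(v6) => v6.octets()[..8].hash(&mut hasher),
    }
    hasher.finish()
}
```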
The old code didn't account for the case where the Scheduler got dropped but the Participant was kept around.
Shutdown behavior is quite easy now: bParticipant sends a oneshot, and when that arrives we stop. When the Participant gets dropped we also stop, as there would be no sense in keeping the report_mgr running.
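The pattern is roughly the following sketch (names invented; the real report_mgr does actual work):

```rust
use std::time::Duration;
use tokio::sync::oneshot;

// Stop on the explicit oneshot from bParticipant, and also when the
// Participant is dropped: dropping the Sender closes the channel, which
// resolves the Receiver with an error and breaks the loop too.
async fn report_mgr(mut shutdown_rx: oneshot::Receiver<()>) {
    loop {
        tokio::select! {
            // fires on send(()) AND on sender drop
            _ = &mut shutdown_rx => break,
            _ = tokio::time::sleep(Duration::from_secs(1)) => {
                // periodic reporting work would go here
            },
        }
    }
}
```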
This will ask the bparticipant for a list of all channels and their respective connection arguments.
With those one could probably reach the remote side.
The data is gathered by the scheduler (or by the channel, for the listener code).
It requires taking some read locks, so we shouldn't abuse that function call.
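The shape of the call is roughly this (types invented for illustration; the real data lives in bparticipant/scheduler):

```rust
use std::net::SocketAddr;
use tokio::sync::RwLock;

// Listing every channel's connect arguments takes a read lock on shared
// state, which is why the call shouldn't be spammed.
pub struct ChannelList {
    channels: RwLock<Vec<SocketAddr>>,
}

impl ChannelList {
    pub async fn connect_args(&self) -> Vec<SocketAddr> {
        self.channels.read().await.clone()
    }
}
```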
In bparticipant we have a new manager that also properly shuts down, as the Participant holds the sender to the respective receiver.
The sender is always dropped: inside the Mutex via disconnect, and outside via Drop (we need two Options, as otherwise we would implicitly create a runtime inside an async context o.O).
(I also didn't like the alternative of just overwriting the sender with a fake one; I want a proper Option that can be taken.)
The code might also come in handy in the future when we implement an auto-reconnect feature in the bparticipant.
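The take-based pattern looks roughly like this (simplified to a single Option; as said above, the real code needs two of them plus a Mutex so Drop never enters the runtime from an async context):

```rust
use tokio::sync::oneshot;

struct Participant {
    shutdown_s: Option<oneshot::Sender<()>>,
}

impl Participant {
    fn disconnect(&mut self) {
        if let Some(sender) = self.shutdown_s.take() {
            let _ = sender.send(()); // explicit shutdown path
        }
    }
}

impl Drop for Participant {
    fn drop(&mut self) {
        // if disconnect() never ran, dropping the Sender here still
        // closes the channel and stops the manager
        drop(self.shutdown_s.take());
    }
}
```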
Note: maybe upgrading tokio helps with the connection issues; there is some mio-related stuff in https://github.com/tokio-rs/tokio/releases/tag/tokio-1.18.0
Note: quic also still has the old read, but it's more complicated, so I only want to change it once it's tested.
For that we need to create a socket2 socket and set set_only_v6 to true.
We need to call set_nonblocking for tokio to work, and we need to set reuseaddr for it to work on Linux systems.
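Sketched for a TCP listener (error handling trimmed; a plain std listener can't set IPV6_V6ONLY, hence the detour through socket2):

```rust
use socket2::{Domain, Protocol, Socket, Type};
use std::net::SocketAddr;
use tokio::net::TcpListener;

fn listen_v6_only(addr: SocketAddr) -> std::io::Result<TcpListener> {
    let socket = Socket::new(Domain::IPV6, Type::STREAM, Some(Protocol::TCP))?;
    socket.set_only_v6(true)?; // don't shadow the separate v4 listener
    socket.set_reuse_address(true)?; // needed for rebinds on Linux
    socket.set_nonblocking(true)?; // required before handing it to tokio
    socket.bind(&addr.into())?;
    socket.listen(128)?;
    TcpListener::from_std(socket.into())
}
```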
We cannot update the following dependencies:
- vek: Sharp's SIMD changes aren't upstream.
- tracing-subscriber: MakeWriter was adjusted and I was too lazy to fiddle with the lifetimes.
- refinery, rusqlite: we have a custom refinery version which is incompatible with newer rusqlite.
- egui + egui_winit + egui_wgpu_backend: I tried it in this commit, but it turned out they depend on wgpu, which we can't update.
- wgpu: can't update because the new version doesn't support DX11.
Got quinn updated, which now requires some dependencies to be explicit.
The security model has been updated to reflect this change (for example,
moderators cannot revert a ban by an administrator). Ban history is
also now recorded in the ban file, and much more information about the
ban is stored (whitelists and administrators also have extra
information).
To support the new information without losing existing data,
this commit also introduces a new migration path for editable settings
(both from legacy to the new format, and between versions). Examples
of how to do this correctly, and migrate to new versions of a settings
file, are in the settings/ subdirectory.
As part of this effort, editable settings have been revamped to
guarantee atomic saves (due to the increased amount of information in
each file), some latent bugs in networking were fixed, and server-cli
has been updated to go through StructOpt for both calls through TUI
and argv, greatly simplifying parsing logic.
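A sketch of the shared-parsing idea (the commands here are invented; only the StructOpt plumbing is the point): both argv and TUI input go through one definition.

```rust
use structopt::StructOpt;

#[derive(StructOpt)]
enum Command {
    /// Ban a player by alias
    Ban { alias: String, reason: String },
    /// Remove a ban
    Unban { alias: String },
}

// TUI lines are parsed with the same definition as argv;
// from_iter_safe expects an argv-style iterator, so we prepend a
// binary name.
fn parse_tui_line(line: &str) -> Result<Command, structopt::clap::Error> {
    Command::from_iter_safe(std::iter::once("server-cli").chain(line.split_whitespace()))
}
```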
When the local SendProtocol opens a Stream, it sends an empty message to QuicDrain, which then knows it's time to open a quic stream.
It opens a QuicStream and sends its SID over to the remote side.
The RecvStream is sent to the local QuicSink.
RemoteRecv notices that a new BiStream was opened and reads its Sid. It then starts listening on it, while the remote main gets the information that a stream was opened and notifies the frontend.
In the participant, remote Recv is synced with remote Send (without triggering an empty message!).
The RemoteRecv Sink sends the sendstream to the RemoteSend Drain, where it is used when the first message is sent on this stream.
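In raw quinn terms the open flow is roughly the following (heavily simplified; Sid is a bare u64 here, and the real code goes through the protocol traits instead):

```rust
use quinn::Connection;

// QuicDrain got the "empty message" and opens a real quic stream,
// sending the Sid as the first bytes on it.
async fn drain_open_stream(conn: &Connection, sid: u64) -> Result<(), Box<dyn std::error::Error>> {
    let (mut send, _recv) = conn.open_bi().await?;
    send.write_all(&sid.to_le_bytes()).await?;
    Ok(())
}

// RemoteRecv notices the new BiStream and reads its Sid before
// listening on it.
async fn remote_accept_stream(conn: &Connection) -> Result<u64, Box<dyn std::error::Error>> {
    let (_send, mut recv) = conn.accept_bi().await?;
    let mut sid = [0u8; 8];
    recv.read_exact(&mut sid).await?;
    Ok(u64::from_le_bytes(sid))
}
```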
- locally we open a stream; our local Drain sends OpenStream
- the remote Sink learns of this and notifies the remote Drain
- the remote side sends a message
- the local Sink does not know about the Stream, as there is no way (and there CAN'T be one) to notify the local Sink from the local Drain (it could introduce race conditions).
One possible solution was for the remote Drain to copy the OpenStream msg onto the Quic::stream before the first data is sent. This would work, but is complicated.
Instead we now just mark such streams as "potentially open" and listen for the first DataHeader to get their SID.
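The bookkeeping is roughly this (types invented): a stream the remote Drain opened stays "potentially open" locally until the first DataHeader confirms its Sid.

```rust
use std::collections::HashMap;

enum StreamState {
    PotentiallyOpen, // announced remotely, no data seen locally yet
    Open,            // first DataHeader consumed, Sid known
}

struct LocalSink {
    streams: HashMap<u64, StreamState>, // keyed by Sid
}

impl LocalSink {
    fn on_data_header(&mut self, sid: u64) {
        // upgrading on the first DataHeader avoids any Drain -> Sink
        // notification, which could introduce race conditions
        self.streams.insert(sid, StreamState::Open);
    }
}
```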
Add support for unreliable messages in the quic protocol, plus benchmarks.
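Presumably the unreliable path maps onto quinn's datagram extension, roughly like this (sketch only, not the actual wiring):

```rust
use bytes::Bytes;
use quinn::Connection;

// Datagrams skip retransmission and ordering entirely; they may be
// dropped, and their size is limited by the path MTU.
fn send_unreliable(conn: &Connection, msg: Bytes) -> Result<(), quinn::SendDatagramError> {
    conn.send_datagram(msg)
}

async fn recv_unreliable(conn: &Connection) -> Result<Bytes, quinn::ConnectionError> {
    conn.read_datagram().await
}
```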
It's available to `api` and `metrics` and can be used to slow down message sending in veloren.
It uses a tokio::watch for now, as I plan to have a watch job in the scheduler that recalculates prios on change.
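The watch wiring could look like this (illustrative; the initial value and units are made up):

```rust
use tokio::sync::watch;

// api/metrics publish a new bandwidth cap through the Sender; the
// scheduler-side job recalculates prios whenever the value changes.
async fn prio_recalc_job(mut rx: watch::Receiver<u64>) {
    while rx.changed().await.is_ok() {
        let bandwidth = *rx.borrow();
        // recalculate stream priorities against the new cap here
        let _ = bandwidth;
    }
}

fn setup() -> (watch::Sender<u64>, watch::Receiver<u64>) {
    watch::channel(1_000_000) // bytes per second
}
```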
Also cleaning up participant metrics after a disconnect