The security model has been updated to reflect this change (for example,
moderators cannot revert a ban by an administrator). Ban history is
also now recorded in the ban file, and much more information about the
ban is stored (whitelists and administrators also have extra
information).
To support the new data without losing important information,
this commit also introduces a migration path for editable settings
(both from the legacy format to the new one, and between versions). Examples
of how to do this correctly, and how to migrate a settings file to new
versions, are in the settings/ subdirectory.
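A hypothetical sketch of the versioned-migration idea; the type names, fields, and the use of serde_json are assumptions, not the actual code in the settings/ subdirectory:

```rust
// Each on-disk version gets its own struct; loading upgrades legacy entries
// instead of discarding them.
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct BanRecordV0 {
    username: String, // legacy format: only a name, no reason or history
}

#[derive(Serialize, Deserialize)]
struct BanRecordV1 {
    username: String,
    reason: String,       // extra information stored per ban
    history: Vec<String>, // newly recorded ban history
}

impl From<BanRecordV0> for BanRecordV1 {
    fn from(old: BanRecordV0) -> Self {
        BanRecordV1 {
            username: old.username,
            reason: String::new(), // unknown for legacy entries
            history: Vec::new(),
        }
    }
}

/// Load a record, migrating legacy entries instead of losing them.
fn load(raw: &str) -> Result<BanRecordV1, serde_json::Error> {
    serde_json::from_str::<BanRecordV1>(raw)
        .or_else(|_| serde_json::from_str::<BanRecordV0>(raw).map(BanRecordV1::from))
}
```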
As part of this effort, editable settings have been revamped to
guarantee atomic saves (due to the increased amount of information in
each file), some latent bugs in networking were fixed, and server-cli
has been updated to route both TUI commands and argv through StructOpt,
greatly simplifying the parsing logic.
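As a rough illustration of feeding both argv and the TUI console line through one StructOpt definition; the command set below is made up and not the real server-cli command list:

```rust
use structopt::StructOpt;

#[derive(Debug, StructOpt)]
enum Command {
    /// Ban a player by name (illustrative only)
    Ban { username: String },
    /// Shut down the server (illustrative only)
    Shutdown,
}

fn main() {
    // argv path: `server-cli ban somebody`
    let from_argv = Command::from_args();
    println!("argv command: {:?}", from_argv);

    // TUI path: the exact same parser, fed a line typed into the console,
    // so both entry points share one set of parsing rules.
    let line = "ban somebody";
    let from_tui =
        Command::from_iter_safe(std::iter::once("server-cli").chain(line.split_whitespace()));
    println!("tui command: {:?}", from_tui);
}
```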
When the local SendProtocol opens a Stream, it sends an empty message to QuicDrain, which then knows that it is time to open a QUIC stream.
It opens a QuicStream and sends its SID over to the remote side.
The RecvStream is sent to the local QuicSink.
RemoteRecv notices that a new BiStream was opened and reads its Sid. It then starts listening on it, while the remote main task learns that a stream was opened and notifies the frontend.
In the participant, remote Recv is synced with remote Send (without triggering an empty message!).
The RemoteRecv Sink sends the SendStream to the RemoteSend Drain, where it is used when the first message is sent on this stream.
- locally we open a stream; our local Drain sends OpenStream
- the remote Sink learns about this and notifies the remote Drain
- the remote side sends a message
- the local Sink does not know about the Stream, as there is no (and CAN'T be a) way to notify the local Sink from the local Drain (it could introduce race conditions).
One possible solution was for the remote Drain to copy the OpenStream msg onto the Quic::stream before the first data is sent. This would work, but it is complicated.
Instead we now just mark such streams as "potentially open" and listen for the first DataHeader to get its SID.
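A rough sketch of that "potentially open" bookkeeping; all names here (Sid, QuicStreamId, DataHeader, QuicSink) are placeholders, not the real protocol types:

```rust
use std::collections::HashMap;

type Sid = u64;
type QuicStreamId = u64;

struct DataHeader {
    sid: Sid,
    mid: u64,
}

#[derive(Default)]
struct QuicSink {
    // Streams whose Sid we already learned.
    known: HashMap<Sid, QuicStreamId>,
    // Streams opened by the remote Drain that we only know by their QUIC id;
    // the local Drain cannot (and must not) notify us about them, so they
    // stay "potentially open" until their first DataHeader arrives.
    potentially_open: Vec<QuicStreamId>,
}

impl QuicSink {
    fn on_remote_bi_stream(&mut self, id: QuicStreamId) {
        self.potentially_open.push(id);
    }

    fn on_first_data_header(&mut self, id: QuicStreamId, header: DataHeader) {
        // The first DataHeader carries the Sid, which lets us promote the
        // stream from "potentially open" to a fully known stream.
        self.potentially_open.retain(|i| *i != id);
        self.known.insert(header.sid, id);
        let _ = header.mid; // rest of the header handled by the normal path
    }
}
```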
add support for unreliable messages in the QUIC protocol, plus benchmarks
If it's Closed, it looks like the TCP connection got dropped/cut off (e.g. by the OS or Wi-Fi).
If it's Violated, we know for sure that the cause is messages being sent/received in the wrong way.
It's available to `api` and `metrics` and can be used to slow down msg sending in veloren.
It uses a tokio::watch for now, as I plan to have a watch job in the scheduler that recalculates the prio on change.
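A minimal sketch of that tokio::sync::watch pattern: the scheduler publishes a recalculated prio and an interested task reacts to changes. The u64 payload and the task split are assumptions.

```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (prio_tx, mut prio_rx) = watch::channel(0u64);

    // The side that would slow down msg sending whenever the prio changes.
    let listener = tokio::spawn(async move {
        while prio_rx.changed().await.is_ok() {
            let prio = *prio_rx.borrow();
            println!("prio changed to {prio}");
        }
    });

    // The scheduler side: recalculate and publish a new prio.
    prio_tx.send(42).expect("receiver still alive");
    drop(prio_tx); // dropping the sender ends the listener loop

    listener.await.unwrap();
}
```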
Also cleaning up participant metrics after a disconnect
When a stream is opened, we search for the best currently available channel.
The stream is then kept on that channel.
The rest of the algorithms were adjusted so that they now respect this rule.
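A sketch of that rule. The bandwidth-based scoring is an assumption; only "pick a channel once and keep the stream on it" comes from the text above.

```rust
use std::collections::HashMap;

type Cid = u64;
type Sid = u64;

struct Channel {
    cid: Cid,
    bandwidth: u64, // stand-in for whatever "best" means in practice
}

#[derive(Default)]
struct Participant {
    channels: Vec<Channel>,
    stream_to_channel: HashMap<Sid, Cid>,
}

impl Participant {
    fn open_stream(&mut self, sid: Sid) -> Option<Cid> {
        // Search for the best channel among those currently available...
        let best = self.channels.iter().max_by_key(|c| c.bandwidth)?.cid;
        // ...and keep the stream on that channel from now on; later sends
        // look up `stream_to_channel` instead of re-deciding.
        self.stream_to_channel.insert(sid, best);
        Some(best)
    }
}
```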
Improved a HashMap over Pids to be Vec-based. The same structure is also used for the Sid -> Cid relation, which is more performance-critical.
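For illustration, a Vec-based Sid -> Cid map could look like the following, assuming Sids are handed out sequentially so a plain index lookup is enough (that assumption is mine; the real layout may differ):

```rust
type Cid = u64;
type Sid = u64;

#[derive(Default)]
struct SidCidMap {
    // index = Sid, value = Cid (None once the stream is closed)
    inner: Vec<Option<Cid>>,
}

impl SidCidMap {
    fn insert(&mut self, sid: Sid, cid: Cid) {
        let idx = sid as usize;
        if idx >= self.inner.len() {
            self.inner.resize(idx + 1, None);
        }
        self.inner[idx] = Some(cid);
    }

    fn get(&self, sid: Sid) -> Option<Cid> {
        // No hashing on this hot path, just an index lookup.
        self.inner.get(sid as usize).copied().flatten()
    }
}
```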
WARN: our current send()? error handling allows some close_stream messages to get lost.
Instead of keeping the Runtime around and manually spawning a task on `drop`, this task is now spawned at start and waits to be triggered.
The `drop` methods then wait for completion, UNLESS they are in an async context; in that case they MUST NOT BLOCK (deadlock potential), so they defer the work to the Runtime and HOPE that the runtime exists long enough.
This gets rid of the weird `block_in_place`, which is only accessible with `rt-multi-threaded` and has some disadvantages.
We also no longer require the runtime to be active all the time, though it is needed for a clean shutdown.
- Timeout for Participant::drop: it will stop eventually
- Detect a tokio runtime in Participant::drop and no longer use std::sleep in that case, as it could hang the thread that is actually doing the shutdown work and deadlock (see the sketch below)
- Parallel shutdown in the Scheduler: instead of a slow shutdown locking up everything, we can now shut down participants in parallel; this should dramatically reduce the `WARN` about a part taking long for shutdown
- the last digit of the version is now compatible: 0.6.0 will connect to 0.6.1
- the TCP DATA frames no longer contain a START field, as it's not needed
- the TCP OPENSTREAM frames now contain a BANDWIDTH field
- MID is not protocol-internal
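A minimal sketch of the runtime detection in Participant::drop mentioned in the shutdown list above; the channel layout and names are assumptions, not the actual Participant internals:

```rust
use tokio::runtime::Handle;
use tokio::sync::oneshot;

struct Participant {
    // Triggers the shutdown task that was spawned when the participant
    // started; the inner sender is used by that task to report completion.
    shutdown_tx: Option<oneshot::Sender<oneshot::Sender<()>>>,
}

impl Drop for Participant {
    fn drop(&mut self) {
        let Some(trigger) = self.shutdown_tx.take() else { return };
        let (done_tx, done_rx) = oneshot::channel();
        // Tell the pre-spawned shutdown task to run and report back.
        let _ = trigger.send(done_tx);

        if Handle::try_current().is_ok() {
            // We are inside a tokio runtime: blocking here could deadlock, so
            // we only trigger the work and hope the runtime lives long enough.
        } else {
            // Outside any runtime it is safe to block until shutdown finished.
            let _ = done_rx.blocking_recv();
        }
    }
}
```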
Update network
- update API with Bandwidth
Update veloren
- introduce a better runtime and make the IO-bound parts `async`
- remove `uvth` and instead use `tokio::runtime::Runtime::spawn_blocking`
- remove futures_execute from client and server; use tokio::runtime::Runtime instead
- give threads a name
- a channel was stale and wasn't shut down properly, AS WELL AS
- the incoming msg pipe was bounded, so it could fill up
To mitigate this we
a) unbounded the pipe
b) stopped spamming the log in the no-channel case
c) instead of ever reaching the "no channel" state, we actually shut down the participant
d) when send_mgr is closed, it will no longer be able to SEND on streams
- completely switch to Bytes, even in the api; speeds up TCP by a factor of 2
- improve benchmarks
- speed up mpsc metrics
- gracefully handle shutdown by interpreting Ok(0) as the tokio tcpstream being closed (see the sketch after this list)
- fix a hot loop in participants by adding `Some(n)`, fixing endless hanging
- fix a closing bug by closing streams after `recv_mgr` is shut down, even if no shutdown is triggered locally
- fix prometheus
- no longer throw when a `Stream` is dropped while the participant still receives a msg for it
- fix the bandwidth handling; TCP network send speed is up to 1.5GiB/s while recv is 150MiB/s
- add documentation
- temporarily require rt-multi-threaded in the client for tokio, to not fail cargo check
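A minimal sketch of the Ok(0)-means-closed rule from the list above, using tokio's TcpStream; the buffer handling is simplified compared to the real read manager:

```rust
use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;

async fn read_loop(mut stream: TcpStream) -> std::io::Result<()> {
    let mut buf = vec![0u8; 8192];
    loop {
        match stream.read(&mut buf).await {
            // Ok(0) is not an error: the peer closed the connection, so we
            // shut down gracefully instead of spinning or reporting a failure.
            Ok(0) => return Ok(()),
            Ok(n) => {
                // hand `&buf[..n]` over to the protocol layer here
                let _ = &buf[..n];
            }
            Err(e) => return Err(e),
        }
    }
}
```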
This is probably stable; I tested it for over an hour.
After that came some optimisations in the prio manager
and the implementation of proper bandwidth handling.
Speed is up to 2GB/s write and 150MB/s recv on a single core.
sync add documentation
- better logging in network
- we now notify the send side in the participant of what happened in recv
- works with veloren master servers
- works in singleplayer, using an actual mid
- add `mpsc` to the whole stack, incl. tests
- speed up internal read/write with `Bytes` crate
- use `prometheus-hyper` for metrics
- use a metrics cache
- Implement an async non-io protocol crate:
a) no tokio / no channels
b) I/O is based on the Sink/Drain abstraction (see the sketch after this list)
c) different Protocols can have a different Drain type
This allows MPSC to send its content without splitting up messages at all!
It allows UDP to have internal extra frames to take care of security.
It allows better abstraction for tests.
It allows benchmarks on the mpsc variant.
Custom handshakes make something like a QUIC protocol easy to support.
- reduce the participant managers to 4: channel creation, send, recv and shutdown.
Keeping the `mut data` in one manager removes the need for all the RwLocks,
reducing complexity and parallel-access problems.
- more strategic participant shutdown: first stop sending, then wait for the remote side to notice the recv stop, then the remote side stops sending, then the local side can stop recv.
- metrics are internally abstracted to fit the protocol and network layers
- in this commit the network/protocol tests work and the network tests somewhat work; veloren compiles but does not work
- handshake compatible with async_std
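A rough sketch of the Sink/Drain idea from the list above: the protocol crate does no I/O itself, it only pushes frames into a Drain and pulls them out of a Sink, and every backend picks its own DataFormat. The trait shapes, the Frame type, and the use of async_trait are assumptions, not the real definitions.

```rust
use async_trait::async_trait;

enum Frame {
    OpenStream { sid: u64 },
    Data { sid: u64, data: Vec<u8> },
}

#[async_trait]
trait UnreliableDrain: Send {
    type DataFormat;
    async fn send(&mut self, data: Self::DataFormat) -> Result<(), ()>;
}

#[async_trait]
trait UnreliableSink: Send {
    type DataFormat;
    async fn recv(&mut self) -> Result<Self::DataFormat, ()>;
}

// The I/O side (outside the protocol crate) provides the concrete Drain: an
// MPSC-backed one can move whole `Frame`s without splitting messages, while
// a TCP-backed one would use `DataFormat = Vec<u8>` (serialized bytes).
struct MpscDrain(tokio::sync::mpsc::Sender<Frame>);

#[async_trait]
impl UnreliableDrain for MpscDrain {
    type DataFormat = Frame;

    async fn send(&mut self, data: Frame) -> Result<(), ()> {
        self.0.send(data).await.map_err(|_| ())
    }
}
```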
Switch to the `tokio` and `async_channel` crates.
I wanted to use tokio alone, but it doesn't feature Sender::close(), thus I included async_channel.
Got rid of `futures` and only need `futures_core` and `futures_util`.
Tokio does not support `Stream` and `StreamExt`, so for now I need to use `tokio-stream`; I think this will go into `std` in the future.
Created `b2b_close_stream_opened_sender_r`, as the shutdown procedure does not need a copy of a Sender, it just needs to stop it.
Various adjustments, e.g. for `select!`, which now requires a `&mut` for oneshots.
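A minimal example of why `async_channel` was pulled in: its Sender exposes close(), which tokio's mpsc Sender does not. The payload type and the `#[tokio::main]` harness are only for illustration.

```rust
use async_channel::unbounded;

#[tokio::main]
async fn main() {
    let (tx, rx) = unbounded::<u32>();
    tx.send(1).await.unwrap();

    // Closing from the sender side wakes every receiver; already-buffered
    // items can still be drained before recv() starts returning Err.
    tx.close();

    assert_eq!(rx.recv().await.unwrap(), 1); // buffered item still delivered
    assert!(rx.recv().await.is_err());       // channel is now closed
}
```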
Future things to do:
- Use some better signalling than oneshot<()> in some cases.
- Use a Watch for the Prio propagation (and implement it, of course).
- Use bounded channels in order to improve performance.
- Adjust the test code.
bring the tests to a working state
* Remove tweak feature
* Remove const-tweaker
* Update tiny_http
* Update bitvec to 0.21.0
* Downgrade euc to avoid conflict with vek 0.12.0
* Require exactly vek 0.12.0
* Update all other dependencies automatically based on these changes
* Update gilrs to latest at the request of Ada Lovegirls
* Update meshing benchmarks to new criterion API
But I have the feeling that maybe something with the channel is wrong, which leads to this behavior (or maybe I made a copy somewhere, though I doubt this).
Again, I'm not sure if this is a fix, but I think it doesn't hurt.
And the guard generated here in the if let Some() will live through the WHOLE statement, which is definitely critical.
The fix is quite easy: we just move it out into a separate line, as in the sketch below.
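A minimal sketch of that guard-lifetime pitfall and the fix; the Mutex<HashMap> is illustrative, not the actual data structure involved:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

fn broken(map: &Mutex<HashMap<u64, String>>) {
    // The temporary MutexGuard created in the scrutinee lives until the end
    // of the WHOLE `if let` statement, so the lock is still held while the
    // body below runs.
    if let Some(name) = map.lock().unwrap().get(&1).cloned() {
        // ... long-running work while the Mutex is still locked ...
        println!("{name}");
    }
}

fn fixed(map: &Mutex<HashMap<u64, String>>) {
    // Move the locking out into a separate line: the guard is dropped at the
    // end of this statement, before the long-running work starts.
    let name = map.lock().unwrap().get(&1).cloned();
    if let Some(name) = name {
        println!("{name}");
    }
}
```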
There are still some participants where the dropping will fail, and follow-up disconnects will NOT get triggered and will be queued up till eternity, but at least we are not blocking new logins.