- locally we open a stream; our local Drain sends OpenStream
- the remote Sink learns about this and notifies the remote Drain
- the remote side sends a message
- the local Sink does not know about the Stream, as there is no (and cannot be a) way to notify the local Sink from the local Drain (it could introduce race conditions).
One possible solution was for the remote Drain to copy the OpenStream Msg onto the Quic::stream before the first data is sent. This would work, but is complicated.
Instead we now just mark such streams as "potentially open" and listen for the first DataHeader to get its SID.
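A rough sketch of that bookkeeping, assuming illustrative names (`QuicRecvSide`, `Sid`, and the stripped-down `DataHeader` are mine, not the crate's API):

```rust
use std::collections::HashMap;

/// Protocol-level stream id (assumed to be a plain integer here).
type Sid = u64;
/// Quic-level stream id.
type QuicStreamId = u64;

/// First frame seen on a stream; the real header also carries mid and length.
struct DataHeader {
    sid: Sid,
}

#[derive(Default)]
struct QuicRecvSide {
    /// quic streams already matched to a protocol stream
    mapped: HashMap<QuicStreamId, Sid>,
    /// quic streams the remote already sends on, but which the local Sink
    /// could not yet match to an OpenStream: "potentially open"
    potentially_open: Vec<QuicStreamId>,
}

impl QuicRecvSide {
    /// Data arrived on a quic stream we cannot match yet.
    fn mark_potentially_open(&mut self, qsid: QuicStreamId) {
        if !self.mapped.contains_key(&qsid) {
            self.potentially_open.push(qsid);
        }
    }

    /// The first DataHeader on that quic stream reveals its Sid, so the
    /// stream can be promoted from "potentially open" to properly mapped.
    fn on_first_data_header(&mut self, qsid: QuicStreamId, header: &DataHeader) {
        if let Some(pos) = self.potentially_open.iter().position(|&s| s == qsid) {
            self.potentially_open.swap_remove(pos);
            self.mapped.insert(qsid, header.sid);
        }
    }
}

fn main() {
    let mut recv = QuicRecvSide::default();
    // The remote side started sending on quic stream 7 before the local Sink
    // learned about the corresponding OpenStream.
    recv.mark_potentially_open(7);
    // The first DataHeader on that quic stream carries the protocol Sid.
    recv.on_first_data_header(7, &DataHeader { sid: 42 });
    assert_eq!(recv.mapped.get(&7), Some(&42));
    println!("quic stream 7 is now mapped to Sid 42");
}
```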
add support for unreliable messages in quic protocol, benchmarks
- versions that differ only in the last digit are now compatible: 0.6.0 will connect to 0.6.1
- the TCP DATA Frames no longer contain a START field, as it's not needed
- the TCP OPENSTREAM Frames now contain a BANDWIDTH field (sketched below)
- MID is not Protocol internal
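Roughly, the frame layout now looks like this; variant and field names below are assumptions for illustration, not the exact wire types of the crate:

```rust
type Sid = u64;
type Mid = u64;
type Prio = u8;
type Promises = u8;
type Bandwidth = u64;

#[allow(dead_code)]
enum TcpFrame {
    OpenStream {
        sid: Sid,
        prio: Prio,
        promises: Promises,
        guaranteed_bandwidth: Bandwidth, // new: BANDWIDTH field
    },
    CloseStream {
        sid: Sid,
    },
    DataHeader {
        mid: Mid,
        sid: Sid,
        length: u64,
    },
    Data {
        mid: Mid,
        // START field removed: chunks of one mid arrive in order, so the
        // receiver can track the offset itself.
        data: Vec<u8>,
    },
}

fn main() {
    // Example: opening stream 1 with 100 KiB/s guaranteed bandwidth.
    let frame = TcpFrame::OpenStream {
        sid: 1,
        prio: 16,
        promises: 0,
        guaranteed_bandwidth: 100 * 1024,
    };
    if let TcpFrame::OpenStream { sid, guaranteed_bandwidth, .. } = frame {
        println!("OpenStream sid={sid} bandwidth={guaranteed_bandwidth}");
    }
}
```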
Update network
- update API with Bandwidth
Update veloren
- introduce a better runtime and make IO-bound things `async`.
- Remove `uvth` and instead use `tokio::runtime::Runtime::spawn_blocking`
- remove `futures_execute` from client and server; use `tokio::runtime::Runtime` instead
- give threads a name
- completely switch to `Bytes`, even in the API; speeds up TCP by a factor of 2
- improve benchmarks
- speed up mpsc metrics
- gracefully handle shutdown by interpreting `Ok(0)` as the tokio TcpStream being closed (see the sketch below).
- fix hot loop in participants by adding `Some(n)`, fixing endless hanging.
- fix closing bug by closing streams after `recv_mgr` is shut down, even if no shutdown is triggered locally.
- fix prometheus
- no longer throw an error when a `Stream` is dropped while the participant still receives a msg for it.
- fix the bandwidth handling; TCP network send speed is up to 1.5GiB/s while recv is 150MiB/s
- add documentation
- temporarily require `rt-multi-threaded` in client for tokio, so `cargo check` does not fail
This is probably stable; I tested it for over an hour.
After that came some optimisations in the prio manager
and an implementation of proper bandwidth handling.
Speed is up to 2GB/s write and 150MB/s recv on a single core.
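A minimal sketch of the tokio setup described above (assuming tokio 1.x with the `full` feature; function and thread names are placeholders, not the actual veloren code): named worker threads, `spawn_blocking` instead of `uvth`, and a recv loop that treats `Ok(0)` as a closed TcpStream.

```rust
use std::io;
use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;

/// Recv loop: `Ok(0)` from `read` means the remote closed the TcpStream,
/// so we shut down gracefully instead of spinning in a hot loop.
#[allow(dead_code)]
async fn recv_loop(mut stream: TcpStream) -> io::Result<()> {
    let mut buf = vec![0u8; 1500];
    loop {
        match stream.read(&mut buf).await {
            Ok(0) => return Ok(()), // connection closed by the remote side
            Ok(n) => {
                // hand `buf[..n]` to the protocol layer here
                let _frame = &buf[..n];
            },
            Err(e) => return Err(e),
        }
    }
}

fn main() -> io::Result<()> {
    // Named worker threads show up readably in debuggers and profilers.
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .thread_name("network-worker")
        .build()?;

    // Blocking work that used to run on `uvth` goes to tokio's blocking pool.
    let blocking = runtime.spawn_blocking(|| {
        // e.g. slow disk IO or heavy computation
        42
    });
    let result = runtime.block_on(blocking).expect("blocking task panicked");
    println!("blocking task returned {result}");
    Ok(())
}
```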
sync; add documentation
- better logging in network
- we now notify the send side of what happened in recv in the participant.
- works with veloren master servers
- works in singleplayer, using an actual mid.
- add `mpsc` to the whole stack, incl. tests
- speed up internal read/write with the `Bytes` crate (see the sketch below)
- use `prometheus-hyper` for metrics
- use a metrics cache
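A minimal sketch of why `Bytes` speeds this up (toy header format, not the real protocol): buffers are split and handed on without copying, and clones only bump a reference count.

```rust
use bytes::{Buf, BufMut, Bytes, BytesMut};

fn main() {
    // Fill a receive buffer as if it had just been read from the socket.
    let mut recv_buf = BytesMut::with_capacity(64);
    recv_buf.put_slice(b"\x01\x15Hello, frame payload!");

    // Peel off a 2-byte header without copying the payload.
    let mut header = recv_buf.split_to(2);
    let kind = header.get_u8();
    let len = header.get_u8() as usize;

    // `freeze()` turns the payload into an immutable, cheaply clonable `Bytes`;
    // clones share one reference-counted allocation, so handing the payload to
    // another layer or task is just a pointer copy, not a memcpy.
    let payload: Bytes = recv_buf.split_to(len).freeze();
    let for_other_task = payload.clone();

    assert_eq!(kind, 0x01);
    assert_eq!(payload.len(), 21);
    assert_eq!(payload, for_other_task);
    println!("kind={kind} len={len} payload={payload:?}");
}
```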
- Implementing an async non-IO protocol crate (see the sketch at the end of this section):
a) no tokio / no channels
b) I/O is based on a Sink/Drain abstraction
c) different protocols can have a different Drain type
This allows MPSC to send its content without splitting up messages at all!
It allows UDP to have extra internal frames to take care of security.
It allows better abstraction for tests.
It allows benchmarks on the mpsc variant.
It allows custom handshakes, so something like the Quic protocol can be added easily.
- reduce the participant managers to 4: channel creations, send, recv and shutdown.
Keeping the `mut data` in one manager removes the need for all the RwLocks,
reducing complexity and parallel-access problems.
- more strategic participant shutdown: first stop sending, then wait for the remote side to notice that recv stopped; the remote side then stops sending, and finally the local side can stop recv.
- metrics are internally abstracted to fit the protocol and network layers
- in this commit the network/protocol tests work, the network tests work to some degree, and veloren compiles but does not work
- handshake compatible with async_std
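A declaration-only sketch of the Sink/Drain idea referenced above. Trait, type and field names are assumptions, not necessarily the crate's API, and it uses async-fn-in-trait (Rust 1.75+); older code would reach for `async_trait` instead.

```rust
#![allow(dead_code)]

#[derive(Debug)]
struct ProtocolError;

/// Outgoing side: the protocol pushes finished frames to the IO layer.
trait UnreliableDrain {
    type DataFormat;
    async fn send(&mut self, data: Self::DataFormat) -> Result<(), ProtocolError>;
}

/// Incoming side: the protocol pulls the next frame from the IO layer.
trait UnreliableSink {
    type DataFormat;
    async fn recv(&mut self) -> Result<Self::DataFormat, ProtocolError>;
}

/// MPSC variant: a frame is a whole message, so nothing is ever split.
struct MpscMsg {
    sid: u64,
    payload: Vec<u8>,
}

struct MpscDrain {
    queue: Vec<MpscMsg>, // stand-in for an in-process channel
}

impl UnreliableDrain for MpscDrain {
    type DataFormat = MpscMsg;
    async fn send(&mut self, data: MpscMsg) -> Result<(), ProtocolError> {
        self.queue.push(data);
        Ok(())
    }
}

/// TCP variant: a frame is a plain byte chunk written to the socket.
struct TcpDrain {
    wire: Vec<u8>, // stand-in for the socket write half
}

impl UnreliableDrain for TcpDrain {
    type DataFormat = Vec<u8>;
    async fn send(&mut self, data: Vec<u8>) -> Result<(), ProtocolError> {
        self.wire.extend_from_slice(&data);
        Ok(())
    }
}
```

Because the protocol core only ever talks to these traits, the same frame logic can be tested and benchmarked without any sockets or tokio runtime at all.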