Instead of keeping the Runtime around and manually spawning a task on `drop`, this task is now spawned at start and waits to be triggered.
The `drop` methods then wait for its completion, UNLESS they are in an async context, in which case they MUST NOT BLOCK (deadlock potential), so they defer the work to the Runtime and hope it exists long enough.
This gets rid of the weird `block_in_place`, which is only accessible with `rt-multi-threaded` and has some disadvantages.
We also won't require the runtime to be active all the time, though it is needed for a clean shutdown.
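A minimal sketch of that pattern, assuming a hypothetical `ShutdownHandle` wrapper (not the actual veloren_network types):

```rust
use tokio::{runtime::Handle, sync::oneshot};

// Sketch only: the shutdown task is spawned at startup and waits on `trigger`;
// `finished` is a std channel so waiting for it does not need the runtime.
struct ShutdownHandle {
    trigger: Option<oneshot::Sender<()>>,
    finished: Option<std::sync::mpsc::Receiver<()>>,
}

impl Drop for ShutdownHandle {
    fn drop(&mut self) {
        if let Some(trigger) = self.trigger.take() {
            let _ = trigger.send(()); // wake the already-running shutdown task
        }
        match Handle::try_current() {
            // Inside an async context: blocking could deadlock the executor,
            // so we only trigger and rely on the runtime living long enough
            // to finish the cleanup.
            Ok(_handle) => {},
            // Outside any runtime: it is safe to block until the shutdown
            // task signals completion.
            Err(_) => {
                if let Some(finished) = self.finished.take() {
                    let _ = finished.recv();
                }
            },
        }
    }
}
```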
- the last digit of the version is now compatible: 0.6.0 will connect to 0.6.1
- the TCP DATA frames no longer contain a START field, as it's not needed
- the TCP OPENSTREAM Frames will now contain the BANDWIDTH field
- MID is not Protocol internal
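Roughly, the frame changes amount to something like the following sketch (variant and field names are assumptions, not the actual veloren_network wire format):

```rust
// Illustrative only; field names and types are assumptions.
enum Frame {
    // DATA no longer carries a START field.
    Data { mid: u64, data: Vec<u8> },
    // OPENSTREAM now advertises the stream's bandwidth.
    OpenStream { sid: u64, prio: u8, promises: u8, bandwidth: u64 },
    // ... other frames unchanged
}
```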
Update network
- update API with Bandwidth
Update veloren
- introduce a better runtime and make IO-bound things `async`.
- Remove `uvth` and instead use `tokio::runtime::Runtime::spawn_blocking`
- remove `futures_executor` from client and server; use tokio::runtime::Runtime instead
- give threads a name
- completely switch to Bytes, even in the API; this speeds up TCP by a factor of 2
- improve benchmarks
- speed up mpsc metrics
- gracefully handle shutdown by interpreting Ok(0) from tokio's TcpStream as "closed" (see the read-loop sketch after this list).
- fix hot loop in participants by adding `Some(n)`, fixing endless hanging.
- fix closing bug by closing streams after `recv_mgr` is shut down, even if no shutdown is triggered locally.
- fix prometheus
- no longer throw when a `Stream` is dropped while the participant still receives a msg for it.
- fix the bandwidth handling; TCP network send speed is up to 1.5 GiB/s while recv is 150 MiB/s
- add documentation
- temporarily require rt-multi-threaded in the client for tokio, so `cargo check` does not fail
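The graceful shutdown mentioned above boils down to a read loop like this sketch (buffer size and error handling are placeholders):

```rust
use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;

// A read returning Ok(0) means the peer closed the TcpStream, so we stop
// cleanly instead of treating it as an error.
async fn read_loop(mut stream: TcpStream) -> std::io::Result<()> {
    let mut buf = vec![0u8; 8192];
    loop {
        match stream.read(&mut buf).await {
            Ok(0) => break, // remote side closed the connection: clean shutdown
            Ok(n) => {
                // hand `buf[..n]` to the protocol layer (omitted here)
                let _ = &buf[..n];
            },
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```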
This is probably stable; I tested it for over 1 hour.
After that, some optimisations in priomgr
and a proper bandwidth implementation.
Speed is up to 2 GB/s write and 150 MB/s recv on a single core.
sync add documentation
switch to the `tokio` and `async_channel` crates.
I wanted to use only tokio first, but its channel doesn't feature Sender::close(), so I included async_channel.
Got rid of `futures` and only need `futures_core` and `futures_util`.
Tokio does not provide `Stream` and `StreamExt`, so for now I need to use `tokio-stream`; I think this will land in `std` in the future.
Created `b2b_close_stream_opened_sender_r` as the shutdown procedure does not need a copy of a Sender, it just needs to stop it.
Various adjustments, e.g. for `select!`, which now requires a `&mut` for oneshots.
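A minimal sketch of the Sender::close() behaviour that motivated pulling in async_channel (assumes tokio with the `macros`/`rt` features and async_channel as dependencies):

```rust
// async_channel's Sender exposes close(), which tokio's mpsc Sender does not;
// the sending side can terminate the channel explicitly while receivers can
// still drain already-buffered messages.
#[tokio::main]
async fn main() {
    let (tx, rx) = async_channel::unbounded::<u32>();
    tx.send(1).await.unwrap();
    // Close from the sender side; buffered messages stay readable.
    tx.close();
    assert_eq!(rx.recv().await.unwrap(), 1);
    // Channel is closed and drained: recv() now returns an error.
    assert!(rx.recv().await.is_err());
}
```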
Future things to do:
- Use some better signalling than oneshot<()> in some cases.
- Use a Watch for the Prio propagation (and implement it, of course)
- Use Bounded Channels in order to improve performance
- adjust the test code
get the tests working
- voxygen now aborts when the server has an invalid veloren_network handshake, e.g. due to an outdated version, instead of retrying
- rename Network `Address` to `ProtocolAddr`, as suggested by zest, since it's a combination of Protocol and std::io::Addr
- replace the manual byte arrays in `protocols.rs` with something nicer
- API behavior switched! (usage sketch after this list)
- the `Network` no longer holds a copy of the participant, so if the return value of `connect` (before `Arc<Participant>`, now `Participant`) gets dropped, `Participant::Drop` is triggered!
- you can close a Participant asynchronously via `Participant::disconnect()`; no more need to know the Network at this point
- `Network::Drop` will check and drop not-yet-disconnected Participants.
- you can compare Participants via PartialEq; if they compare equal they point to the same endpoint (it checks remote_pid)
- Note: multiple Participants are only supported in theory, won't work yet
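A hedged usage sketch of the new ownership rules; the exact signatures and error handling are assumptions based on the list above, not the verbatim veloren_network API:

```rust
use veloren_network::{Network, ProtocolAddr};

// Sketch only: method signatures are assumed from the description above.
async fn connect_and_close(network: &Network, addr: ProtocolAddr) {
    // `connect` now returns an owned Participant (previously Arc<Participant>).
    if let Ok(participant) = network.connect(addr).await {
        // Either disconnect explicitly and asynchronously, without needing
        // the Network anymore...
        let _ = participant.disconnect().await;
        // ...or just let it drop: since Network no longer holds a copy,
        // Participant::Drop triggers the disconnect on its own.
    }
}
```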
Additionally:
- fix some `debug!`
- veloren-client will now drop the participant gracefully on shutdown
- rename `error` to `debug` when BParticipant shutdown is called twice, as this is to be expected in an async runtime
- added PartialEq to StreamError for test purposes (only for now!)
- removed the async_recv example as it's no longer of any use.
It was created before the COMPLETE REWRITE in order to verify that my own async interface on top of mio works.
However, that's now guaranteed by async-std and futures; no need for a special test
- remove uvth from dependencies and replace it with a `FnOnce`
- fix ALL clippy (network) lints
- basic fix for a channel drop scenario:
TODO: this needs some further fixes
up to now only destruction of the participant via the API was covered correctly.
We had an issue when the underlying channels got dropped, leaving us with a participant without channels.
We need to buffer the requests and try to reopen a channel ASAP (see the sketch below)!
If no channel could be reopened we need to close the Participant, while either
a) leaving the BParticipant intact, knowing that it only waits for a proper close by the scheduler, or
b) closing the BParticipant gracefully, notifying the scheduler to remove its stuff (either the scheduler should detect a stopped BParticipant, or the BParticipant sends the Scheduler its own destruction, and then the Scheduler just does the same as when the API forces a close).
Keep the Participant alive and wait for the API to access the BParticipant, notice it's closed, and then wait for a disconnect which isn't doing anything, as it was already cleaned up in the background.
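A rough sketch of the buffering idea; `ChannelPool`, `Channel` and `Request` are hypothetical stand-ins, not the actual BParticipant internals:

```rust
use std::collections::VecDeque;

// Hypothetical stand-ins for the real channel/request types.
struct Channel;
struct Request;

impl Channel {
    fn send(&mut self, _req: Request) { /* hand off to the protocol backend */ }
}

/// Sketch: when all underlying channels dropped, buffer the request and ask
/// for a channel to be reopened; if that never happens, the Participant has
/// to be closed as described in a) or b) above.
struct ChannelPool {
    channels: Vec<Channel>,
    pending: VecDeque<Request>,
}

impl ChannelPool {
    fn send_or_buffer(&mut self, req: Request) {
        match self.channels.first_mut() {
            Some(chan) => chan.send(req),
            None => {
                // No channel left: keep the request and try to reopen a
                // channel ASAP (reopen logic omitted here).
                self.pending.push_back(req);
            },
        }
    }
}
```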
fix async_recv and double block_on panic on Network::drop and participant::drop
include Cargo.lock from all examples
Found a bug on imbris' runners with the doc tests of `stream::send` and `stream::recv`.
As neither a backtrace nor tracing on the runners seems to help with the doc tests, I disable them and add them as unit tests instead.
- switch `listen` to async in order to verify whether the bind was successful (see the sketch after this list)
- Introduce the following examples
- network speed
- chat
- fileshare
- add additional tests
- fix the "dropping stream before last messages can be handled" bug: when dropping a stream, BParticipant will now wait for prio to be empty before dropping the stream and sending the signal
- correct closing of stream and participant
- move tcp to protocols and create udp front and backend
- trace and fix a bug caused by not waiting for the configuration after receiving a frame
- fix a bug in network-speed, but there is still a bug: with trace=warn, after 2,000,000 messages the server doesn't notice that the client has shut down and seems to lock up somewhere; hard to reproduce
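A minimal sketch of why `listen` became async (the helper name is hypothetical, not the veloren_network API): binding inside the call surfaces bind errors to the caller immediately.

```rust
use tokio::net::TcpListener;

// If the port is already taken or the address is invalid, the error shows
// up here instead of silently failing in a background task.
async fn listen_tcp(addr: &str) -> std::io::Result<TcpListener> {
    TcpListener::bind(addr).await
}
```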
open tasks
[ ] verify UDP works correctly, especially the connect!
[ ] implement UDP shutdown correctly, i.e. the one created in connect!
[ ] unify logging
[ ] fill metrics
[ ] fix dropping stream before last messages can be handled bug
[ ] add documentation
[ ] add benchmarks
[ ] remove async_serde???
[ ] add mpsc