- authc no longer uses reqwest
- image only supports PNG
- replace rouille with tiny_http
- several other dependencies
- cargo upgrade
- the following improvement was measured on a Ryzen 7 1700X:
before:
- cargo build: 3076.73s user / 4:45 total / 589 dependencies
- cargo test: 6118.38s user / 7:30 total / 959 dependencies
after:
- cargo build: 2680.54s user / 4:05 total / 480 dependencies
- cargo test: 5351.81s user / 7:04 total / 791 dependencies
- added xMAC94x to CODEOWNERS for Cargo.toml; he will protect them from now on and hit people with evil looks ;)
- added PartialEq to StreamError for test purposes (only for now!)
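  A minimal sketch of why this helps in tests; the variants here are illustrative placeholders, not necessarily the real StreamError:

  ```rust
  // Hedged sketch: with PartialEq derived, tests can assert on error values directly.
  #[derive(Debug, PartialEq)]
  pub enum StreamError {
      StreamClosed,
      DeserializeError(String),
  }

  #[test]
  fn send_on_closed_stream_fails() {
      let result: Result<(), StreamError> = Err(StreamError::StreamClosed);
      assert_eq!(result, Err(StreamError::StreamClosed));
  }
  ```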
- removed the async_recv example as it no longer serves any purpose.
  It was created before the COMPLETE REWRITE in order to verify that my own async interface on top of mio works.
  However, this is now guaranteed by async-std and futures, so there is no need for a special test.
- remove uvth from the dependencies and replace it with a `FnOnce`
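  A minimal sketch of the idea with illustrative names (not the actual veloren_network signature): the constructor takes any `FnOnce` that can spawn the scheduler future, so the caller picks the executor instead of the library depending on uvth.

  ```rust
  use std::{future::Future, pin::Pin};

  type BoxedFuture = Pin<Box<dyn Future<Output = ()> + Send>>;

  pub struct Network;

  impl Network {
      // Accept any one-shot spawner instead of a concrete uvth::ThreadPool.
      pub fn new<F: FnOnce(BoxedFuture)>(spawn: F) -> Self {
          spawn(Box::pin(async {
              // run the internal scheduler here
          }));
          Network
      }
  }

  // Usage with e.g. a futures ThreadPool:
  // let pool = futures::executor::ThreadPool::new().unwrap();
  // let network = Network::new(|fut| pool.spawn_ok(fut));
  ```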
- fix ALL clippy (network) lints
- basic fix for a channel drop scenario:
  TODO: this needs some further fixes
  Up to now only the destruction of a Participant via the API was covered correctly.
  We had an issue when the underlying channels got dropped, so we ended up with a Participant without channels.
  We need to buffer the requests and try to reopen a channel ASAP (see the sketch below)!
  If no channel could be reopened we need to close the Participant, either
  a) leaving the BParticipant intact, knowing that it only waits for a proper close by the scheduler, or
  b) closing the BParticipant gracefully and notifying the scheduler to remove its state (either the scheduler should detect a stopped BParticipant, or the BParticipant sends the scheduler its own destruction, and then the scheduler does the same as when the API forces a close).
  Keep the Participant alive and wait for the API to access the BParticipant, notice it's closed, and then wait for a disconnect which doesn't do anything, as it was already cleaned up in the background.
- this bug was initially called the imbris bug, as it happened on his runners and I couldn't reproduce it locally at first :)
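  A hedged sketch of that buffering idea, with purely illustrative types (not the real Participant/BParticipant internals):

  ```rust
  // Placeholder types; the real code uses Frames, Cids and async channels.
  struct Frame;
  struct ChannelHandle;

  impl ChannelHandle {
      fn send(&self, _frame: Frame) { /* write to the underlying TCP/UDP channel */ }
  }

  #[derive(Default)]
  struct ParticipantState {
      channels: Vec<ChannelHandle>,
      pending: Vec<Frame>, // buffered while no channel is alive
  }

  impl ParticipantState {
      fn send_or_buffer(&mut self, frame: Frame) {
          match self.channels.first() {
              Some(ch) => ch.send(frame),
              // no channel left: buffer and try to reopen a channel ASAP elsewhere
              None => self.pending.push(frame),
          }
      }

      fn on_channel_reopened(&mut self, ch: ChannelHandle) {
          // flush everything that queued up while we had no channel
          for frame in self.pending.drain(..) {
              ch.send(frame);
          }
          self.channels.push(ch);
      }
  }
  ```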
- When in a Handshake a separate mpsc::Channel was created for (Cid, Frame) transport;
  however, the protocol could already catch non-handshake data and push it into this
  mpsc::Channel.
  Then this channel got dropped and a fresh one was created for the network::Channel.
  These dropped Frames are of course a BUG!
  I tried multiple things to solve this:
  - don't create a new mpsc::Channel, but instead bind it to the Protocol itself and always use that one.
    This would work in theory, but on the BParticipant side we are using one mpsc::Channel<(Cid, Frame)>
    to handle ALL the network::Channels.
    If now every Protocol had its own, and with that every network::Channel had its own, it would no longer work out.
    Bad idea...
  - using the first method but creating the mpsc::Channel inside the scheduler instead of the protocol doesn't work either, as the
    scheduler doesn't know the remote_pid yet
  - I don't want a hack that tells the protocol to listen for only 2 messages and then stop no matter what
  So I switched over to the simple method now (see the sketch below):
  - Do everything like before with 2 mpsc::Channels
  - after the handshake, close the receiver and collect all remaining (Cid, Frame) combinations
  - when starting the channel, reapply them to the new sender/listener combination
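  A hedged sketch of the "close, drain, reapply" step with futures::channel::mpsc and illustrative names:

  ```rust
  use futures::channel::mpsc;

  type Cid = u64;
  struct Frame; // placeholder

  fn handover(
      mut handshake_r: mpsc::UnboundedReceiver<(Cid, Frame)>,
      channel_s: mpsc::UnboundedSender<(Cid, Frame)>,
  ) {
      // stop accepting new frames on the handshake-time channel ...
      handshake_r.close();
      // ... and forward everything that already arrived to the fresh network::Channel sender
      while let Ok(Some(cid_frame)) = handshake_r.try_next() {
          let _ = channel_s.unbounded_send(cid_frame);
      }
  }
  ```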
- added tracing
- switched the Protocol RwLock to a Mutex, as it's only ever used by one at a time
- Additionally changed the layout and introduced the c2w_frame_s and w2s_cid_frame_s name schema
- Fixed a bug in the scheduler which WOULD cause a DEADLOCK if the handshake failed
- fixed a bug in api_send_send_main: I need to store the stream_p, otherwise it's immediately closed and a stream_a.send() isn't guaranteed
- added an extra test to verify that a sent message is received even if the Stream is already closed
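  A generic illustration of the property that this test checks, using futures mpsc as a stand-in rather than the real Stream API:

  ```rust
  use futures::{channel::mpsc, executor::block_on, StreamExt};

  fn main() {
      let (tx, mut rx) = mpsc::unbounded::<u32>();
      tx.unbounded_send(42).unwrap();
      drop(tx); // the sending side is gone, i.e. the "Stream" is closed
      // the message sent before the close must still be delivered
      assert_eq!(block_on(rx.next()), Some(42));
      assert_eq!(block_on(rx.next()), None);
  }
  ```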
- changed OutGoing to Outgoing
- fixed a bug where `metrics.tick()` was never called
- removed 2 unused nightly features and added `deny_code`
fix async_recv and double block_on panic on Network::drop and participant::drop
include Cargo.lock from all examples
Found a bug on imbris' runners with the doc tests of `stream::send` and `stream::recv`.
As neither a backtrace nor tracing on the runners seems to help with the doc tests, I disabled them and added them as unit tests instead.
However, I need to coordinate the prio adjustments in the scheduler from now on, so that ParticipantA doesn't get all the network bandwidth and ParticipantB gets nothing.
- https://github.com/tikv/rust-prometheus/issues/321
- split up the channel into a handshake part and a channel part (see the sketch below).
  The handshake part is finite and ends when it's either done or aborted.
  If it's okay, I send a request to the BParticipant, which then opens a channel on the existing TCP or UDP connection.
  This streamlines the command chain a lot. Also, the channel is almost empty now; I'm thinking about removing it completely.
  It isn't perfect yet, as shutdown and UDP don't work yet.
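  A hedged sketch of that flow with illustrative types (the real command chain goes through the scheduler and uses more message types):

  ```rust
  struct Pid(u128);  // remote participant id learned during the handshake
  struct Connection; // stand-in for the already-established TCP/UDP connection
  enum HandshakeError { Aborted }

  // The handshake future is finite: it either yields the remote Pid or aborts.
  async fn handshake(_con: &Connection) -> Result<Pid, HandshakeError> {
      // exchange magic number, version and pid, then return
      Ok(Pid(42))
  }

  struct BParticipantHandle;
  impl BParticipantHandle {
      async fn open_channel(&self, _pid: Pid, _con: Connection) {
          // attach a channel to the existing connection
      }
  }

  async fn on_new_connection(con: Connection, bparticipant: &BParticipantHandle) {
      match handshake(&con).await {
          // only on success does the connection get handed over as a channel
          Ok(pid) => bparticipant.open_channel(pid, con).await,
          Err(HandshakeError::Aborted) => { /* drop the connection */ }
      }
  }
  ```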
- make the PID print as Base64
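  A minimal sketch of the Base64 formatting, assuming the `base64` crate and a u128-backed Pid (both assumptions; the real implementation may differ):

  ```rust
  use std::fmt;

  struct Pid(u128); // assumption: Pid wraps a 128-bit id

  impl fmt::Display for Pid {
      fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
          // encode the raw bytes instead of printing a huge decimal number
          write!(f, "{}", base64::encode(self.0.to_le_bytes()))
      }
  }
  ```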
- replace rouille with tiny_http
- removing async_serde as it doesn't seem to be useful.
  The idea was that because deserialising is slow, parallelising it could speed things up.
  However, we need to keep the order of frames (at least for control frames), so serialising in threads would be quite complicated.
  Also, serialisation is quite fast, about 1 Gbit/s; such a speed is enough for messaging, and it's more important to serve parallel streams better.
  That's why I am removing the async serde code for now.
- frames are no longer serialized by serde but written byte by byte manually, an incredible speed upgrade
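  A hedged sketch of the manual encoding: one discriminant byte followed by fixed-size little-endian fields. The Frame variants and field layout here are illustrative only.

  ```rust
  enum Frame {
      Shutdown,
      Data { mid: u64, start: u64, data: Vec<u8> },
  }

  impl Frame {
      fn write_to(&self, buf: &mut Vec<u8>) {
          match self {
              Frame::Shutdown => buf.push(0), // just the frame id, no payload
              Frame::Data { mid, start, data } => {
                  buf.push(1);
                  buf.extend_from_slice(&mid.to_le_bytes());
                  buf.extend_from_slice(&start.to_le_bytes());
                  buf.extend_from_slice(&(data.len() as u16).to_le_bytes());
                  buf.extend_from_slice(data);
              }
          }
      }
  }
  ```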
- more metrics
- switched channel_creator over to for_each_concurrent
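  A minimal sketch of the for_each_concurrent pattern; the stream of incoming connection ids here is illustrative, not the real channel_creator signature:

  ```rust
  use futures::stream::{self, Stream, StreamExt};

  async fn channel_creator(incoming: impl Stream<Item = u64>) {
      incoming
          .for_each_concurrent(None, |cid| async move {
              // set up each new channel concurrently instead of one after another
              println!("setting up channel {}", cid);
          })
          .await;
  }

  fn main() {
      futures::executor::block_on(channel_creator(stream::iter(vec![1u64, 2, 3])));
  }
  ```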
- removing some pool.spawn_ok() calls as they don't allow me to use self
- reduce features needed