However, I need to coordinate the prio adjustments in the scheduler from now on, so that ParticipantA doesn't get all the network bandwidth while ParticipantB gets none.
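A rough sketch of the kind of fairness I mean; `ParticipantQueue`, the per-tick byte budget, and the scheduling loop are made up for illustration and are not the real scheduler code:

```rust
use std::collections::VecDeque;

/// Hypothetical per-participant outgoing queue, for illustration only.
struct ParticipantQueue {
    id: u64,
    frames: VecDeque<Vec<u8>>,
}

/// One scheduler tick: every participant gets the same byte budget instead of
/// the first one draining all available bandwidth.
fn schedule_tick(
    participants: &mut [ParticipantQueue],
    budget_per_participant: usize,
) -> Vec<(u64, Vec<u8>)> {
    let mut out = Vec::new();
    for p in participants.iter_mut() {
        let mut spent = 0;
        while let Some(len) = p.frames.front().map(|f| f.len()) {
            if spent + len > budget_per_participant {
                break; // ParticipantA stops here, ParticipantB gets its turn
            }
            spent += len;
            out.push((p.id, p.frames.pop_front().unwrap()));
        }
    }
    out
}
```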
- https://github.com/tikv/rust-prometheus/issues/321
- split up channel into a handshake part and a channel part.
The handshake part is finite and ends when it is either done or aborted.
If it is okay, I send a request to the BParticipant, which then opens a channel on the existing TCP or UDP connection (sketched below).
This streamlines the command chain a lot. Also, the channel is almost empty now; I am thinking about removing it completely.
It isn't perfect, as shutdown and UDP don't work yet.
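A minimal sketch of the intended flow; `HandshakeResult`, `ParticipantRequest`, and the pid/cid types are assumptions, not the real structs:

```rust
use futures::channel::mpsc;

/// Illustrative outcome of the finite handshake phase (assumed field types).
enum HandshakeResult {
    Done { pid: u128 },
    Aborted,
}

/// Hypothetical request that asks the BParticipant to open a channel on the
/// already-established TCP/UDP connection.
enum ParticipantRequest {
    OpenChannel { cid: u64 },
}

/// Stand-in for the real handshake; it always terminates (done or aborted).
async fn perform_handshake() -> HandshakeResult {
    HandshakeResult::Done { pid: 42 }
}

async fn handle_new_connection(to_bparticipant: mpsc::UnboundedSender<ParticipantRequest>) {
    match perform_handshake().await {
        HandshakeResult::Done { pid: _ } => {
            // handshake is over; ask the BParticipant to reuse this connection as a channel
            let _ = to_bparticipant.unbounded_send(ParticipantRequest::OpenChannel { cid: 0 });
        },
        HandshakeResult::Aborted => { /* drop the connection */ },
    }
}
```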
- make PID print as Base64
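A sketch of what that could look like, assuming a `Pid` newtype around a `u128` and the `base64` crate's top-level `encode` (0.13-style API); the real type may differ:

```rust
use std::fmt;

/// Hypothetical participant id wrapping a u128 (the real type may differ).
struct Pid(u128);

impl fmt::Display for Pid {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // print the 16 raw bytes as Base64 instead of a long decimal number
        write!(f, "{}", base64::encode(self.0.to_le_bytes()))
    }
}
```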
- replace rouille with tiny_http
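For reference, a minimal metrics endpoint over tiny_http could look roughly like this; the bind address and the way the prometheus `Registry` is passed in are assumptions:

```rust
use prometheus::{Encoder, Registry, TextEncoder};

/// Serve the Prometheus text format over tiny_http (blocking, one request at a time).
fn run_metrics_server(registry: Registry) {
    let server = tiny_http::Server::http("127.0.0.1:14005").expect("failed to bind");
    for request in server.incoming_requests() {
        let mut buf = Vec::new();
        TextEncoder::new()
            .encode(&registry.gather(), &mut buf)
            .expect("failed to encode metrics");
        let _ = request.respond(tiny_http::Response::from_data(buf));
    }
}
```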
- removing async_serde as it doesn't seem to be useful
The idea was that, because deserialising is slow, parallelising it could speed things up.
However, we need to keep the order of frames (at least for control frames), so serialising in threads would be quite complicated.
Also, serialisation is quite fast, about 1 Gbit/s; that speed is enough for messaging, and it's more important to serve parallel streams better.
That's why I am removing the async serde code for now.
- frames are no longer serialized by serde, but byte by byte manually; incredible speed upgrade
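The rough idea, with a made-up `Frame::Data` variant and discriminant; the real frame layout differs:

```rust
/// Illustrative frame; the real Frame enum has more variants and fields.
enum Frame {
    Data { mid: u64, start: u64, data: Vec<u8> },
}

const FRAME_DATA: u8 = 6; // assumed discriminant, for illustration only

/// Hand-rolled serialization: one discriminant byte, then the fixed-size
/// fields in little endian, then the payload. No serde involved.
fn write_frame(frame: &Frame, out: &mut Vec<u8>) {
    match frame {
        Frame::Data { mid, start, data } => {
            out.push(FRAME_DATA);
            out.extend_from_slice(&mid.to_le_bytes());
            out.extend_from_slice(&start.to_le_bytes());
            out.extend_from_slice(&(data.len() as u16).to_le_bytes());
            out.extend_from_slice(data);
        },
    }
}
```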
- more metrics
- switch channel_creator to for_each_concurrent
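Roughly this shape, assuming channel-open requests arrive on an mpsc receiver; `handle_channel` is a placeholder:

```rust
use futures::channel::mpsc;
use futures::stream::StreamExt;

/// Handle channel-creation requests concurrently instead of one after another.
async fn channel_creator(requests: mpsc::UnboundedReceiver<u64>) {
    requests
        .for_each_concurrent(None, |cid| async move {
            // each request gets its own concurrently-polled future
            handle_channel(cid).await;
        })
        .await;
}

async fn handle_channel(_cid: u64) { /* open the channel ... */ }
```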
- removing some pool.spawn_ok() calls as they don't allow me to use self
- reduce features needed
- switch `listen` to async in order to verify whether the bind was successful
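The shape of the change, sketched with async-std and a oneshot back-channel; both are assumptions about the surrounding code:

```rust
use async_std::net::TcpListener;
use futures::channel::oneshot;

/// Async `listen`: report the bind result back to the caller instead of
/// swallowing the error inside a spawned task.
async fn listen(addr: &str, bind_result: oneshot::Sender<std::io::Result<()>>) {
    match TcpListener::bind(addr).await {
        Ok(listener) => {
            let _ = bind_result.send(Ok(()));
            let _ = listener; // accept loop would go here
        },
        Err(e) => {
            let _ = bind_result.send(Err(e));
        },
    }
}
```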
- Introduce the following examples
- network speed
- chat
- fileshare
- add additional tests
- fix the bug where a stream was dropped before its last messages could be handled: when dropping a stream, BParticipant will now wait for the prio queue to be empty before dropping the stream and sending the signal
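A sketch of the deferred-drop idea; `PrioManager`, its fields, and the sid type are simplified stand-ins for the real code:

```rust
use std::collections::{HashMap, VecDeque};

/// Simplified prio manager: a stream is only really closed once no messages
/// for it are queued anymore, so the last messages still get flushed.
struct PrioManager {
    queued: HashMap<u64, VecDeque<Vec<u8>>>, // sid -> pending messages
    dropping: Vec<u64>,                      // sids waiting for their queue to drain
}

impl PrioManager {
    /// Called when the user drops a Stream: defer the real close if needed.
    fn request_drop(&mut self, sid: u64) {
        if self.queued.get(&sid).map_or(true, |q| q.is_empty()) {
            self.close_stream(sid);
        } else {
            self.dropping.push(sid);
        }
    }

    /// Called after every send tick: close streams whose queues just drained.
    fn flush_drops(&mut self) {
        let queued = &self.queued;
        let (ready, waiting): (Vec<u64>, Vec<u64>) = self
            .dropping
            .drain(..)
            .partition(|sid| queued.get(sid).map_or(true, |q| q.is_empty()));
        self.dropping = waiting;
        for sid in ready {
            self.close_stream(sid);
        }
    }

    fn close_stream(&mut self, _sid: u64) { /* send the close signal ... */ }
}
```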
- correct closing of stream and participant
- move tcp to protocols and create a udp frontend and backend
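Roughly the split I have in mind: one frontend enum over per-protocol backends; the names here are illustrative, not the real module layout:

```rust
/// Frontend over the per-protocol backends, so a channel can speak TCP or
/// UDP through the same interface.
enum Protocols {
    Tcp(TcpProtocol),
    Udp(UdpProtocol),
}

struct TcpProtocol {/* stream, read/write buffers, ... */}
struct UdpProtocol {/* socket, remote addr, ... */}

impl Protocols {
    fn send_raw(&mut self, _bytes: &[u8]) {
        match self {
            Protocols::Tcp(_t) => { /* write to the TCP stream */ },
            Protocols::Udp(_u) => { /* send a datagram */ },
        }
    }
}
```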
- tracing and fixing a bug that was caused by not waiting for the configuration after receiving a frame
- fix a bug in network-speed, but there is still a bug: with trace=warn, after 2,000,000 messages the server doesn't notice that the client has shut down and seems to lock up somewhere; hard to reproduce
open tasks
[ ] verify UDP works correctly, especially the connect!
[ ] implement UDP shutdown correctly, for the one created in connect!
[ ] unify logging
[ ] fill metrics
[ ] fix the bug where a stream is dropped before its last messages can be handled
[ ] add documentation
[ ] add benchmarks
[ ] remove async_serde???
[ ] add mpsc
- We can now get rid of most sleeps and get the true remote participant and stream working; however, there seems to be a deadlock after the "registered new handle" trace, with a 10% chance per spawn
- removal of the events trait, as we use channels
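Instead of implementing an events trait, the caller now just receives plain event values over a channel; the variants below are made up to show the shape:

```rust
use futures::channel::mpsc;

/// Illustrative event values sent over a channel instead of trait callbacks.
enum Event {
    ChannelCreated { cid: u64 },
    StreamOpened { sid: u64 },
}

/// The receiving side replaces what the events trait used to provide.
fn event_channel() -> (mpsc::UnboundedSender<Event>, mpsc::UnboundedReceiver<Event>) {
    mpsc::unbounded()
}
```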
- streams now communicate with each other directly for performance reasons; there are still deadlocks somewhere, once directly at listening somehow and once after the first message has been read, but I also got it to run through perfectly in this state without any code change; maybe a sleep or a more detailed rust-gdb session would help here!
It should compile and the tests should run fine now.
If not: the second-to-last squashed commit message said it currently only sends frames but not incoming messages, and that recv would only handle frames. The last one said I added internal messages and a reverse path (probably for .recv).
- introduce a loadtest for TCP messages
- cleanup api
- added a unit test
- prepared a handshake message, which will be removed again in the next commits
- experimental mio worker merges
- using uuid for participant id