lagging a bit behind on terrain, which is fine. Block Place and Block Pickup are not handled in this stream, as they go through the standard route of event handling.
- get rid of the old SysTimers for each system in favour of VSystem tracking
- move metrics generation from `lib.rs` into its own system
- code cleanup
- remove time tracking in common::sys
Instead of keeping the Runtime around and manually spawning a task on `drop`, this task is spawned at start and waits to be triggered.
The `drop` methods then wait for completion, UNLESS they are in an async context, in which case they MUST NOT BLOCK (deadlock potential), so they defer the work to the Runtime and HOPE that the runtime exists long enough.
This gets rid of the weird `block_in_place`, which is only accessible with `rt-multi-threaded` and has some disadvantages.
We also won't require the runtime to be active all the time, though it's needed for a clean shutdown.
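A minimal sketch of that pattern, assuming a hypothetical `Participant` type and tokio's `oneshot`/`Handle` APIs (the real veloren-network types differ):
```rust
use tokio::{runtime::Handle, sync::oneshot};

// Hypothetical type for illustration only; the real veloren-network types differ.
struct Participant {
    // Trigger for the cleanup task that was spawned when the Participant was created.
    shutdown_tx: Option<oneshot::Sender<oneshot::Sender<()>>>,
}

impl Participant {
    fn new(runtime: &tokio::runtime::Runtime) -> Self {
        let (shutdown_tx, shutdown_rx) = oneshot::channel::<oneshot::Sender<()>>();
        // The cleanup task is spawned up front and idles until it is triggered.
        runtime.spawn(async move {
            if let Ok(done_tx) = shutdown_rx.await {
                // ... the actual async shutdown work would happen here ...
                let _ = done_tx.send(());
            }
        });
        Self { shutdown_tx: Some(shutdown_tx) }
    }
}

impl Drop for Participant {
    fn drop(&mut self) {
        if let Some(trigger) = self.shutdown_tx.take() {
            let (done_tx, done_rx) = oneshot::channel();
            let _ = trigger.send(done_tx);
            if Handle::try_current().is_ok() {
                // Inside an async context: blocking here could deadlock, so we only
                // fire the trigger and hope the runtime lives long enough to finish.
            } else {
                // Outside any runtime: safe to block until the cleanup completed.
                let _ = done_rx.blocking_recv();
            }
        }
    }
}
```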
- the last digit of the version is now compatible: 0.6.0 will connect to 0.6.1 (see the sketch after this list)
- the TCP DATA Frames no longer contain a START field, as it's not needed
- the TCP OPENSTREAM Frames now contain the BANDWIDTH field
- MID is not Protocol internal
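A minimal sketch of the relaxed compatibility rule mentioned above, assuming the version is stored as three numbers (the real protocol check may differ):
```rust
/// Only major and minor have to match; the patch digit may differ,
/// so 0.6.0 will still connect to 0.6.1.
fn versions_compatible(a: [u32; 3], b: [u32; 3]) -> bool {
    a[0] == b[0] && a[1] == b[1]
}

fn main() {
    assert!(versions_compatible([0, 6, 0], [0, 6, 1]));
    assert!(!versions_compatible([0, 6, 0], [0, 7, 0]));
}
```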
Update network
- update API with Bandwidth
Update veloren
- introduce a better runtime and make the IO-bound things `async`.
- Remove `uvth` and instead use `tokio::runtime::Runtime::spawn_blocking` (sketched after this list)
- remove futures_execute from client and server; use tokio::runtime::Runtime instead
- give threads a name
- better logging in network
- we now notify the send side of what happened in recv in the participant.
- works with veloren master servers
- works in singleplayer, using an actual mid.
- add `mpsc` in the whole stack, incl. tests
- speed up internal read/write with the `Bytes` crate
- use `prometheus-hyper` for metrics
- use a metrics cache
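A minimal sketch of the `spawn_blocking` replacement for the old `uvth` pool (the closure body is just an illustration):
```rust
use tokio::runtime::Runtime;

fn main() {
    let runtime = Runtime::new().expect("failed to build tokio runtime");
    // CPU-bound work goes to tokio's blocking pool instead of a separate uvth pool.
    let handle = runtime.spawn_blocking(|| {
        // heavy, synchronous work (e.g. serialization) would go here
        2 + 2
    });
    let result = runtime.block_on(handle).expect("blocking task panicked");
    assert_eq!(result, 4);
}
```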
Switch to the `tokio` and `async_channel` crates.
I wanted to use tokio only at first, but it doesn't feature Sender::close(), so I included async_channel as well.
Got rid of `futures` and now only need `futures_core` and `futures_util`.
Tokio does not support `Stream` and `StreamExt`, so for now I need to use `tokio-stream`; I think this will go into `std` in the future.
Created `b2b_close_stream_opened_sender_r`, as the shutdown procedure does not need a copy of a Sender, it just needs to stop it.
Various adjustments, e.g. for `select!`, which now requires a `&mut` for oneshots (see the sketch below).
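A minimal sketch of that `select!` adjustment, with illustrative channel names:
```rust
use tokio::sync::{mpsc, oneshot};

// When a oneshot receiver is polled repeatedly inside a loop, `select!` now
// needs `&mut shutdown_rx` so the receiver is borrowed instead of moved.
async fn run(mut shutdown_rx: oneshot::Receiver<()>, mut work_rx: mpsc::Receiver<u64>) {
    loop {
        tokio::select! {
            _ = &mut shutdown_rx => break,
            Some(item) = work_rx.recv() => {
                println!("got work item {}", item);
            }
        }
    }
}
```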
Future things to do:
- Use some better signalling than oneshot<()> in some cases.
- Use a Watch for the Prio propagation (implement it, of course)
- Use Bounded Channels in order to improve performance
- adjust test code
Get the tests working.
In order to keep the performance we switched to internal mutability and use a `Mutex` per Stream, until `Stream::send` no longer takes `&mut self`.
The old solution didn't rely on this, but needed multiple Components instead, which zest didn't like.
His reason for requesting that is that there might not be a perfect distinction in the future.
Now we need to send ServerGeneral over streams and do additional checking at various places to verify that the wrong variant is not sent.
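A minimal sketch of that internal-mutability workaround, with an illustrative wrapper type (not the actual code):
```rust
use std::sync::Mutex;

// Each Stream gets its own Mutex, so callers can send through a shared `&self`
// even though the underlying `send` still needs exclusive access.
struct StreamCell<S> {
    inner: Mutex<S>,
}

impl<S> StreamCell<S> {
    fn new(stream: S) -> Self {
        Self { inner: Mutex::new(stream) }
    }

    fn with_stream<R>(&self, f: impl FnOnce(&mut S) -> R) -> R {
        // One Mutex per Stream keeps contention low; the lock is only held
        // for the duration of a single send.
        let mut guard = self.inner.lock().expect("stream mutex poisoned");
        f(&mut guard)
    }
}
```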
- Instead we have a dedicated thread that asynchronously waits for new participants to connect and then notifies the main thread
- registration no longer sends a view distance with it.
- remove ClientMsg::Command again as it's unused
```
// This is a helper structure, containing all possible data sent over
pub enum ClientMsg {
    Initial(ClientType),
    General(ClientGeneralMsg),
    InGame(ClientInGameMsg),
    NotInGame(ClientNotInGameMsg),
    Register(ClientRegisterMsg),
    Ping(PingMsg),
}
```
Rather than having a single Stream to handle ALL data, separate it into multiple streams (sketched below):
- Ping Stream, for separate PINGs
- Register Stream, only used until the client is registered, then no longer used!
- General Stream, used for messages that can occur at any time
- NotInGame Stream, used for everything NOT in game, e.g. the Character Screen
- InGame Stream, used for all GAME data: players, terrain, entities, etc.
This version does compile and gets the client registered (with auth too), but doesn't get to the character screen yet.
This also fixes the ignored-messages problem we had, as we are no longer sending data to the register stream!
This also fixes the problem that the server had to sleep waiting for stream creation, as the server now creates the streams and the client has to sleep.
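A hedged sketch of the per-category routing once all streams exist; the `Streams` struct and its fields are illustrative assumptions, not the actual veloren-network API:
```rust
// Picks the dedicated stream for each ClientMsg variant from the enum above.
struct Streams<S> {
    ping: S,
    register: S,
    general: S,
    not_in_game: S,
    in_game: S,
}

impl<S> Streams<S> {
    fn select(&mut self, msg: &ClientMsg) -> Option<&mut S> {
        match msg {
            ClientMsg::Ping(_) => Some(&mut self.ping),
            ClientMsg::Register(_) => Some(&mut self.register),
            ClientMsg::General(_) => Some(&mut self.general),
            ClientMsg::NotInGame(_) => Some(&mut self.not_in_game),
            ClientMsg::InGame(_) => Some(&mut self.in_game),
            // Initial/handshake data is exchanged before the streams exist.
            ClientMsg::Initial(_) => None,
        }
    }
}
```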
3x - 5x depending on terrain. We can do a lot better, but this is a good start.
Also, added chunk group count to metrics. This correlates with memory
usage specifically by chunk voxel data in a much more direct way than
chonk or chunk count do, so this should provide extra useful information
(especially for our average overhead per chonk / chunk).
`stable-0.7.0 (<hash>-<datetime>)` for release versions,
and
`nightly-<date> (<hash>)` for nightly and master versions.
The reason is that many players only tell us they are running `0.x.0`, but don't tell us which day or commit they are running. So we should make master builds less confusing.
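A minimal sketch of how the two display formats could be assembled (the function and its parameters are illustrative, not the actual implementation):
```rust
/// Build the user-facing version string described above.
fn display_version(is_release: bool, semver: &str, hash: &str, datetime: &str, date: &str) -> String {
    if is_release {
        // e.g. "stable-0.7.0 (abc1234-2021-03-01-12:00)"
        format!("stable-{} ({}-{})", semver, hash, datetime)
    } else {
        // e.g. "nightly-2021-03-01 (abc1234)"
        format!("nightly-{} ({})", date, hash)
    }
}
```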
- Players online is now separated into players connected and players disconnected, and is event driven
- Metrics for ChunkGeneration: this is the server side for tracking actual generation (see the sketch below)
- Metrics for Chunk Network Requests
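A minimal sketch of what such a server-side counter could look like with the `prometheus` crate (metric name and help text are made up):
```rust
use prometheus::{IntCounter, Registry};

fn main() -> Result<(), prometheus::Error> {
    let registry = Registry::new();
    // Counts every chunk the server actually generated (illustrative name).
    let chunks_generated = IntCounter::new(
        "veloren_chunks_generated_total",
        "Number of chunks generated by the server",
    )?;
    registry.register(Box::new(chunks_generated.clone()))?;

    // Somewhere in the chunk generation path:
    chunks_generated.inc();
    Ok(())
}
```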