Instead of keeping the Runtime around and manually spawning a task on `drop`, this task is now spawned at start and simply waits to be triggered.
The `drop` methods then wait for its completion, UNLESS they are in an async context; in that case they MUST NOT BLOCK (deadlock potential), so they defer the work to the Runtime and HOPE that the runtime exists long enough.
This gets rid of the weird `block_in_place`, which is only accessible with `rt-multi-threaded` and has some disadvantages.
We also won't require the runtime to be active all the time, though it is needed for a clean shutdown.
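A minimal sketch of that pattern, assuming hypothetical names (`Participant` and its channel fields are illustrations, not the actual veloren_network types): the cleanup task is spawned at construction time, `drop` only triggers it, and blocking on completion is skipped when we detect we are inside a runtime.
```
use tokio::runtime::{Handle, Runtime};
use tokio::sync::oneshot;

// Hypothetical wrapper; field and type names are assumptions for illustration.
struct Participant {
    shutdown_trigger: Option<oneshot::Sender<()>>,
    shutdown_done: Option<std::sync::mpsc::Receiver<()>>,
}

impl Participant {
    fn new(runtime: &Runtime) -> Self {
        let (trigger_tx, trigger_rx) = oneshot::channel();
        let (done_tx, done_rx) = std::sync::mpsc::channel();
        // Spawned up front: it just sleeps until `drop` fires the trigger.
        runtime.spawn(async move {
            let _ = trigger_rx.await;
            // ... run the actual async shutdown/cleanup here ...
            let _ = done_tx.send(());
        });
        Self {
            shutdown_trigger: Some(trigger_tx),
            shutdown_done: Some(done_rx),
        }
    }
}

impl Drop for Participant {
    fn drop(&mut self) {
        if let Some(trigger) = self.shutdown_trigger.take() {
            let _ = trigger.send(());
        }
        if Handle::try_current().is_ok() {
            // Inside an async context: blocking could deadlock, so we only
            // trigger the cleanup and hope the runtime lives long enough.
            return;
        }
        // Outside any runtime it is safe to block until the cleanup finished.
        if let Some(done) = self.shutdown_done.take() {
            let _ = done.recv();
        }
    }
}
```
This keeps `drop` cheap in async contexts while still giving synchronous callers a guaranteed clean shutdown.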
- the patch version is now compatible: 0.6.0 will connect to 0.6.1
- the TCP DATA frames no longer contain a START field, as it's not needed
- the TCP OPENSTREAM frames now contain a BANDWIDTH field (both frame changes are sketched below)
- MID is not protocol-internal
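A hedged sketch of the two frame changes; field names and types here are assumptions, not the actual wire format:
```
// Illustrative only; field names/types are assumed, not the real wire format.
enum Frame {
    OpenStream {
        sid: u64,       // stream id
        prio: u8,       // priority
        promises: u8,   // guarantee flags
        bandwidth: u64, // newly added BANDWIDTH field
    },
    Data {
        mid: u64,       // message id; the START offset field is gone
        data: Vec<u8>,
    },
}
```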
Update network
- update API with Bandwidth
Update veloren
- introduce a better runtime and make things that are IO-bound `async`.
- Remove `uvth` and instead use `tokio::runtime::Runtime::spawn_blocking`
- remove the futures executor from client and server; use tokio::runtime::Runtime instead
- give threads a name
- completely switch to Bytes, even in the API; speeds up TCP by a factor of 2
- improve benchmarks
- speed up mpsc metrics
- gracefully handle shutdown by interpreting Ok(0) from a tokio TcpStream read as the connection being closed (see the sketch after this list).
- fix a hot loop in participants by adding `Some(n)`, fixing endless hanging.
- fix a closing bug by closing streams after `recv_mgr` is shut down, even if no shutdown is triggered locally.
- fix prometheus
- no longer error when a `Stream` is dropped while the participant still receives a message for it.
- fix the bandwidth handling; TCP network send speed is up to 1.5 GiB/s while recv is 150 MiB/s
- add documentation
- temporarily require `rt-multi-threaded` in the client for tokio, so `cargo check` does not fail
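A minimal sketch of the Ok(0) handling mentioned in the list above (not the actual Veloren read loop): a zero-length read from a tokio `TcpStream` is treated as the peer closing the connection rather than as an error.
```
use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;

async fn read_loop(mut stream: TcpStream) -> std::io::Result<()> {
    let mut buf = vec![0u8; 64 * 1024];
    loop {
        match stream.read(&mut buf).await {
            // Ok(0): the remote side shut down its write half -> clean close.
            Ok(0) => return Ok(()),
            Ok(n) => {
                // ... hand buf[..n] to the frame parser here ...
                let _ = &buf[..n];
            }
            Err(e) => return Err(e),
        }
    }
}
```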
This is probably stable; I tested it for over an hour.
After that came some optimisations in `priomgr`
and a proper bandwidth implementation.
Speed is up to 2 GB/s write and 150 MB/s recv on a single core.
sync add documentation
Switch to the `tokio` and `async_channel` crates.
I wanted to use only tokio at first, but it doesn't feature `Sender::close()`, thus I included `async_channel`.
Got rid of `futures` and only need `futures_core` and `futures_util`.
Tokio does not provide `Stream` and `StreamExt`, so for now I need to use `tokio-stream`; I think this will land in `std` in the future.
Created `b2b_close_stream_opened_sender_r`, as the shutdown procedure does not need a copy of a Sender, it just needs to stop it.
Various adjustments, e.g. for `select!`, which now requires a `&mut` for oneshots (see the sketch below).
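A small sketch of that `select!` adjustment together with `async_channel` (the function and channel names are made up for illustration): the oneshot receiver is polled through `&mut` so it is not consumed on the first iteration, and the loop ends cleanly once the work channel is closed.
```
use tokio::sync::oneshot;

async fn manager_loop(
    mut shutdown_rx: oneshot::Receiver<()>,
    work_rx: async_channel::Receiver<u64>,
) {
    loop {
        tokio::select! {
            // Borrow instead of move, otherwise the receiver would be gone
            // after the first iteration.
            _ = &mut shutdown_rx => break,
            msg = work_rx.recv() => match msg {
                Ok(work) => { /* handle the work item */ let _ = work; }
                // Err means all senders are dropped or `close()` was called.
                Err(_) => break,
            },
        }
    }
}
```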
Future things to do:
- Use some better signalling than `oneshot<()>` in some cases.
- Use a `watch` for the prio propagation (and implement it, of course)
- Use Bounded Channels in order to improve performance
- adjust the test code
Bring the tests to a working state.
- Separate `invite` machinery from `group_manip` into its own thing (includes renaming `group_invite` to `invite` where applicable).
- Move some invite/trade machinery to `ControlEvent`.
- Make `TradePhase` a proper enum instead of a bunch of bools.
- Make `TradeId` a proper newtype (both sketched after this list).
- Remove trades from `Trades` on accept (previously was only on decline).
- Typo fixes/misc cleanup.
- Add bullet point for trading to the changelog.
- Fix item swapping edge case
- Document more assumptions/edge cases
- fmt and clippy
- s/ServerGeneral::GroupInvite/ServerGeneral::Invite/
- Use `Client::current` in `Client::is_dead`
- Accept/decline buttons that submit the proper messages
- A phase2 screen that renders the (item, quantity) pairs as text
- More checks in the trade state machine server-side.
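A hedged sketch of the `TradePhase` and `TradeId` changes from the list above; the variant names are assumptions, not necessarily the actual Veloren definitions:
```
// Illustrative only; the real variant names may differ.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum TradePhase {
    Mutate,   // both parties still editing their offers
    Review,   // offers locked, waiting for both sides to accept
    Complete, // both accepted, items are exchanged
}

// Newtype instead of passing a bare integer around.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct TradeId(pub u64);
```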
This tests the broad API of Veloren and FAILS in case the interface is changed. It contains a note that those functions are commonly used by third parties
and thus they need to be notified (a generic sketch follows).
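A generic sketch of how such an interface guard can look; the function here is a placeholder, not the real Veloren API. The test pins the signature of a commonly used item, so any change breaks compilation and forces whoever changes it to notify third parties.
```
#[cfg(test)]
mod api_stability {
    // Placeholder standing in for a commonly used public function.
    fn example_public_fn(addr: String) -> bool {
        !addr.is_empty()
    }

    #[test]
    fn third_party_facing_api_stays_stable() {
        // Coercing to an explicit fn type pins the exact signature;
        // changing it stops this test file from compiling.
        let _pinned: fn(String) -> bool = example_public_fn;
    }
}
```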
* Improved server error message for character load errors.
* Added server logging for item asset load errors during character load.
* Fixed character select error message dialog not supporting long messages.
player buffs animation
more testing debuffs
sorting and display limit fix
overhead buffs
fix
WIP buff removal function
fmt
Update buffs.rs
Now with compiling: WIP group UI buffs
positioning
Update group.rs
Update group.rs
Small optimizations.
Fixed positioning of buffs in group panel. Broke buff tooltips in group panel.
buff frame visuals
added a setting for displaying buffs at the minimap
His reason for requesting that is that there might not be a perfect distinction in the future.
Now we need to send `ServerGeneral` over streams and do additional checking at various places to verify that the wrong variant is not sent.
- Instead, we have a dedicated thread that asynchronously waits for new participants to connect and then notifies the main thread
- registry no longer sends a view distance with it.
- remove ClientMsg::Command again as it's unused
```
// This is a helper enum containing all possible data sent over the network
pub enum ClientMsg {
    Initial(ClientType),
    General(ClientGeneralMsg),
    InGame(ClientInGameMsg),
    NotInGame(ClientNotInGameMsg),
    Register(ClientRegisterMsg),
    Ping(PingMsg),
}
```
Rather than having a single Stream handle ALL data, separate it into multiple streams:
- Ping Stream, for separate pings
- Register Stream, only used until the client is registered, then no longer used!
- General Stream, used for messages that can occur at any time
- NotInGame Stream, used for everything NOT ingame, e.g. the character screen
- InGame Stream, used for all GAME data: players, terrain, entities, etc...
This version does compile and gets the client registered (with auth too), but doesn't get to the character screen yet.
This also fixes the ignored-messages problem we had, as we are not sending data to the register stream!
This also fixes the problem that the server had to sleep waiting for stream creation, as the server now creates the streams and the client has to sleep (see the sketch below).
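A hedged routing sketch reusing the helper enum above (the stream handle type `S` is a placeholder, not the veloren_network `Stream` API): each message class gets its own dedicated stream.
```
// Illustrative only; `S` stands in for whatever stream handle type is used.
struct ClientStreams<S> {
    ping: S,
    register: S,    // only used until registration completes
    general: S,
    not_in_game: S, // character screen etc.
    in_game: S,     // terrain, entities, players, ...
}

impl<S> ClientStreams<S> {
    /// Picks the stream a given message class belongs on.
    fn stream_for(&mut self, msg: &ClientMsg) -> &mut S {
        match msg {
            ClientMsg::Initial(_) | ClientMsg::Register(_) => &mut self.register,
            ClientMsg::General(_) => &mut self.general,
            ClientMsg::NotInGame(_) => &mut self.not_in_game,
            ClientMsg::InGame(_) => &mut self.in_game,
            ClientMsg::Ping(_) => &mut self.ping,
        }
    }
}
```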
Add a basic random feature to char creation
loading screen bg (part 2)
loading screen changes, random button graphics
Random appearance also picks a random NPC name