- the last digit of the version is now compatible: 0.6.0 will connect to 0.6.1 (sketched below)
- the TCP DATA frames no longer contain the START field, as it's not needed
- the TCP OPENSTREAM frames now contain the BANDWIDTH field
- MID is now protocol-internal
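A minimal sketch of the compatibility rule above, assuming the handshake version is a plain major/minor/patch triple; the type and method names here are illustrative, not the actual network crate API:

```rust
/// Hypothetical semantic-version triple used during the handshake.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ProtocolVersion {
    major: u32,
    minor: u32,
    patch: u32,
}

impl ProtocolVersion {
    /// Peers are compatible when major and minor match; the last digit
    /// (patch) is ignored, so 0.6.0 will connect to 0.6.1.
    fn is_compatible(self, other: ProtocolVersion) -> bool {
        self.major == other.major && self.minor == other.minor
    }
}

fn main() {
    let local = ProtocolVersion { major: 0, minor: 6, patch: 0 };
    let remote = ProtocolVersion { major: 0, minor: 6, patch: 1 };
    assert!(local.is_compatible(remote));
    println!("{:?} can talk to {:?}", local, remote);
}
```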
Update network
- update API with Bandwidth
Update veloren
- introduce a better runtime and make IO-bound things `async` (see the runtime sketch below)
- Remove `uvth` and instead use `tokio::runtime::Runtime::spawn_blocking`
- remove the `futures` executor from client and server; use `tokio::runtime::Runtime` instead
- give threads a name
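A sketch of what the runtime setup described above might look like with tokio 1.x (multi-thread features enabled); the thread name and the tasks are placeholders, not Veloren's actual configuration:

```rust
use tokio::runtime::Builder;

fn main() {
    // Build a multi-threaded runtime with named worker threads
    // (replaces the former `uvth` thread pool).
    let runtime = Builder::new_multi_thread()
        .enable_all()
        .thread_name("veloren-worker") // hypothetical name; gives threads a name
        .build()
        .expect("failed to build tokio runtime");

    runtime.block_on(async {
        // IO-bound work runs as plain async tasks ...
        let io_task = tokio::spawn(async { /* e.g. network I/O */ 1 });

        // ... while blocking/CPU-heavy work goes through `spawn_blocking`
        // instead of a separate `uvth` pool.
        let blocking_task = tokio::task::spawn_blocking(|| {
            // e.g. expensive file access
            2
        });

        let (a, b) = (io_task.await.unwrap(), blocking_task.await.unwrap());
        assert_eq!(a + b, 3);
    });
}
```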
This stores the components as children of the item that contains them via the DB's `parent_container_item_id` feature, and ensures that items are loaded in the correct order (parents before their components) with a breadth-first search.
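A minimal sketch of the breadth-first loading idea, with a hypothetical, simplified row type; this is not the actual persistence schema or loader:

```rust
use std::collections::{HashMap, VecDeque};

/// Hypothetical, simplified view of an item row as loaded from the DB.
struct ItemRow {
    item_id: i64,
    /// Points at the item (or container) that holds this row; components
    /// of a modular item point at the item they are part of.
    parent_container_item_id: i64,
}

/// Visit items breadth-first so every item is seen before the
/// components stored "inside" it.
fn load_order(rows: &[ItemRow], root_id: i64) -> Vec<i64> {
    // Index children by their parent id.
    let mut children: HashMap<i64, Vec<i64>> = HashMap::new();
    for row in rows {
        children
            .entry(row.parent_container_item_id)
            .or_default()
            .push(row.item_id);
    }

    let mut order = Vec::new();
    let mut queue = VecDeque::new();
    queue.push_back(root_id);
    while let Some(id) = queue.pop_front() {
        order.push(id);
        if let Some(kids) = children.get(&id) {
            queue.extend(kids.iter().copied());
        }
    }
    order
}

fn main() {
    // A container (1) holding a modular weapon (2) with two components (3, 4).
    let rows = vec![
        ItemRow { item_id: 2, parent_container_item_id: 1 },
        ItemRow { item_id: 3, parent_container_item_id: 2 },
        ItemRow { item_id: 4, parent_container_item_id: 2 },
    ];
    assert_eq!(load_order(&rows, 1), vec![1, 2, 3, 4]);
}
```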
Squashed fixes:
- Fix some constraint violations that occurred when swapping inventory items.
- Comment out recipes for modular weapons.
- Make `update_item_at_slot_using_persistence_key` and `is_modular` more idiomatic.
- Add changelog entry.
- Document `defer_foreign_keys` usage.
Removed weapons from npc_weapons that were duplicated in both weapons and npc_weapons.
Added a migration to convert npc_weapons that ended up in anyone's inventory to the weapon version.
Consolidated the debug boost and possess sticks into a single debug_stick, and renamed the admin sword (named cultist_purp_2h_boss-0) to admin_sword.
- completely switch to `Bytes`, even in the API; speeds up TCP by a factor of 2
- improve benchmarks
- speed up mpsc metrics
- gracefully handle shutdown by interpreting `Ok(0)` as the tokio TcpStream being closed (see the read-loop sketch after this list).
- fix a hot loop in participants by adding `Some(n)`, fixing endless hanging.
- fix a closing bug by closing streams after `recv_mgr` is shut down, even if no shutdown is triggered locally.
- fix prometheus
- no longer throw when a `Stream` is dropped while the participant still receives a msg for it.
- fix the bandwidth handling; TCP network send speed is up to 1.5 GiB/s while recv is 150 MiB/s
- add documentation
- temporarily require `rt-multi-threaded` for tokio in the client, so `cargo check` does not fail
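For the `Ok(0)` shutdown handling mentioned above, a generic tokio read loop looks roughly like this (a sketch, not the actual `veloren_network` code; `recv_loop` and the buffer size are made up):

```rust
use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;

/// Read from the socket until the peer closes the connection.
/// `Ok(0)` means the TcpStream was closed on the other side, so the
/// loop ends gracefully instead of spinning on an empty read.
async fn recv_loop(mut stream: TcpStream) -> std::io::Result<()> {
    let mut buf = vec![0u8; 8192];
    loop {
        match stream.read(&mut buf).await {
            // `Ok(0)`: the remote side shut the connection down.
            Ok(0) => break,
            // `Ok(n)`: hand the `n` freshly read bytes to the frame parser.
            Ok(n) => {
                let _frame_bytes = &buf[..n];
                // ... feed into the protocol layer here ...
            }
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```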
This is probably stable; I tested it for over an hour.
After that, some optimisations in priomgr,
and a proper bandwidth implementation.
Speed is up to 2 GB/s write and 150 MB/s recv on a single core.
Also added documentation.
- better logging in network
- we now notify the send side about what happened on the recv side in the participant.
- works with veloren master servers
- works in singleplayer, using an actual MID.
- add `mpsc` to the whole stack, incl. tests
- speed up internal read/write with the `Bytes` crate (see the sketch after this list)
- use `prometheus-hyper` for metrics
- use a metrics cache
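To illustrate why switching to the `bytes` crate speeds up internal read/write: clones and slices of `Bytes` are cheap reference-counted views rather than copies. A small, self-contained example (the message layout is made up):

```rust
use bytes::{BufMut, Bytes, BytesMut};

fn main() {
    // Build an outgoing buffer once ...
    let mut msg = BytesMut::with_capacity(1024);
    msg.put_u64(42); // e.g. a message id (made-up layout)
    msg.put_slice(b"hello world"); // payload

    // ... then freeze it into `Bytes`: clones and slices are cheap,
    // reference-counted views instead of fresh allocations/copies.
    let frame: Bytes = msg.freeze();
    let for_tcp = frame.clone(); // no copy of the payload
    let payload = frame.slice(8..); // zero-copy view past the 8-byte id

    assert_eq!(for_tcp.len(), 8 + 11);
    assert_eq!(&payload[..], b"hello world");
}
```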
Switch to the `tokio` and `async_channel` crates.
I wanted to use tokio alone, but it doesn't feature `Sender::close()`, so I included `async_channel`.
Got rid of `futures` and only need `futures_core` and `futures_util`.
Tokio does not provide `Stream` and `StreamExt`, so for now I need to use `tokio-stream`; I think this will land in `std` in the future.
Created `b2b_close_stream_opened_sender_r`, as the shutdown procedure does not need a copy of a Sender, it just needs to stop it.
Various adjustments, e.g. for `select!`, which now requires a `&mut` for oneshots (sketched below).
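The `select!`-with-`&mut`-oneshot adjustment looks roughly like this in tokio 1.x; the channel names and the shutdown flow are illustrative only:

```rust
use tokio::sync::{mpsc, oneshot};

#[tokio::main]
async fn main() {
    let (msg_tx, mut msg_rx) = mpsc::unbounded_channel::<u32>();
    let (shutdown_tx, mut shutdown_rx) = oneshot::channel::<()>();

    tokio::spawn(async move {
        msg_tx.send(1).unwrap();
        msg_tx.send(2).unwrap();
        // Signal shutdown once everything is sent.
        let _ = shutdown_tx.send(());
    });

    loop {
        tokio::select! {
            // The oneshot receiver has to be polled through `&mut`,
            // otherwise the loop body would move it on the first iteration.
            _ = &mut shutdown_rx => break,
            Some(msg) = msg_rx.recv() => println!("got {msg}"),
        }
    }
}
```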
Future things to do:
- Use some better signalling than `oneshot<()>` in some cases.
- Use a watch channel for the prio propagation (and implement it, of course); see the sketch below.
- Use bounded channels in order to improve performance.
- adjust the test code
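The watch idea could look like the following with `tokio::sync::watch`; this is only an illustration of the primitive with made-up names, not existing prio-manager code:

```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    // The prio manager would publish its latest bandwidth figure here;
    // consumers only care about the most recent value, which is exactly
    // what a watch channel provides (no backlog to drain).
    let (tx, mut rx) = watch::channel(0u64);

    let consumer = tokio::spawn(async move {
        // Wake up whenever a newer value has been published; only the
        // latest value is guaranteed to be observed. The loop ends once
        // the sender is dropped and nothing new is pending.
        while rx.changed().await.is_ok() {
            println!("bandwidth budget is now {}", *rx.borrow());
        }
    });

    for bandwidth in [100u64, 500, 1000] {
        tx.send(bandwidth).expect("no receivers left");
    }
    drop(tx);

    consumer.await.unwrap();
}
```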
Bring the tests to a working state.
- Separate `invite` machinery from `group_manip` into its own thing (includes renaming `group_invite` to `invite` where applicable).
- Move some invite/trade machinery to `ControlEvent`.
- Make `TradePhase` a proper enum instead of a bunch of bools (see the sketch at the end of this list).
- Make `TradeId` a proper newtype.
- Remove trades from `Trades` on accept (previously was only on decline).
- Typo fixes/misc cleanup.
- Add bullet point for trading to the changelog.
- Fix item swapping edge case
- Document more assumptions/edge cases
- fmt and clippy
- s/ServerGeneral::GroupInvite/ServerGeneral::Invite/
- Use `Client::current` in `Client::is_dead`
- Accept/decline buttons that submit the proper messages
- A phase2 screen that renders the (item, quantity) pairs as text
- More checks in the trade state machine server-side.
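For context on the `TradePhase`/`TradeId` items above, a sketch of the general shape; the variants and fields are illustrative, not the actual definitions in the codebase:

```rust
/// Newtype instead of a bare integer, so trade ids can't be mixed up
/// with other id types at compile time (illustrative, not the real type).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct TradeId(u64);

/// One enum instead of a bunch of bools: the phases are mutually
/// exclusive and `match` forces every one of them to be handled.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum TradePhase {
    /// Both parties are still editing their offers.
    Mutate,
    /// Offers are locked in; waiting for both sides to confirm.
    Review,
    /// Both sides accepted; the items can be exchanged.
    Complete,
}

fn main() {
    let id = TradeId(7);
    let phase = TradePhase::Review;
    match phase {
        TradePhase::Mutate => println!("trade {id:?}: still editing"),
        TradePhase::Review => println!("trade {id:?}: awaiting confirmation"),
        TradePhase::Complete => println!("trade {id:?}: done"),
    }
}
```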