* Added "migration of migrations" to transfer the data from the __diesel_schema_migrations table to the refinery_schema_history table
* Removed all down migrations as refinery does not support down migrations
* Changed all diesel up migrations to refinery naming format
* Added --sql-log-mode parameter to veloren-server-cli to allow SQL tracing and profiling
* Added /disconnect_all_players admin command
* Added disconnectall CLI command
* Fixes for several potential persistence-related race conditions
- remove overwritten logging setting in server-cli
- add server-cli command to load a random area for testing without a client
- make admin add/remove commands modify in-game players instead of requiring a reconnect
- add spans to par_join jobs
- added test command that loads up an area of the world
- add tracy-world-server alias
- set debug directives to info for logging
The auth server no longer allows the protocol to be specified. We enforce `https` for the auth server, so DO NOT provide an auth URL with the `https://` prefix; provide it without one.
Correct: `auth.veloren.net`
Incorrect: `https://auth.veloren.net`
- versions differing only in the last digit are now compatible: 0.6.0 will connect to 0.6.1
- the TCP DATA frames no longer contain a START field, as it's not needed
- the TCP OPENSTREAM frames now contain the BANDWIDTH field
- MID is now protocol-internal
Update network
- update API with Bandwidth
Update veloren
- introduce a better runtime and make IO-bound things `async`
- Remove `uvth` and instead use `tokio::runtime::Runtime::spawn_blocking`
- remove futures_execute from client and server; use `tokio::runtime::Runtime` instead
- give threads a name
- completely switch to `Bytes`, even in the API; speeds up TCP by a factor of 2
- improve benchmarks
- speed up mpsc metrics
- gracefully handle shutdown by interpreting `Ok(0)` from a tokio TCP stream read as the stream being closed (see the sketch after this list)
- fix hotloop in participants by adding `Some(n)` to fix endless hanging
- fix closing bug by closing streams after `recv_mgr` has shut down, even if no shutdown is triggered locally
- fix prometheus
- no longer throw when a `Stream` is dropped while the participant still receives a message for it
- fix the bandwidth handling; TCP network send speed is up to 1.5 GiB/s while recv is 150 MiB/s
- add documentation
- temporarily require rt-multi-threaded in the client for tokio, so that `cargo check` does not fail
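A minimal sketch of the `Ok(0)` shutdown handling mentioned above (function name and buffer size are hypothetical, not the actual network code):

```rust
use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;

// read() returning Ok(0) on a TcpStream means the peer closed the
// connection, so we leave the loop instead of spinning on empty reads.
async fn read_loop(mut stream: TcpStream) -> std::io::Result<()> {
    let mut buf = vec![0u8; 4096];
    loop {
        match stream.read(&mut buf).await {
            Ok(0) => break, // remote side closed the stream: shut down gracefully
            Ok(n) => {
                // ... hand buf[..n] to the protocol layer ...
                let _received = &buf[..n];
            }
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```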
This is probably stable; I tested it for over an hour.
After that came some optimisations in `priomgr` and a proper bandwidth implementation.
Speed is up to 2 GB/s write and 150 MB/s recv on a single core.
Sync: add documentation
- better logging in network
- the send side is now notified of what happened on the recv side of the participant
- works with veloren master servers
- works in singleplayer, using an actual MID
- add `mpsc` to the whole stack, incl. tests
- speed up internal read/write with the `Bytes` crate (see the sketch after this list)
- use `prometheus-hyper` for metrics
- use a metrics cache
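A minimal sketch of what the `Bytes` switch buys on the read/write path (the frame layout is hypothetical): slicing and cloning a `Bytes` buffer are cheap reference-count operations rather than copies.

```rust
use bytes::Bytes;

fn main() {
    let frame = Bytes::from_static(b"HEADERpayload");
    // slice() does not copy: both handles share one allocation
    // and only bump a reference count.
    let header = frame.slice(0..6);
    let payload = frame.slice(6..);
    assert_eq!(&header[..], b"HEADER");
    assert_eq!(&payload[..], b"payload");
}
```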
Switch to the `tokio` and `async_channel` crates.
I wanted to use tokio alone first, but it doesn't feature `Sender::close()`, so I included `async_channel` as well.
Got rid of `futures` and only need `futures_core` and `futures_util`.
Tokio does not support `Stream` and `StreamExt`, so for now I need to use `tokio-stream`; I think this will land in `std` in the future.
Created `b2b_close_stream_opened_sender_r`, as the shutdown procedure does not need a copy of a Sender; it just needs to stop it.
Various adjustments, e.g. for `select!`, which now requires a `&mut` for oneshots.
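A minimal sketch of the `Sender::close()` capability that motivated pulling in `async_channel` (tokio's mpsc only lets the receiver close):

```rust
#[tokio::main]
async fn main() {
    let (tx, rx) = async_channel::unbounded::<u32>();
    tx.send(1).await.unwrap();
    // Close the channel from the sending side; receivers can still
    // drain what was already queued, then recv() returns Err.
    tx.close();
    assert_eq!(rx.recv().await.unwrap(), 1);
    assert!(rx.recv().await.is_err());
}
```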
Future things to do:
- Use some better signalling than `oneshot<()>` in some cases.
- Use a `Watch` for the prio propagation (and implement it, of course).
- Use bounded channels in order to improve performance.
- Adjust the test code.
Bring the tests to a working state.
* Remove tweak feature
* Remove const-tweaker
* Update tiny_http
* Update bitvec to 0.21.0
* Downgrade euc to avoid conflict with vek 0.12.0
* Require exactly vek 0.12.0
* Update all other dependencies automatically based on these changes
* Update gilrs to latest at the request of Ada Lovegirls
* Update meshing benchmarks to new criterion API
- Before, we had a Clock that tried to average multiple ticks and predict the next sleep.
  This system was massively bugged:
  a) We know exactly how long the busy time took, so we don't need to predict anything in the first place.
  b) Prediction was totally unrealistic after a single lag spike.
  c) When a very slow tick happens, we don't benefit from 10 fast ticks afterwards.
- Instead, we now just try to keep the tick time exactly at what we expect (see the sketch below).
  If we can't manage a constant tick time because we are too slow, the systems have to catch this via `dt` anyway.
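A minimal sketch of the new approach (hypothetical names, not the actual game code): measure the busy time directly and sleep only for the remainder of the target tick.

```rust
use std::time::{Duration, Instant};

struct Clock {
    target: Duration,   // the tick duration we try to hold, e.g. 33 ms
    last_tick: Instant, // start of the current tick
}

impl Clock {
    fn new(target: Duration) -> Self {
        Self { target, last_tick: Instant::now() }
    }

    /// Sleep away whatever is left of the target tick and return the
    /// real `dt`, which the systems use to catch up after slow ticks.
    fn tick(&mut self) -> Duration {
        let busy = self.last_tick.elapsed();
        // No prediction needed: we know the busy time exactly.
        // saturating_sub yields zero when the tick already overran.
        std::thread::sleep(self.target.saturating_sub(busy));
        let dt = self.last_tick.elapsed();
        self.last_tick = Instant::now();
        dt
    }
}
```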
`stable-0.7.0 (<hash>-<datetime>)` for release versions.
And
`nightly-<date> (<hash>)` for nightly and master versions
The reason is that many players only report that they are running `0.x.0`, without telling us which day or commit they are running, so we should make master builds less confusing.
- Completely removed both `log` and `pretty_env_logger` and replaced
with `tracing` and `tracing_subscriber` where necessary.
- Converted all `log::info!(...)` et al. statements to just use the
shorthand macro i.e. `info!`. This was mostly to make renaming easier.
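A minimal before/after sketch of the conversion (the log message is illustrative):

```rust
// Before: fully-qualified `log` macros scattered through the code.
// log::info!("Starting server");

// After: import the tracing macro once and use the shorthand form.
use tracing::info;

fn main() {
    // tracing_subscriber replaces pretty_env_logger as the backend.
    tracing_subscriber::fmt::init();
    info!("Starting server");
}
```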
- Make the character screen load with an empty character list from the server, and send an event to the server for character creation with data, but without yet saving it to the DB.
- Working but messy character saving to the DB
- Add the `character_data` to the client, rather than keeping it in the GlobalState.
According to its documentation, it is specifically tailored to deliver everything needed for the Rust `backtrace` crate:
https://packages.debian.org/buster/librust-backtrace+libbacktrace-dev
The official requirement would be to install `cc` and `ar`, where `ar` is in binutils and `cc` seems to be in gcc-8 or a subpackage of it. Both would require about 100 MB of additional space for backtraces, while this package should only require an additional 8 MB.
- Clarify caffeine-fueled comment
- Be better at comparing `Instant`s, and catch the zero-seconds case to say goodbye to the user (see the sketch below)
- Switch `println!` for `info!`
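A minimal sketch of the safer `Instant` comparison (hypothetical function, not the actual CLI code): `saturating_duration_since` avoids the historical panic of `duration_since` on out-of-order instants, and the zero-seconds case triggers the goodbye.

```rust
use std::time::{Duration, Instant};

fn session_length(start: Instant, end: Instant) -> Duration {
    // Returns Duration::ZERO when `end` is not actually later
    // than `start`, instead of misbehaving on out-of-order instants.
    end.saturating_duration_since(start)
}

fn main() {
    let start = Instant::now();
    let dur = session_length(start, Instant::now());
    if dur.as_secs() == 0 {
        println!("Goodbye!"); // catch the zero-seconds case explicitly
    }
}
```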