Included in the initial implementation is an entity browser which lists all entities in the client ECS, an entity component viewer which shows selected components (including character state information) belonging to the chosen entity, and a simple frame time graph.
This MR also extracts the animation hot-reloading code and reuses it for egui, allowing the egui interface to be hot-reloaded so the UI can be developed rapidly with realtime feedback on save, as is already the case with animations. This is gated behind the `hot-egui` feature, which is not enabled by default due to the extra startup time it adds.
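As a rough illustration only (not the MR's actual code), gating the hot-reload machinery behind a cargo feature could look like the sketch below; the module and function names are placeholders:

```rust
// Hypothetical sketch of gating egui hot-reloading behind a cargo feature.
// The module and function names here are illustrative, not the MR's actual code.
#[cfg(feature = "hot-egui")]
mod hot_reload {
    /// Watch the egui UI source and reload it when files change.
    pub fn watch_and_reload() {
        // ... file watching + dynamic reload would live here ...
    }
}

pub fn init_ui() {
    // Only pay the extra startup cost when the feature is enabled.
    #[cfg(feature = "hot-egui")]
    hot_reload::watch_and_reload();
}
```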
Changes:
- Make it use red bags with 18 slots (2 rows * 9)
- Sort items by quality
- Stack ingredients, food, potions
- Move coins to the ingredients bag and put it first
- Filter unconsumed goods (the case where you saw 16 rugged shirts)
Refactoring:
- Split bag creation to separate functions
- Use `serial_test` because the tests can't run in parallel, as both of them
access the filesystem.
- Take only the filename and use a hardcoded `assets/tweak` path to keep it simple and
support .gitignore
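A minimal sketch of how `serial_test` keeps such filesystem-touching tests from running in parallel; the test names and file contents below are placeholders:

```rust
// Sketch: serial_test's #[serial] attribute makes these tests run one after
// another instead of in parallel, since both touch the same files on disk.
use serial_test::serial;

#[test]
#[serial]
fn writes_tweak_file() {
    std::fs::create_dir_all("assets/tweak").unwrap();
    std::fs::write("assets/tweak/example.ron", "(value: 1)").unwrap();
}

#[test]
#[serial]
fn reads_tweak_file() {
    let _contents = std::fs::read_to_string("assets/tweak/example.ron").unwrap();
}
```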
Adding a tty makes sure `docker attach` won't be accidentally detached
by Ctrl-C (there are better ways of doing this, but this one works
for now).
shell-words more closely emulates Bash's tokenizer rules (but without
doing things like environment variable expansion), which allows us to use
multiline strings as reasons, etc. Unfortunately, entering newlines
still won't work the way we've written things, since shell-words does not
currently give enough information to incrementally build up a valid
string; it just reports a tokenizing error. Maybe in the future
we can fix that.
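For reference, a minimal sketch of the kind of tokenizing `shell-words` does; the command string below is made up:

```rust
// Sketch: shell_words splits a command line with Bash-like quoting rules,
// so a quoted multi-word (or multiline) reason stays a single argument.
// The example command is hypothetical.
fn main() {
    let input = r#"ban some_player "multi word ban reason""#;
    match shell_words::split(input) {
        Ok(args) => {
            // args == ["ban", "some_player", "multi word ban reason"]
            println!("{:?}", args);
        }
        // An unterminated quote only yields a generic parse error, which is
        // why incrementally entering newlines doesn't work yet.
        Err(e) => println!("tokenizing error: {}", e),
    }
}
```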
Server:
Provide a certificate file and key file via the settings. When provided, the server will listen on both TCP and QUIC; if not provided, it will be TCP only.
The certificate must be known to the client, so you might run into problems with self-signed certificates.
```ron
quic_files: Some((
cert: "/home/user/veloren_cert.pem",
key: "/home/user/veloren_key.key",
)),
```
Client:
Activate the voxygen setting `use_quic: true` to try to connect to the QUIC backend of a server.
The security model has been updated to reflect this change (for example,
moderators cannot revert a ban by an administrator). Ban history is
also now recorded in the ban file, and much more information about the
ban is stored (whitelists and administrators also have extra
information).
To support the new information without losing existing data,
this commit also introduces a new migration path for editable settings
(both from legacy to the new format, and between versions). Examples
of how to do this correctly, and migrate to new versions of a settings
file, are in the settings/ subdirectory.
As part of this effort, editable settings have been revamped to
guarantee atomic saves (due to the increased amount of information in
each file), some latent bugs in networking were fixed, and server-cli
has been updated to go through StructOpt for both calls through TUI
and argv, greatly simplifying parsing logic.
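A rough sketch of what routing both argv and TUI input through StructOpt can look like; the `Command` variants and helper below are illustrative, not server-cli's actual ones:

```rust
// Sketch: one StructOpt definition parses both real argv and lines typed
// into the TUI, so there is only a single parsing path.
// The command set here is hypothetical.
use structopt::StructOpt;

#[derive(Debug, StructOpt)]
enum Command {
    /// Shut the server down.
    Shutdown,
    /// Ban a player by name.
    Ban { player: String },
}

fn parse_tui_line(line: &str) -> Result<Command, structopt::clap::Error> {
    // Prepend a dummy binary name so clap sees the same shape as argv.
    let words = shell_words::split(line).unwrap_or_default();
    Command::from_iter_safe(std::iter::once("server-cli".to_string()).chain(words))
}

fn main() {
    // argv path (simulated here to keep the example self-contained;
    // normally this would be `Command::from_args()`).
    let cmd = Command::from_iter(["server-cli", "shutdown"]);
    println!("{:?}", cmd);
    // TUI path
    println!("{:?}", parse_tui_line("ban some_player"));
}
```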
Instead, just use the same threadpool for everything.
This helps with debugging problems with GDB.
`threadpool.install()` is used so that the pool is also used when `into_par_iter()` is called.
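A minimal sketch of the pattern, assuming a rayon `ThreadPool`; the thread names and workload are illustrative:

```rust
// Sketch: one shared rayon pool; `install` makes any rayon work spawned
// inside the closure (including `into_par_iter`) run on this pool.
use rayon::prelude::*;

fn main() {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(4)
        .thread_name(|i| format!("worker-{}", i)) // named threads help in GDB
        .build()
        .unwrap();

    let squares: Vec<u32> = pool.install(|| {
        (0..1_000u32).into_par_iter().map(|n| n * n).collect()
    });
    assert_eq!(squares[3], 9);
}
```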
- Add metrics for which branch of the compression heuristic was taken.
- Reduce the threshold for the heuristic.
- Deduplicate code for dealing with lazy messages.
- Make jpeg dependency only scoped to the compression benchmark.
- Remove commented code.
- locally we open a stream, our local Drain is sending OpenStream
- remote Sink will know this and notify remote Drain
- remote side sends a message
- the local Sink does not know about the Stream, as there is no (and CANNOT be a) way to notify the local Sink from the local Drain (it could introduce race conditions).
One of the possible solutions was that the remote Drain would copy the OpenStream msg onto the Quic::stream before the first data is sent. This would work but is complicated.
Instead we now just mark such streams as "potentially open" and listen for the first DataHeader to get its SID.
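A rough sketch of that "potentially open" bookkeeping; the types (`Sid`, `StreamState`, `Sink`) are made up stand-ins for the real protocol ones:

```rust
// Sketch only: the real protocol types differ. The idea is that the receiving
// Sink tracks streams it has not yet seen an OpenStream for and opens them
// lazily when the first DataHeader carrying that stream's SID arrives.
use std::collections::HashMap;

type Sid = u64; // stream id carried in each DataHeader

#[derive(Debug)]
enum StreamState {
    Open,
    PotentiallyOpen, // data arrived before we learned about the OpenStream
}

#[derive(Default)]
struct Sink {
    streams: HashMap<Sid, StreamState>,
}

impl Sink {
    fn on_data_header(&mut self, sid: Sid) {
        // First data for an unknown SID: treat the stream as potentially open
        // instead of requiring the Drain to notify us (which could race).
        self.streams.entry(sid).or_insert(StreamState::PotentiallyOpen);
    }

    fn on_open_stream(&mut self, sid: Sid) {
        // Upgrade (or create) the entry once the open is actually observed.
        self.streams.insert(sid, StreamState::Open);
    }
}

fn main() {
    let mut sink = Sink::default();
    sink.on_data_header(7); // data seen before OpenStream
    sink.on_open_stream(7);
    println!("{:?}", sink.streams.get(&7)); // Some(Open)
}
```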
Add support for unreliable messages in the QUIC protocol, plus benchmarks.
* Added "migration of migrations" to transfer the data from the __diesel_schema_migrations table to the refinery_schema_history table
* Removed all down migrations as refinery does not support down migrations
* Changed all diesel up migrations to refinery naming format
* Added --sql-log-mode parameter to veloren-server-cli to allow SQL tracing and profiling
* Added /disconnect_all_players admin command
* Added disconnectall CLI command
* Fixes for several potential persistence-related race conditions
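For context, a minimal sketch of how refinery-style embedded migrations are typically wired up; the migrations path, database filename, and use of rusqlite here are assumptions, not necessarily what this MR does:

```rust
// Sketch: refinery embeds SQL files named like `V1__create_tables.sql` at
// compile time and records applied ones in refinery_schema_history.
use rusqlite::Connection;

mod embedded {
    refinery::embed_migrations!("./migrations");
}

fn run_migrations(conn: &mut Connection) -> Result<(), Box<dyn std::error::Error>> {
    // Applies all pending migrations in version order.
    embedded::migrations::runner().run(conn)?;
    Ok(())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut conn = Connection::open("veloren.db")?;
    run_migrations(&mut conn)?;
    Ok(())
}
```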
remove outdated economic simulation code
remove old values, document
add natural resources to economy
Remove NaturalResources from Place (now in Economy)
find closest site to each chunk
implement natural resources (the distance scale is wrong)
cargo fmt
working distance calculation
this collection of natural resources seems to make sense, too much Wheat though
use natural resources and controlled area to replenish goods
increase the amount of chunks controlled by one guard to 50
add new professions and goods to the list
implement multiple products per worker
remove the old code and rename the new code to the previous name
correctly determine which goods guards will give you access to
correctly estimate the amount of natural resources controlled
adapt to new server API
instrument tooltips
Now I just need to figure out how to store a (reference to) a closure
closures for tooltip content generation
pass site/cave id to the client
Add economic information to the client structure
(not yet exchanged with the server)
Send SiteId to the client, prepare messages for economy request
Make client::sites a HashMap
Specialize the Crafter into Brewer, Bladesmith and Blacksmith
working server request for economic info from within tooltip
fully operational economic tooltips
I need to fix the id clash between caves and towns though
fix overlapping ids between caves and sites
display stock amount
correctly handle invalid (cave) ids in the request
some initial balancing, turn off most info logging
less intrusive way of implementing the dynamic tool tips in map
further tooltip cleanup
further cleanup, dynamic tooltip not fully working as intended
correctly working tooltip visibility logic
cleanup, display labor value
separate economy info request in a separate translation unit
display values as well
nicer display format for economy
add last_exports and coin to the new economy
do not allocate natural resources to Dungeons (making towns so much larger)
balancing attempt
print town size statistics
cargo fmt (dead code)
resource tweaks, csv debugging
output a more interesting town (and then all sites)
fix the labor value logic (now we have meaningful prices)
load professions from ron (WIP)
use assets manager in economy
loading professions works
use professions from ron file
fix Labor debug logic
count chunks per type separately
(preparing for better resource control)
better structured resource data
traders, more professions (WIP)
fix exception when starting the simulation
fix replenish function
TODO:
- use area_ratio for resource usage (chunks should be added to stock, ratio on usage?)
- fix trading
documentation clean up
fix merge artifact
Revise trader mechanic
start Coin with a reasonable default
remove the outdated economy code
preserve documentation from removed old structure
output neighboring sites (preparation)
pass list of neighbors to economy
add trade structures
trading stub
Description of purpose by zesterer on Discord
remember prices (needed for planning)
avoid growing the order vector unboundedly
into_iter doesn't clear the Vec, so clear it manually
use drain to process Vecs, avoid clone
fix the test server
implement a test stub (I need to get it faster than 30 seconds to be more useful)
enable info output in test
debug missing and extra goods
use the same logging extension as water, activate feature
update dependencies
determine good prices, good exchange goods
a working set of revisions
a cozy world with which economy tests run in 2s
first order planning version
fun with package version
buy according to value/priority, not missing amount
introduce min price constant, fix order priority
in depth debugging
with a correct sign the trading plans start to make sense
move the trade planning to a separate function
rename new function
reorganize code into subroutines (much cleaner)
each trading step now has its own function
cut down the number of debugging output
introduce RoadSecurity and Transportation
transport capacity bookkeeping
only plan to pay with valuable goods, you can no longer stockpile unused options
(which interestingly shows a huge impact, to be investigated)
Coin is now listed as a payment (although not used)
proper transportation estimation (although 0)
remove more leftovers uncovered by viewing the code in a merge request
use the old default values, handle non-pileable stocks directly before increasing it
(as economy is based on last year's products)
don't order the missing good multiple times
also it uses coin to buy things!
fix warnings and use the transportation from stock again
cargo fmt
prepare evaluation of trade
don't count transportation multiple times
fix merge artifact
operational trade planning
trade itself is still misleading
make clippy happy
clean up
correct labor ratio of merchants (no need to multiply with amount produced)
incomplete merchant labor_value computation
correct last commit
make economy of scale more explicit
make clippy happy (and code cleaner)
more merchant tweaks (more pop=better)
beginning of real trading code
revert the update of dependencies
remove stale comments/unused code
trading implementation complete (but untested)
something is still strange ...
fix sign in trading
another sign fix
some bugfixes and plenty of debugging code
another bug fixed, more to go
fix another invariant (rounding will lead to very small negative value)
introduce Terrain and Territory
fix merge mistakes
Only `localhost` is allowed in a release build.
When debug assertions are on, other hosts are also allowed.
This change undoes the changes to the settings, so compared to master there is no effect.
The auth server no longer allows the protocol to be specified. We enforce `https` for the auth server, so DO NOT provide an auth URL with `https://`; provide it without the scheme.
Correct is now: `auth.veloren.net`
Incorrect is: `https://auth.veloren.net`
- the last digit of the version is now compatible: 0.6.0 will connect to 0.6.1
- the TCP DATA frames no longer contain a START field, as it's not needed
- the TCP OPENSTREAM frames will now contain the BANDWIDTH field
- MID is not Protocol internal
Update network
- update API with Bandwidth
Update veloren
- introduce a better runtime and use `async` for things that are IO bound.
- Remove `uvth` and instead use `tokio::runtime::Runtime::spawn_blocking` (see the sketch after this list)
- remove futures_execute from client and server, use tokio::runtime::Runtime instead
- give threads a name
- completely switch to Bytes, even in the API; speeds up TCP by a factor of 2
- improve benchmarks
- speed up mpsc metrics
- gracefully handle shutdown by interpreting Ok(0) as a closed tokio TcpStream.
- fix hot loop in participants by adding `Some(n)` to fix endless handling.
- fix closing bug by closing streams after `recv_mgr` is shut down, even if no shutdown is triggered locally.
- fix prometheus
- no longer throw when a `Stream` is dropped while participant still receives a msg for it.
- fix the bandwidth handling; TCP network send speed is up to 1.5GiB/s while recv is 150MiB/s
- add documentation
- temporarily require rt-multi-threaded in the client for tokio, so as not to fail cargo check
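As a rough sketch of the `spawn_blocking` pattern referenced above; the runtime configuration and the "expensive" work below are placeholders:

```rust
// Sketch: CPU-heavy or blocking work is moved off the async workers onto
// tokio's dedicated blocking pool; the awaiting task just gets the result.
// The computation below is a stand-in for meshing/compression/etc.
fn main() {
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .thread_name("veloren-worker") // named threads, as the list mentions
        .enable_all()
        .build()
        .unwrap();

    runtime.block_on(async {
        let handle = tokio::task::spawn_blocking(|| {
            // pretend this is blocking work that should not stall async tasks
            (0u64..1_000_000).sum::<u64>()
        });
        let sum = handle.await.unwrap();
        println!("blocking result: {}", sum);
    });
}
```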
This is probably stable; I tested it for over 1 hour.
After that, some optimisations in priomgr
and a proper bandwidth implementation.
Speed is up to 2GB/s write and 150MB/s recv on a single core.
sync add documentation
- better logging in network
- we now notify the send side about what happened in recv in the participant.
- works with veloren master servers
- works in singleplayer, using an actual MID.
- add `mpsc` in whole stack incl tests
- speed up internal read/write with `Bytes` crate
- use `prometheus-hyper` for metrics
- use a metrics cache
- Implementing an async non-IO protocol crate
a) no tokio / no channels
b) I/O is based on the Sink/Drain abstraction (a rough trait sketch follows after this list)
c) different protocols can have a different Drain type
This allows MPSC to send its content without splitting up messages at all!
It allows UDP to have internal extra frames to take care of security.
It allows better abstraction for tests.
Allows benchmarks on the MPSC variant.
Custom handshakes allow something like the QUIC protocol easily.
- reduce the participant managers to 4: channel creations, send, recv and shutdown.
keeping the `mut data` in one manager removes the need for all RwLocks.
reducing complexity and parallel access problems
- more strategic participant shutdown: first stop sending, then wait for the remote side to notice the recv stop, then the remote side stops sending, then the local side can stop recv.
- metrics are internally abstracted to fit protocol and network layer
- in this commit the network/protocol tests work and the network tests work somewhat; veloren compiles but does not work
- handshake compatible with async_std
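A very rough sketch of a Sink/Drain style I/O abstraction as described above; all names here are illustrative rather than the crate's actual API, and the traits are written sync for brevity where the real ones are async:

```rust
// Sketch with made-up names: the protocol logic only talks to these traits,
// so TCP, MPSC, or QUIC backends just provide their own Drain/Sink types and
// the protocol code itself never touches tokio or channels directly.
trait Drain {
    type Error;
    /// Push one outgoing chunk of bytes into the transport.
    fn send(&mut self, data: Vec<u8>) -> Result<(), Self::Error>;
}

trait Sink {
    type Error;
    /// Pull the next incoming chunk of bytes from the transport.
    fn recv(&mut self) -> Result<Vec<u8>, Self::Error>;
}

// An MPSC backend can move whole messages without splitting them into frames.
struct MpscDrain {
    tx: std::sync::mpsc::Sender<Vec<u8>>,
}

impl Drain for MpscDrain {
    type Error = std::sync::mpsc::SendError<Vec<u8>>;
    fn send(&mut self, data: Vec<u8>) -> Result<(), Self::Error> {
        self.tx.send(data)
    }
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel();
    let mut drain = MpscDrain { tx };
    drain.send(b"hello".to_vec()).unwrap();
    println!("{:?}", rx.recv().unwrap());
}
```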
Switch to the `tokio` and `async_channel` crates.
I wanted to use only tokio at first, but it doesn't feature Sender::close(), thus I included async_channel.
Got rid of `futures` and only need `futures_core` and `futures_util`.
Tokio does not support `Stream` and `StreamExt`, so for now I need to use `tokio-stream`; I think this will go into `std` in the future.
Created `b2b_close_stream_opened_sender_r` as the shutdown procedure does not need a copy of a Sender, it just needs to stop it.
Various adjustments, e.g. for `select!`, which now requires a `&mut` for oneshots.
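For reference, a tiny sketch of the `close()` behaviour that motivated pulling in `async_channel`; the channel contents and the current-thread runtime used to drive the example are made up:

```rust
// Sketch: async_channel's Sender::close() lets a shutdown path stop a channel
// without needing an extra handle; buffered messages still drain, then sends
// and further recvs fail.
fn main() {
    tokio::runtime::Builder::new_current_thread()
        .build()
        .unwrap()
        .block_on(async {
            let (tx, rx) = async_channel::bounded::<u32>(8);
            tx.send(1).await.unwrap();
            // Close the channel from the sender side.
            assert!(tx.close());
            // Already-buffered messages can still be received...
            assert_eq!(rx.recv().await.unwrap(), 1);
            // ...but the channel then reports closed,
            assert!(rx.recv().await.is_err());
            // and further sends fail immediately.
            assert!(tx.send(2).await.is_err());
        });
}
```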
Future things to do:
- Use some better signalling than oneshot<()> in some cases.
- Use a Watch for the Prio propagation (and implement it, of course)
- Use Bounded Channels in order to improve performance
- adjust test code
bring tests to work
* Remove tweak feature
* Remove const-tweaker
* Update tiny_http
* Update bitvec to 0.21.0
* Downgrade euc to avoid conflict with vek 0.12.0
* Require exactly vek 0.12.0
* Update all other dependencies automatically based on these changes
* Update gilrs to latest at the request of Ada Lovegirls
* Update meshing benchmarks to new criterion API
- Would be better to remove the iterator and just collect with a loop to avoid extra allocations
- tructure
- A HashSet is probably better
- Usefull -> Useful
- I'd have thought plugin-api-derive is a better name