- Handle errors during character load by notifying the client and returning it to character select with the error; clean up client and server state
- Add player_uuid check when loading character data.
- Completely removed both `log` and `pretty_env_logger` and replaced
  them with `tracing` and `tracing_subscriber` where necessary.
- Converted all `log::info!(...)` et al. statements to use the
  shorthand macros, e.g. `info!`. This was mostly to make renaming easier.
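  For illustration, a minimal before/after of that conversion (the call site here is made up):

  ```rust
  // Before: fully qualified log macro.
  // log::info!("player {} connected", name);

  // After: import tracing's macro once, then use the shorthand.
  use tracing::info;

  fn on_connect(name: &str) {
      info!("player {} connected", name);
  }
  ```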
- authc no longer uses reqwest
- `image` now only supports PNG (see the Cargo.toml sketch below)
- replaced `rouille` with `tiny_http`
- trimmed several other dependencies
- ran `cargo upgrade`
- the following improvement was measured on an R7 1700X:
  before:
  - cargo build: 3076.73s user / 4:45 total / 589 dependencies
  - cargo test: 6118.38s user / 7:30 total / 959 dependencies
  after:
  - cargo build: 2680.54s user / 4:05 total / 480 dependencies
  - cargo test: 5351.81s user / 7:04 total / 791 dependencies
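  A hypothetical sketch of that kind of dependency trim in Cargo.toml (the version number is a placeholder):

  ```toml
  # Build `image` with only the PNG codec instead of every format.
  image = { version = "0.23", default-features = false, features = ["png"] }
  ```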
- added xMAC94x to CODEOWNERS for `Cargo.toml`; he will protect it from now on and hit people with evil looks ;)
- this bug was initially called the imbris bug, as it happened on his runners and I couldn't reproduce it locally at first :)
- During a Handshake a separate mpsc::Channel was created for (Cid, Frame) transport.
  However, the protocol could already catch non-handshake data and push it into this
  mpsc::Channel. Then this channel got dropped and a fresh one was created for the
  network::Channel. These dropped Frames are of course a BUG!
  I tried multiple things to solve this:
  - don't create a new mpsc::Channel, but instead bind it to the Protocol itself and always use that one.
    This would work in theory, but on the bParticipant side we use a single mpsc::Channel<(Cid, Frame)>
    to handle ALL the network::Channels.
    If every Protocol had its own, and with that every network::Channel had its own, it would no longer work out.
    Bad idea...
  - using the first method but creating the mpsc::Channel inside the scheduler instead of the protocol doesn't work either,
    as the scheduler doesn't know the remote_pid yet
  - I didn't want a hack where the protocol only listens for 2 messages and then stops no matter what
  So I switched over to the simple method (sketched below):
  - do everything like before with 2 mpsc::Channels
  - after the handshake, close the receiver and collect all remaining (Cid, Frame) combinations
  - when starting the channel, re-apply them to the new sender/listener combination
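  A minimal sketch of that hand-over, assuming tokio's mpsc and placeholder `Cid`/`Frame` types
  (the function name `hand_over` is hypothetical; the actual channel types may differ):

  ```rust
  use tokio::sync::mpsc;

  // Placeholder types standing in for the real Cid/Frame.
  type Cid = u64;
  struct Frame;

  /// After the handshake, close the temporary receiver and replay any frames
  /// that arrived early into the channel that outlives the handshake.
  async fn hand_over(
      mut handshake_r: mpsc::UnboundedReceiver<(Cid, Frame)>,
      channel_s: mpsc::UnboundedSender<(Cid, Frame)>,
  ) {
      // close() stops new sends but still lets us drain what is already buffered.
      handshake_r.close();
      while let Some((cid, frame)) = handshake_r.recv().await {
          // Re-apply each buffered frame so none are dropped.
          let _ = channel_s.send((cid, frame));
      }
  }
  ```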
- added tracing
- switched the Protocol RwLock to a Mutex, as there is only ever one user, so the read/write distinction buys nothing
- Additionally changed the layout and introduced the `c2w_frame_s` and `w2s_cid_frame_s` naming scheme
- Fixed a bug in the scheduler which would cause a DEADLOCK if the handshake failed
- fixed a bug in `api_send_send_main`: we need to store the `stream_p`, otherwise it's immediately closed and a `stream_a.send()` isn't guaranteed
- added an extra test to verify that a sent message is received even if the Stream is already closed
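  An analogous sketch of that guarantee, using tokio's mpsc rather than the actual Stream API:

  ```rust
  #[tokio::test]
  async fn message_survives_close() {
      use tokio::sync::mpsc;

      let (s, mut r) = mpsc::unbounded_channel();
      // Send first, then drop the sender (analogous to the Stream closing).
      s.send("hello").unwrap();
      drop(s);

      // The buffered message must still arrive before the channel reports closed.
      assert_eq!(r.recv().await, Some("hello"));
      assert_eq!(r.recv().await, None);
  }
  ```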
- changed OutGoing to Outgoing
- fixed a bug where `metrics.tick()` was never called
- removed 2 unused nightly features and added `deny_code`
- Persist all of the data associated with a character within a single transaction, rather than individually as we were
  planning to do with stats/inv/etc. This removes the potential for DB locking when we deal with each individually, and we
  should have plenty of room for additional writes within the transaction's big combined updates.
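  A hypothetical sketch of that single-transaction shape, assuming a diesel 1.x `SqliteConnection`
  (the helper name and the commented-out table writes are illustrative, not the actual schema):

  ```rust
  use diesel::prelude::*;

  /// Illustrative only: commit every character-associated write together,
  /// so no table is left half-updated and per-write locking is avoided.
  fn save_character(conn: &SqliteConnection) -> diesel::QueryResult<()> {
      conn.transaction(|| {
          // diesel::update(stats::table)...execute(conn)?;
          // diesel::update(inventory::table)...execute(conn)?;
          // Additional persisted bits can slot in here later.
          Ok(())
      })
  }
  ```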
- Add a timer to the stats persistence system and change the frequency
  at which it runs to 10s
- Separate the loading of character data for the character list during
  selection from the full data we grab during state creation. Ideally
  additional persisted bits can get returned at the same point and added
  to the ECS within the same block.
- Update client code to use persisted stats
- Add a system for stats persistence
- Add a basic scheduler to control the duration between executions of
  persistence systems (see the sketch below)
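  A minimal sketch of such a scheduler, assuming std's `Instant`
  (`PersistenceScheduler` and `should_run` are hypothetical names):

  ```rust
  use std::time::{Duration, Instant};

  /// Gates how often a persistence system actually runs (e.g. every 10s),
  /// assuming the ECS ticks far more often than we want to hit the database.
  struct PersistenceScheduler {
      every: Duration,
      last_run: Instant,
  }

  impl PersistenceScheduler {
      fn every(every: Duration) -> Self {
          Self { every, last_run: Instant::now() }
      }

      /// Returns true at most once per `every`; callers skip the work otherwise.
      fn should_run(&mut self) -> bool {
          if self.last_run.elapsed() >= self.every {
              self.last_run = Instant::now();
              true
          } else {
              false
          }
      }
  }

  // Usage inside a per-tick system:
  // if scheduler.should_run() { persist_stats(/* ... */); }
  ```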
- Make the character screen load with an empty character list from the server, and send an event to the server for character creation with its data, but without yet saving it to the DB.
- Working but messy character saving to DB
- Add the character_data to the client, rather than keeping it in the GlobalState.