lagging a bit behind on terrain, which is fine. Block Places and Block Pickups are not handled in this stream, as they go through the standard route of event handling.
- Fix item swapping edge case
- Document more assumptions/edge cases
- fmt and clippy
- s/ServerGeneral::GroupInvite/ServerGeneral::Invite/
- Use `Client::current` in `Client::is_dead`
In order to keep the performance we switched to interior mutability and use a `Mutex` per Stream, until `Stream::send` no longer requires `&mut self`.
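A minimal sketch of that workaround, with a stub standing in for the network crate's `Stream`: wrapping each stream in a `Mutex` lets the client send through a shared `&self` even though `Stream::send` takes `&mut self`.
```rust
use std::sync::Mutex;

// Stand-in for the network crate's Stream, whose send takes &mut self.
struct Stream;
impl Stream {
    fn send<T>(&mut self, _msg: T) -> Result<(), ()> {
        Ok(())
    }
}

// Hypothetical client holding one Mutex-wrapped Stream per category.
struct Client {
    in_game_stream: Mutex<Stream>,
}

impl Client {
    // Note &self, not &mut self: the Mutex supplies the mutable access.
    fn send_in_game<T>(&self, msg: T) -> Result<(), ()> {
        self.in_game_stream.lock().unwrap().send(msg)
    }
}
```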
The old solution didn't rely on interior mutability, but needed multiple Components instead, which zest didn't like.
His reason for requesting the change is that there might not be a perfect distinction in the future.
Now we need to send ServerGeneral over several streams and do additional checking at various places to verify that the wrong variant is not sent.
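A sketch of what such a check could look like, with hypothetical `ServerGeneral` variants; the real enum and send sites differ:
```rust
// Hypothetical variants for illustration only.
enum ServerGeneral {
    ChatMsg(String), // belongs on the general stream
    TimeOfDay(f64),  // belongs on the in-game stream
}

// Each send site verifies it was handed the right kind of variant.
fn send_in_game(msg: ServerGeneral) -> Result<(), &'static str> {
    match msg {
        ServerGeneral::TimeOfDay(_) => Ok(()), // stream.send(msg) would go here
        _ => Err("wrong ServerGeneral variant for the in-game stream"),
    }
}
```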
- Instead we have a dedicated thread that will async-wait for new participants to connect and then notify the main thread (see the sketch after this list)
- registration no longer sends a view distance with it.
- remove ClientMsg::Command again as it's unused
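A minimal sketch of that listener thread, assuming a tokio runtime and a `connected()`-style future on the network that resolves with each new participant (names and API shape assumed, error handling elided):
```rust
use std::sync::{mpsc, Arc};

// Stand-ins for the networking crate's types (assumed API shape).
struct Participant;
struct Network;
impl Network {
    async fn connected(&self) -> Result<Participant, ()> {
        Ok(Participant) // the real future resolves once per new connection
    }
}

fn spawn_participant_listener(
    runtime: Arc<tokio::runtime::Runtime>,
    network: Network,
) -> mpsc::Receiver<Participant> {
    let (tx, rx) = mpsc::channel();
    runtime.spawn(async move {
        // Async-wait for participants, hand them to the sync main thread.
        while let Ok(participant) = network.connected().await {
            if tx.send(participant).is_err() {
                break; // main thread is gone, stop listening
            }
        }
    });
    rx // the main thread polls this with try_recv() each tick
}
```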
```
// This is a helper structure, containing all possible data sent over the network
pub enum ClientMsg {
    Initial(ClientType),
    General(ClientGeneralMsg),
    InGame(ClientInGameMsg),
    NotInGame(ClientNotInGameMsg),
    Register(ClientRegisterMsg),
    Ping(PingMsg),
}
```
Rather than having a single Stream handle ALL data, separate it into multiple streams (routing sketched after the list):
- Ping Stream, for separate PINGs
- Register Stream, only used till the client is registered, then no longer used!
- General Stream, used for msgs that can occur at any time
- NotInGame Stream, used for everything NOT ingame, e.g. Character Screen
- InGame Stream, used for all GAME data, players, terrain, entities, etc...
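Routing could then look roughly like this, building on the `ClientMsg` helper enum above (stream field names are assumptions):
```rust
// Stub Stream whose send accepts any message type, for illustration.
struct Stream;
impl Stream {
    fn send<T>(&mut self, _msg: T) -> Result<(), ()> {
        Ok(())
    }
}

// One handle per category; field names assumed.
struct ClientStreams {
    register_stream: Stream, // unused once the client is registered
    general_stream: Stream,
    in_game_stream: Stream,
    not_in_game_stream: Stream,
    ping_stream: Stream,
}

impl ClientStreams {
    // Every ClientMsg variant has exactly one stream it is allowed on.
    fn send(&mut self, msg: ClientMsg) -> Result<(), ()> {
        match msg {
            ClientMsg::Initial(m) => self.register_stream.send(m),
            ClientMsg::Register(m) => self.register_stream.send(m),
            ClientMsg::General(m) => self.general_stream.send(m),
            ClientMsg::InGame(m) => self.in_game_stream.send(m),
            ClientMsg::NotInGame(m) => self.not_in_game_stream.send(m),
            ClientMsg::Ping(m) => self.ping_stream.send(m),
        }
    }
}
```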
This version does compile and gets the client registered (with auth too), but doesn't get to the char screen yet.
This also fixes the message-ignoring problem we had, as we are no longer sending data over the register stream!
This also fixes the problem that the server had to sleep waiting for Stream creation; the Server now creates the streams and the client has to sleep instead.
We MUST handle them, and we are not allowed to act on a stream after it failed. As I am too lazy to change the structure to ensure the client is immediately dropped, I added an AtomicBool to it.
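A sketch of that flag, with the field name assumed: once any stream fails, the client is marked and every later access short-circuits until it is actually removed.
```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct Client {
    // Set when a stream failed; the client must not use its streams anymore.
    disconnected: AtomicBool,
    // ... streams, entity handle, etc.
}

impl Client {
    fn on_stream_error(&self) {
        self.disconnected.store(true, Ordering::Relaxed);
    }

    fn try_send(&self /* , msg: ... */) {
        if self.disconnected.load(Ordering::Relaxed) {
            return; // already marked for removal, don't touch the streams
        }
        // ... actual send; call self.on_stream_error() on failure
    }
}
```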
- API behavior switched!
- the `Network` no longer holds a copy of the participant, thus if the return value of `connect` (before `Arc<Participant>`, now `Participant`) gets dropped, `Participant::Drop` is triggered!
- you can close a Participant asynchronously via `Participant::disconnect()`; no more need to know the network at this point
- the `Network::Drop` will check and drop not yet disconnected Participants.
- you can compare Participants via PartialEq; if they are equal they point to the same endpoint (it checks remote_pid, see the sketch after this list)
- Note: multiple Participants are only supported in theory, they won't work yet
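The equality described above might look like this, with the struct layout assumed:
```rust
// Assumed: a Pid uniquely identifies the remote end.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Pid(u128);

struct Participant {
    remote_pid: Pid,
    // channels, streams, ...
}

impl PartialEq for Participant {
    // Two Participants are equal iff they point to the same remote endpoint.
    fn eq(&self, other: &Self) -> bool {
        self.remote_pid == other.remote_pid
    }
}
```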
Additionally:
- fix some `debug!`
- veloren-client will now drop the participant gracefully on shutdown
- rename `error!` to `debug!` when BParticipant shutdown is called twice, as that is to be expected in an async runtime
- mostly it's the message handling, now put in an async wrapper
- add some fixes to pass without warnings and make clippy happy
- some network error handling is ignored; this can be improved, but it's no blocker
this is the part which probably has fewer merge conflicts and is easy to rebase
the next commit will probably have a lot of merge conflicts
followed by a fmt commit
Adds removing extra components and deleting entities client-side when
going back to the character screen. Also simplifies ClientState by
removing the Dead variant and removing ClientMsg::StateRequest in favor
of more specific ClientMsg variants.
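A rough sketch of that client-side cleanup, assuming a specs `World` (the component choice is illustrative):
```rust
use specs::{Entity, Join, World, WorldExt};

fn cleanup_on_character_screen(world: &mut World, player: Entity) {
    // Collect first so the entities borrow ends before deletion.
    let to_delete: Vec<Entity> = {
        let entities = world.entities();
        (&entities).join().filter(|e| *e != player).collect()
    };
    for entity in to_delete {
        let _ = world.delete_entity(entity);
    }
    // Strip components that only make sense in game, e.g.:
    // world.write_storage::<Vel>().remove(player);
}
```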