// veloren/client/src/lib.rs

#![deny(unsafe_code)]
#![feature(label_break_value)]
pub mod error;
// Reexports
pub use crate::error::Error;
pub use specs::{join::Join, saveload::Marker, Entity as EcsEntity, ReadStorage};
use common::{
comp,
msg::{ClientMsg, ClientState, RequestStateError, ServerError, ServerInfo, ServerMsg},
net::PostBox,
state::{State, Uid},
terrain::{block::Block, TerrainChunk, TerrainChunkSize},
vol::RectVolSize,
ChatType,
};
use hashbrown::HashMap;
use log::warn;
use std::{
net::SocketAddr,
sync::Arc,
time::{Duration, Instant},
};
use uvth::{ThreadPool, ThreadPoolBuilder};
use vek::*;
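
// Used by `handle_new_messages`: if no messages have arrived, the postbox
// reports no error, and more than this has elapsed since the last ping was
// sent, the connection is treated as dead (`Error::ServerTimeout`).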
const SERVER_TIMEOUT: Duration = Duration::from_secs(20);
pub enum Event {
Chat {
chat_type: ChatType,
message: String,
},
Disconnect,
}
pub struct Client {
client_state: ClientState,
thread_pool: ThreadPool,
pub server_info: ServerInfo,
postbox: PostBox<ClientMsg, ServerMsg>,
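    /// When the most recent `Ping` was sent to the server.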
last_server_ping: Instant,
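    /// Round-trip time of the most recent ping/pong exchange, in seconds.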
last_ping_delta: f64,
tick: u64,
state: State,
entity: EcsEntity,
view_distance: Option<u32>,
loaded_distance: Option<u32>,
pending_chunks: HashMap<Vec2<i32>, Instant>,
}
impl Client {
/// Create a new `Client`.
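    ///
    /// A minimal connection sketch; the address, port (14004 is assumed here to
    /// be the server's listen port), and view distance are illustrative:
    ///
    /// ```ignore
    /// use std::net::SocketAddr;
    ///
    /// let addr: SocketAddr = "127.0.0.1:14004".parse().unwrap();
    /// let client = Client::new(addr, Some(10)).expect("failed to connect");
    /// ```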
#[allow(dead_code)]
pub fn new<A: Into<SocketAddr>>(addr: A, view_distance: Option<u32>) -> Result<Self, Error> {
let client_state = ClientState::Connected;
let mut postbox = PostBox::to(addr)?;
// Wait for initial sync
let (state, entity, server_info) = match postbox.next_message() {
Some(ServerMsg::InitialSync {
ecs_state,
entity_uid,
server_info,
}) => {
// TODO: Voxygen should display this.
if server_info.git_hash != common::util::GIT_HASH.to_string() {
log::warn!(
"Git hash mismatch between client and server: {} vs {}",
server_info.git_hash,
common::util::GIT_HASH
);
}
let state = State::from_state_package(ecs_state);
let entity = state
.ecs()
.entity_from_uid(entity_uid)
.ok_or(Error::ServerWentMad)?;
(state, entity, server_info)
}
Some(ServerMsg::Error(ServerError::TooManyPlayers)) => {
return Err(Error::TooManyPlayers)
}
_ => return Err(Error::ServerWentMad),
};
postbox.send_message(ClientMsg::Ping);
let mut thread_pool = ThreadPoolBuilder::new()
.name("veloren-worker".into())
.build();
// We reduce the thread count by 1 to keep rendering smooth
thread_pool.set_num_threads((num_cpus::get() - 1).max(1));
Ok(Self {
client_state,
thread_pool,
server_info,
postbox,
last_server_ping: Instant::now(),
last_ping_delta: 0.0,
tick: 0,
state,
entity,
view_distance,
loaded_distance: None,
pending_chunks: HashMap::new(),
})
}
#[allow(dead_code)]
pub fn with_thread_pool(mut self, thread_pool: ThreadPool) -> Self {
self.thread_pool = thread_pool;
self
}
/// Request a state transition to `ClientState::Registered`.
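    ///
    /// A sketch of the expected call; `player` is a `comp::Player` built by the
    /// frontend, and the empty password is an assumption for servers that do
    /// not enforce authentication:
    ///
    /// ```ignore
    /// client.register(player, String::new())?;
    /// ```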
pub fn register(&mut self, player: comp::Player, password: String) -> Result<(), Error> {
self.postbox
.send_message(ClientMsg::Register { player, password });
self.client_state = ClientState::Pending;
loop {
match self.postbox.next_message() {
Some(ServerMsg::StateAnswer(Err((RequestStateError::Denied, _)))) => {
break Err(Error::InvalidAuth)
}
Some(ServerMsg::StateAnswer(Ok(ClientState::Registered))) => break Ok(()),
_ => {}
}
}
}
/// Request a state transition to `ClientState::Character`.
pub fn request_character(
&mut self,
name: String,
body: comp::Body,
main: Option<comp::item::Tool>,
) {
self.postbox
.send_message(ClientMsg::Character { name, body, main });
self.client_state = ClientState::Pending;
}
    /// Request a state transition to `ClientState::Connected` (i.e. log out of the current character).
pub fn request_logout(&mut self) {
self.postbox
.send_message(ClientMsg::RequestState(ClientState::Connected));
self.client_state = ClientState::Pending;
}
    /// Request a state transition to `ClientState::Registered` (i.e. remove the current character).
pub fn request_remove_character(&mut self) {
self.postbox
.send_message(ClientMsg::RequestState(ClientState::Registered));
self.client_state = ClientState::Pending;
}
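
    /// Set the view distance, clamped to the supported range of 1..=25 chunks,
    /// and notify the server of the change.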
pub fn set_view_distance(&mut self, view_distance: u32) {
self.view_distance = Some(view_distance.max(1).min(25));
        self.postbox
            .send_message(ClientMsg::SetViewDistance(self.view_distance.unwrap())); // Can't fail: just set to `Some` above.
    }
pub fn use_inventory_slot(&mut self, x: usize) {
self.postbox.send_message(ClientMsg::UseInventorySlot(x))
}
pub fn swap_inventory_slots(&mut self, a: usize, b: usize) {
self.postbox
.send_message(ClientMsg::SwapInventorySlots(a, b))
}
pub fn drop_inventory_slot(&mut self, x: usize) {
self.postbox.send_message(ClientMsg::DropInventorySlot(x))
}
pub fn pick_up(&mut self, entity: EcsEntity) {
if let Some(uid) = self.state.ecs().read_storage::<Uid>().get(entity).copied() {
self.postbox.send_message(ClientMsg::PickUp(uid.id()));
}
}
pub fn is_mounted(&self) -> bool {
self.state
.ecs()
.read_storage::<comp::Mounting>()
.get(self.entity)
.is_some()
}
pub fn view_distance(&self) -> Option<u32> {
self.view_distance
}
pub fn loaded_distance(&self) -> Option<u32> {
self.loaded_distance
}
pub fn current_chunk(&self) -> Option<Arc<TerrainChunk>> {
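        // Map the player's world position to a 2D chunk key by flooring each
        // horizontal component to chunk granularity; e.g. assuming a
        // `RECT_SIZE` of 32 x 32, x = 100.5 maps to chunk column 3.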
let chunk_pos = Vec2::from(
self.state
.read_storage::<comp::Pos>()
.get(self.entity)
.cloned()?
.0,
)
.map2(TerrainChunkSize::RECT_SIZE, |e: f32, sz| {
(e as u32).div_euclid(sz) as i32
});
self.state.terrain().get_key_arc(chunk_pos).cloned()
}
pub fn inventories(&self) -> ReadStorage<comp::Inventory> {
self.state.read_storage()
}
/// Send a chat message to the server.
#[allow(dead_code)]
pub fn send_chat(&mut self, msg: String) {
self.postbox.send_message(ClientMsg::chat(msg))
}
/// Remove all cached terrain
#[allow(dead_code)]
pub fn clear_terrain(&mut self) {
self.state.clear_terrain();
self.pending_chunks.clear();
}
pub fn place_block(&mut self, pos: Vec3<i32>, block: Block) {
self.postbox.send_message(ClientMsg::PlaceBlock(pos, block));
}
pub fn remove_block(&mut self, pos: Vec3<i32>) {
self.postbox.send_message(ClientMsg::BreakBlock(pos));
}
pub fn collect_block(&mut self, pos: Vec3<i32>) {
self.postbox.send_message(ClientMsg::CollectBlock(pos));
}
    /// Execute a single client tick: handle input and update the game state by the given duration.
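    ///
    /// A sketch of a frontend loop driving this method; `controller`, `dt`, and
    /// `show_chat_line` are illustrative placeholders supplied by the frontend:
    ///
    /// ```ignore
    /// for event in client.tick(controller, dt)? {
    ///     match event {
    ///         Event::Chat { message, .. } => show_chat_line(message),
    ///         Event::Disconnect => return Ok(()),
    ///     }
    /// }
    /// client.cleanup();
    /// ```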
#[allow(dead_code)]
pub fn tick(
&mut self,
controller: comp::Controller,
dt: Duration,
) -> Result<Vec<Event>, Error> {
// This tick function is the centre of the Veloren universe. Most client-side things are
// managed from here, and as such it's important that it stays organised. Please consult
// the core developers before making significant changes to this code. Here is the
// approximate order of things. Please update it as this code changes.
//
// 1) Collect input from the frontend, apply input effects to the state of the game
// 2) Handle messages from the server
// 3) Go through any events (timer-driven or otherwise) that need handling and apply them
// to the state of the game
// 4) Perform a single LocalState tick (i.e: update the world and entities in the world)
// 5) Go through the terrain update queue and apply all changes to the terrain
// 6) Sync information to the server
// 7) Finish the tick, passing actions of the main thread back to the frontend
// 1) Handle input from frontend.
// Pass character actions from frontend input to the player's entity.
if let ClientState::Character | ClientState::Dead = self.client_state {
self.state.write_component(self.entity, controller.clone());
self.postbox.send_message(ClientMsg::Controller(controller));
}
// 2) Build up a list of events for this frame, to be passed to the frontend.
let mut frontend_events = Vec::new();
// Prepare for new events
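        // Snapshot each entity's current `CharacterState` into
        // `comp::Last<CharacterState>` so that later code can detect when a
        // character's state has changed since the previous tick.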
{
let ecs = self.state.ecs_mut();
for (entity, _) in (&ecs.entities(), &ecs.read_storage::<comp::Body>()).join() {
let mut last_character_states =
ecs.write_storage::<comp::Last<comp::CharacterState>>();
if let Some(client_character_state) =
ecs.read_storage::<comp::CharacterState>().get(entity)
{
if last_character_states
.get(entity)
.map(|&l| !client_character_state.is_same_state(&l.0))
.unwrap_or(true)
{
let _ = last_character_states
.insert(entity, comp::Last(*client_character_state));
}
}
}
}
// Handle new messages from the server.
frontend_events.append(&mut self.handle_new_messages()?);
// 3) Update client local data
// 4) Tick the client's LocalState
self.state.tick(dt);
// 5) Terrain
let pos = self
.state
.read_storage::<comp::Pos>()
.get(self.entity)
.cloned();
if let (Some(pos), Some(view_distance)) = (pos, self.view_distance) {
let chunk_pos = self.state.terrain().pos_key(pos.0.map(|e| e as i32));
// Remove chunks that are too far from the player.
let mut chunks_to_remove = Vec::new();
self.state.terrain().iter().for_each(|(key, _)| {
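                // Subtracting 2 from each axis distance keeps a buffer ring of
                // chunks loaded just beyond the view distance before removal.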
                if (chunk_pos - key)
                    .map(|e: i32| (e.abs() as u32).saturating_sub(2))
                    .magnitude_squared()
                    > view_distance.pow(2)
{
chunks_to_remove.push(key);
}
});
for key in chunks_to_remove {
self.state.remove_chunk(key);
}
// Request chunks from the server.
let mut all_loaded = true;
'outer: for dist in 0..=view_distance as i32 {
// Only iterate through chunks that need to be loaded for circular vd
// The (dist - 2) explained:
// -0.5 because a chunk is visible if its corner is within the view distance
// -0.5 for being able to move to the corner of the current chunk
// -1 because chunks are not meshed if they don't have all their neighbors
// (notice also that view_distance is decreased by 1)
                // (this subtraction on vd is omitted elsewhere in order to provide a buffer layer of loaded chunks)
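                // Worked example with illustrative numbers: for view_distance = 10
                // and dist = 9, 2 * (9 - 2)^2 = 98 > (10 - 1)^2 = 81, so
                // top = round(sqrt(81 - 49)) + 1 = 7, and i scans -7..=7 rather
                // than -9..=9.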
let top = if 2 * (dist - 2).max(0).pow(2) > (view_distance - 1).pow(2) as i32 {
((view_distance - 1).pow(2) as f32 - (dist - 2).pow(2) as f32)
.sqrt()
.round() as i32
+ 1
} else {
dist
};
for i in -top..=top {
let keys = [
chunk_pos + Vec2::new(dist, i),
chunk_pos + Vec2::new(i, dist),
chunk_pos + Vec2::new(-dist, i),
chunk_pos + Vec2::new(i, -dist),
];
for key in keys.iter() {
if self.state.terrain().get_key(*key).is_none() {
if !self.pending_chunks.contains_key(key) {
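                                // Throttle: allow at most four outstanding chunk requests at a time.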
if self.pending_chunks.len() < 4 {
self.postbox
2019-06-23 19:49:15 +00:00
.send_message(ClientMsg::TerrainChunkRequest { key: *key });
self.pending_chunks.insert(*key, Instant::now());
} else {
break 'outer;
}
}
all_loaded = false;
}
}
}
if all_loaded {
self.loaded_distance = Some((dist - 1).max(0) as u32);
}
}
// If chunks are taking too long, assume they're no longer pending.
let now = Instant::now();
self.pending_chunks
.retain(|_, created| now.duration_since(*created) < Duration::from_secs(3));
}
// Send a ping to the server once every second
if Instant::now().duration_since(self.last_server_ping) > Duration::from_secs(1) {
self.postbox.send_message(ClientMsg::Ping);
self.last_server_ping = Instant::now();
}
// 6) Update the server about the player's physics attributes.
if let ClientState::Character = self.client_state {
if let (Some(pos), Some(vel), Some(ori)) = (
self.state.read_storage().get(self.entity).cloned(),
self.state.read_storage().get(self.entity).cloned(),
self.state.read_storage().get(self.entity).cloned(),
) {
self.postbox
.send_message(ClientMsg::PlayerPhysics { pos, vel, ori });
}
}
/*
// Output debug metrics
if log_enabled!(log::Level::Info) && self.tick % 600 == 0 {
let metrics = self
.state
.terrain()
.iter()
.fold(ChonkMetrics::default(), |a, (_, c)| a + c.get_metrics());
info!("{:?}", metrics);
}
*/
// 7) Finish the tick, pass control back to the frontend.
self.tick += 1;
Ok(frontend_events)
}
/// Clean up the client after a tick.
#[allow(dead_code)]
pub fn cleanup(&mut self) {
        // Clean up the local state
self.state.cleanup();
}
/// Handle new server messages.
fn handle_new_messages(&mut self) -> Result<Vec<Event>, Error> {
let mut frontend_events = Vec::new();
let new_msgs = self.postbox.new_messages();
        if !new_msgs.is_empty() {
for msg in new_msgs {
match msg {
ServerMsg::Error(e) => match e {
ServerError::TooManyPlayers => return Err(Error::ServerWentMad),
ServerError::InvalidAuth => return Err(Error::InvalidAuth),
//TODO: ServerError::InvalidAlias => return Err(Error::InvalidAlias),
},
ServerMsg::Shutdown => return Err(Error::ServerShutdown),
ServerMsg::InitialSync { .. } => return Err(Error::ServerWentMad),
ServerMsg::Ping => self.postbox.send_message(ClientMsg::Pong),
ServerMsg::Pong => {
self.last_ping_delta = Instant::now()
.duration_since(self.last_server_ping)
.as_secs_f64()
}
ServerMsg::ChatMsg { chat_type, message } => {
frontend_events.push(Event::Chat { chat_type, message })
}
                    ServerMsg::SetPlayerEntity(uid) => {
                        // TODO: Don't unwrap here!
                        self.entity = self.state.ecs().entity_from_uid(uid).unwrap()
                    }
ServerMsg::EcsSync(sync_package) => {
self.state.ecs_mut().sync_with_package(sync_package)
}
ServerMsg::EntityPos { entity, pos } => {
if let Some(entity) = self.state.ecs().entity_from_uid(entity) {
self.state.write_component(entity, pos);
}
}
ServerMsg::EntityVel { entity, vel } => {
if let Some(entity) = self.state.ecs().entity_from_uid(entity) {
self.state.write_component(entity, vel);
}
}
ServerMsg::EntityOri { entity, ori } => {
if let Some(entity) = self.state.ecs().entity_from_uid(entity) {
self.state.write_component(entity, ori);
}
}
ServerMsg::EntityCharacterState {
entity,
character_state,
} => {
if let Some(entity) = self.state.ecs().entity_from_uid(entity) {
self.state.write_component(entity, character_state);
}
}
ServerMsg::InventoryUpdate(inventory) => {
self.state.write_component(self.entity, inventory)
}
ServerMsg::TerrainChunkUpdate { key, chunk } => {
if let Ok(chunk) = chunk {
self.state.insert_chunk(key, *chunk);
}
self.pending_chunks.remove(&key);
}
ServerMsg::TerrainBlockUpdates(mut blocks) => blocks
.drain()
.for_each(|(pos, block)| self.state.set_block(pos, block)),
ServerMsg::StateAnswer(Ok(state)) => {
self.client_state = state;
}
ServerMsg::StateAnswer(Err((error, state))) => {
if error == RequestStateError::Denied {
warn!("Connection denied!");
return Err(Error::InvalidAuth);
}
warn!(
"StateAnswer: {:?}. Server thinks client is in state {:?}.",
error, state
);
}
ServerMsg::ForceState(state) => {
self.client_state = state;
}
ServerMsg::Disconnect => {
frontend_events.push(Event::Disconnect);
}
}
}
} else if let Some(err) = self.postbox.error() {
return Err(err.into());
        // We regularly ping in the tick method
} else if Instant::now().duration_since(self.last_server_ping) > SERVER_TIMEOUT {
return Err(Error::ServerTimeout);
}
Ok(frontend_events)
}
/// Get the player's entity.
#[allow(dead_code)]
pub fn entity(&self) -> EcsEntity {
self.entity
}
/// Get the client state
#[allow(dead_code)]
pub fn get_client_state(&self) -> ClientState {
self.client_state
}
/// Get the current tick number.
#[allow(dead_code)]
pub fn get_tick(&self) -> u64 {
self.tick
}
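
    /// Get the round-trip time of the most recent ping, in milliseconds.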
#[allow(dead_code)]
pub fn get_ping_ms(&self) -> f64 {
self.last_ping_delta * 1000.0
}
    /// Get a reference to the client's worker thread pool. This pool should be used for any
    /// computationally expensive operations that run outside of the main thread (threads that
    /// primarily block on I/O are exempt).
#[allow(dead_code)]
pub fn thread_pool(&self) -> &ThreadPool {
&self.thread_pool
}
/// Get a reference to the client's game state.
#[allow(dead_code)]
pub fn state(&self) -> &State {
&self.state
}
/// Get a mutable reference to the client's game state.
#[allow(dead_code)]
pub fn state_mut(&mut self) -> &mut State {
&mut self.state
}
/// Get a vector of all the players on the server
pub fn get_players(&mut self) -> Vec<comp::Player> {
// TODO: Don't clone players.
self.state
.ecs()
.read_storage::<comp::Player>()
.join()
.cloned()
.collect()
}
}
impl Drop for Client {
fn drop(&mut self) {
self.postbox.send_message(ClientMsg::Disconnect);
}
}