This results in a clearly noticeable latency improvement when adding or
removing sprite data and makes the game feel more responsive.
This happens, for instance, when picking up a sprite such as an apple or
a flower from the environment. We check that this fast path is not
triggered for blocks with lighting (like Velorite) or for changes that
otherwise affect meshing (such as changing from fluid to non-fluid).
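As a rough sketch of that check (the block kinds and helper methods here are illustrative stand-ins, not the engine's actual API):

```rust
/// Illustrative stand-in for the engine's block type (hypothetical).
#[derive(Clone, Copy)]
enum BlockKind {
    Air,
    Water,
    Apple,
    Velorite,
}

impl BlockKind {
    fn is_fluid(self) -> bool {
        matches!(self, BlockKind::Water)
    }
    /// Velorite glows, so swapping it changes the lighting around it.
    fn emits_light(self) -> bool {
        matches!(self, BlockKind::Velorite)
    }
}

/// Only take the cheap "patch the sprite data" path when the swap can
/// affect neither lighting nor the shape of the generated mesh.
fn can_skip_full_remesh(old: BlockKind, new: BlockKind) -> bool {
    old.is_fluid() == new.is_fluid() && !old.emits_light() && !new.emits_light()
}

fn main() {
    // Picking an apple: air replaces the sprite, the fast path applies.
    assert!(can_skip_full_remesh(BlockKind::Apple, BlockKind::Air));
    // Removing Velorite changes lighting, so a full remesh is needed.
    assert!(!can_skip_full_remesh(BlockKind::Velorite, BlockKind::Air));
}
```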
This improves the defragment operation for chonks by letting them remove
chunks at the top that match the fill block above, and chunks at the
bottom that match the fill below. This reduces the average number of
chunks per chonk from around 5.9 to around 3.4 at the origin. From
my investigations, adding something for water would probably get us a
full 50% reduction, if we could collapse intermediate chunks; block
types other than rock / air / water never appear to have full chunks of
the same block, so any additional optimization will require changes to
the subchunk compression format or changes to the actual chunks we
generate.
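As a simplified sketch of the idea (hypothetical types; the real chonk stores compressed sub-chunks, not plain bytes):

```rust
/// Hypothetical, simplified chonk: a stack of sub-chunks sandwiched
/// between two infinite fills (`below` under the stack, `above` on top).
struct Chonk {
    below: u8,                   // block filling everything below the stack
    above: u8,                   // block filling everything above the stack
    z_offset: i32,               // bottom of the stack, in sub-chunk units
    sub_chunks: Vec<Option<u8>>, // Some(b) if a chunk is uniformly block b
}

impl Chonk {
    /// Drop uniform chunks at the bottom that match `below` and at the
    /// top that match `above`; the fills alone then represent them.
    fn defragment(&mut self) {
        while self.sub_chunks.first() == Some(&Some(self.below)) {
            self.sub_chunks.remove(0);
            self.z_offset += 1; // the below-fill now reaches one chunk higher
        }
        while self.sub_chunks.last() == Some(&Some(self.above)) {
            self.sub_chunks.pop();
        }
    }
}

fn main() {
    let mut c = Chonk {
        below: 1,
        above: 0,
        z_offset: 0,
        sub_chunks: vec![Some(1), None, Some(0), Some(0)],
    };
    c.defragment();
    assert_eq!(c.sub_chunks, vec![None]); // only the mixed chunk remains
    assert_eq!(c.z_offset, 1);
}
```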
After generating a chonk, we now find the highest frequency block (in
terms of the number of groups that uniformly consist of that block) and
replace the chunk's default with that one. We also re-sort the data in
the process so that it follows the original array index order. This
improves our memory savings from 3x to almost 7x, and brings us within a
factor of 3 or so of what I hope a true average will be.
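Choosing the new default could look roughly like this (a hedged sketch; `uniform_groups` and the plain `u8` block representation are assumptions for illustration):

```rust
use std::collections::HashMap;

/// Hypothetical input: one entry per group, `Some(b)` when the group
/// consists uniformly of block `b`, `None` when the group is mixed.
fn best_default(uniform_groups: &[Option<u8>]) -> Option<u8> {
    let mut counts: HashMap<u8, usize> = HashMap::new();
    for b in uniform_groups.iter().flatten() {
        *counts.entry(*b).or_insert(0) += 1;
    }
    // The block uniformly filling the most groups becomes the new default;
    // every group of that block can then be dropped from storage entirely.
    counts.into_iter().max_by_key(|&(_, n)| n).map(|(b, _)| b)
}

fn main() {
    // Two uniform stone groups (1), one uniform air group (0), one mixed.
    assert_eq!(best_default(&[Some(1), Some(0), Some(1), None]), Some(1));
}
```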
The defragmentation is not fully optimal and can probably be improved
from a performance perspective, but given how hard a bottleneck RAM is,
this seems worthwhile. It also doesn't suffer from the issues the
previous solution did.
A 3x to 5x improvement, depending on terrain. We can do a lot better,
but this is a good start.
Also, added a chunk group count to the metrics. This correlates with
memory usage by chunk voxel data much more directly than chonk or chunk
counts do, so it should provide extra useful information
(especially for our average overhead per chonk / chunk).
In the process, also try to address a few edge cases related to block
detection, such as adding back previously solid sprites and removing
filters that may be vestiges of earlier logic.
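For reference, the chunk group gauge mentioned above could be registered with the prometheus crate roughly like this (the metric name and registry handling are assumptions, not the actual code):

```rust
use prometheus::{IntGauge, Registry};

/// Hypothetical wiring: the metric name and help text are made up here.
fn register_chunk_groups_metric(registry: &Registry) -> prometheus::Result<IntGauge> {
    let gauge = IntGauge::new(
        "veloren_chunk_groups_count",
        "Number of allocated chunk groups backing voxel data",
    )?;
    registry.register(Box::new(gauge.clone()))?;
    Ok(gauge)
}
```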
- replace serde_derive with serde's derive feature,
  incl. the source code modifications needed to compile
- reduce futures-timer to "2.0" to match async_std
- update notify
- removed mio, bincode, and lz4 compression from common, as networking is
  now in its own crate (aside: there is a better lz4 compression crate,
  newer than 2017)
- update prometheus to 0.9
- can't update uvth yet due to issues
- hashbrown to 0.7.2 so that only a single version is needed
- libsqlite3 update
- image didn't change as there is a problem with `image 0.23`
- switch from the old directories crate to the newer directories-next
- no num upgrade, as we still depend on num 0.2 anyway
- rodio and cpal upgrade
- const-tweaker update
- dispatch (untested) update
- git2 update
- iterations update
Previously, voxels in sparsely populated chunks were stored in a `HashMap`.
However, in practice a block access is often followed by accesses to
nearby voxels. This access pattern makes cache friendliness possible,
but not with a `HashMap`.
The previous merge request [!469](https://gitlab.com/veloren/veloren/merge_requests/469)
proposed to order voxels by their morton order (see https://en.wikipedia.org/wiki/Z-order_curve ).
This provided excellent cache friendliness. However, benchmarks showed that
the required indexing calculations are quite expensive. Specific results
on my _Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz_ were:
| Benchmark | Before this commit @ d322384bec | Morton Order @ ec8a7caf42 | This commit |
| ---------------------------------------- | --------------------------------- | --------------------------- | -------------------- |
| `full read` (81920 voxels) | 17.7ns per voxel | 8.9ns per voxel | **3.6ns** per voxel |
| `constrained read` (4913 voxels) | 67.0ns per voxel | 40.1ns per voxel | **14.1ns** per voxel |
| `local read` (125 voxels) | 17.5ns per voxel | 14.7ns per voxel | **3.8ns** per voxel |
| `X-direction read` (17 voxels) | 17.8ns per voxel | 25.9ns per voxel | **4.2ns** per voxel |
| `Y-direction read` (17 voxels) | 18.4ns per voxel | 33.3ns per voxel | **4.5ns** per voxel |
| `Z-direction read` (17 voxels) | 18.6ns per voxel | 38.2ns per voxel | **5.4ns** per voxel |
| `long Z-direction read` (65 voxels) | 18.0ns per voxel | 37.7ns per voxel | **5.1ns** per voxel |
| `full write (dense)` (81920 voxels) | 17.9ns per voxel | **10.3ns** per voxel | 12.4ns per voxel |
This commit (instead of utilizing morton order) replaces the `HashMap` in
the `Chunk` implementation with the following data structure:
The volume is spatially subdivided into groups of `4*4*4` blocks. Since a
`Chunk` is of total size `32*32*16`, this implies that there are `8*8*4`
groups. (These numbers are generic in the actual code such that there are
always `256` groups. I.e. the group size is chosen depending on the desired
total size of the `Chunk`.)
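Concretely, the sizing works out as in this small worked example (illustrative constants, not the actual generic code):

```rust
// A Chunk holds 32*32*16 = 16384 voxels; with a fixed 256 groups,
// each group holds 16384 / 256 = 64 voxels, i.e. a 4*4*4 cube,
// and the groups tile the chunk in an 8*8*4 grid.
const CHUNK_VOLUME: usize = 32 * 32 * 16;
const GROUP_COUNT: usize = 256;
const GROUP_VOLUME: usize = CHUNK_VOLUME / GROUP_COUNT; // 64
const GROUP_SIDE: usize = 4; // because 4*4*4 == 64

fn main() {
    assert_eq!(GROUP_VOLUME, GROUP_SIDE * GROUP_SIDE * GROUP_SIDE);
    assert_eq!((32 / 4) * (32 / 4) * (16 / 4), GROUP_COUNT);
}
```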
There's a single vector `self.vox` which consecutively stores these groups.
Each group might or might not be contained in `self.vox`. A group that is
not contained represents that the full group consists only of `self.default`
voxels. This saves a lot of memory because oftentimes a `Chunk` consists of
either a lot of air or a lot of stone.
To track whether a group is contained in `self.vox`, there's an index buffer
`self.indices : [u8; 256]`. It contains for each group
* (a) the order in which it has been inserted into `self.vox`, if the group
is contained in `self.vox` or
* (b) 255, otherwise. That case represents that the whole group consists
only of `self.default` voxels.
(Note that 255 is a valid insertion order for case (a) only when
`self.vox` is full; in that case no group falls under case (b), so there
is no ambiguity.)
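Putting the pieces together, a read could look like the following sketch (field names follow the description above; the plain `u8` voxels and the index math for the `4*4*4` layout are simplifications):

```rust
const GROUP_VOLUME: usize = 64; // 4*4*4 voxels per group

/// Simplified stand-in for the real `Chunk` (plain `u8` voxels here).
struct Chunk {
    indices: [u8; 256], // per group: insertion order into `vox`, or 255
    vox: Vec<u8>,       // present groups, stored consecutively
    default: u8,        // voxel implied for groups absent from `vox`
}

impl Chunk {
    fn get(&self, x: usize, y: usize, z: usize) -> u8 {
        // Which 4*4*4 group the position falls into (8*8*4 grid of groups).
        let grp_idx = (x / 4) + (y / 4) * 8 + (z / 4) * 64;
        // Offset of the voxel within its group.
        let rel_idx = (x % 4) + (y % 4) * 4 + (z % 4) * 16;
        let order = self.indices[grp_idx] as usize;
        // 255 means "absent" unless all 256 groups are stored, in which
        // case 255 is a genuine insertion order (the ambiguity noted above).
        if order == 255 && self.vox.len() < 256 * GROUP_VOLUME {
            self.default
        } else {
            self.vox[order * GROUP_VOLUME + rel_idx]
        }
    }
}

fn main() {
    // An entirely-default chunk stores no voxel data at all.
    let empty = Chunk { indices: [255; 256], vox: Vec::new(), default: 0 };
    assert_eq!(empty.get(31, 31, 15), 0);
}
```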
Rationale:
The index buffer should be small because:
* Small size increases the probability that it will always be in cache.
* The index buffer is allocated for every `Chunk` and an almost empty `Chunk`
shall not consume too much memory.
The choice of 256 groups is particularly nice because it means that the index
buffer can consist of `u8`s. This keeps the space requirement for the index
buffer as low as 4 cache lines.
See the doc comments in `common/src/vol.rs` for more information on
the API itself.
The changes include:
* Consistent `Err`/`Error` naming.
* Types are named `...Error`.
* `enum` variants are named `...Err`.
* Rename `VolMap{2d, 3d}` -> `VolGrid{2d, 3d}`. This is in preparation
  for an upcoming change where a “map” in the game-related sense will
  be added.
* Add volume iterators (a toy sketch follows at the end of this list).
  There are two types of them:
* _Position_ iterators obtained from the trait `IntoPosIterator`
using the method
`fn pos_iter(self, lower_bound: Vec3<i32>, upper_bound: Vec3<i32>) -> ...`
which returns an iterator over `Vec3<i32>`.
* _Volume_ iterators obtained from the trait `IntoVolIterator`
using the method
`fn vol_iter(self, lower_bound: Vec3<i32>, upper_bound: Vec3<i32>) -> ...`
which returns an iterator over `(Vec3<i32>, &Self::Vox)`.
Those traits will usually be implemented by references to volume
types (i.e. `impl IntoVolIterator<'a> for &'a T` where `T` is some
type which usually implements several volume traits, such as `Chunk`).
* _Position_ iterators iterate over the positions valid for that
volume.
* _Volume_ iterators do the same but return not only the position
but also the voxel at that position, in each iteration.
* Introduce trait `RectSizedVol` for the use case we have with
  `Chonk`: a `Chonk` is sized only in the x and y directions.
* Introduce traits `RasterableVol`, `RectRasterableVol`
* `RasterableVol` represents a volume that is compile-time sized and has
its lower bound at `(0, 0, 0)`. The name `RasterableVol` was chosen
because such a volume can be used with `VolGrid3d`.
* `RectRasterableVol` represents a volume that is compile-time sized at
least in x and y direction and has its lower bound at `(0, 0, z)`.
There's no requirement on the lower bound or size in the z direction.
The name `RectRasterableVol` was chosen because such a volume can be
used with `VolGrid2d`.
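To make the shape of the iterator APIs above concrete, here is a dependency-free toy sketch (a `4*4*4` array instead of a real volume, plain tuples instead of `Vec3<i32>`, and inherent methods instead of the actual traits):

```rust
type Pos = (i32, i32, i32);

/// Toy volume: a 4*4*4 box of `u8` voxels.
struct MiniVol {
    vox: [u8; 64],
}

impl MiniVol {
    fn idx(p: Pos) -> usize {
        (p.0 + p.1 * 4 + p.2 * 16) as usize
    }

    /// Like `pos_iter`: every position in `lower..upper`, x varying fastest.
    fn pos_iter(lower: Pos, upper: Pos) -> impl Iterator<Item = Pos> {
        (lower.2..upper.2).flat_map(move |z| {
            (lower.1..upper.1)
                .flat_map(move |y| (lower.0..upper.0).map(move |x| (x, y, z)))
        })
    }

    /// Like `vol_iter`: `(position, &voxel)` pairs over the same bounds.
    fn vol_iter(&self, lower: Pos, upper: Pos) -> impl Iterator<Item = (Pos, &u8)> {
        Self::pos_iter(lower, upper).map(move |p| (p, &self.vox[Self::idx(p)]))
    }
}

fn main() {
    let mut v = MiniVol { vox: [0; 64] };
    v.vox[MiniVol::idx((1, 0, 0))] = 7;
    // Count non-default voxels in a sub-box, as one might with `vol_iter`.
    let n = v
        .vol_iter((0, 0, 0), (2, 2, 2))
        .filter(|&(_, &b)| b != 0)
        .count();
    assert_eq!(n, 1);
}
```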