* Properly set the view_distance field in `Client` when sending it to the
server in `request_character`/`request_spectate`.
* Removed an invalid check I had included in `Client::set_view_distance`.
* `ViewDistances::clamp` now clamps the minimum to 1 for both types of view
distance (sketched below).
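A minimal sketch of the clamping described above (the field names and exact signature are assumptions, not the actual `common` code):

```rust
/// Sketch of separate terrain and entity view distances (in chunks).
#[derive(Clone, Copy)]
pub struct ViewDistances {
    pub terrain: u32,
    pub entity: u32,
}

impl ViewDistances {
    /// Clamp both distances to a minimum of 1, and keep the entity view
    /// distance from exceeding the terrain view distance.
    pub fn clamp(self) -> Self {
        let terrain = self.terrain.max(1);
        Self {
            terrain,
            entity: self.entity.clamp(1, terrain),
        }
    }
}
```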
Add a separate entity view distance setting that limits the area
entities are synced from and displayed in.
NOTE: Entity syncing works at the granularity of regions, which are
multi-chunk squares, but the display of entities in voxygen is limited to
a circle with the radius of the supplied distance.
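On the display side this is conceptually just a radius test; a hypothetical sketch (the function, names, and units are illustrative assumptions, not voxygen's actual code):

```rust
/// Hypothetical check: is an entity within the (circular) entity view
/// distance of the player? Distances here are in blocks, so a view
/// distance in chunks would need converting first.
fn in_entity_view_distance(entity_pos: [f32; 2], player_pos: [f32; 2], radius: f32) -> bool {
    let dx = entity_pos[0] - player_pos[0];
    let dy = entity_pos[1] - player_pos[1];
    // Compare squared distances to avoid a square root.
    dx * dx + dy * dy <= radius * radius
}
```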
Additional details and changes:
* Added a `ViewDistances` struct in `common` that contains separate
terrain and entity view distances (the entity view distance is clamped
to the terrain view distance wherever this type is used).
* View distance requests from the client to the server now use this
type.
* When requesting the character or spectate state, the client now passes
its desired view distances. This is exposed as a new parameter on
`Client::request_character`/`Client::request_spectate` (see the sketch
after this list), so the client no longer needs to send a view distance
request after entering these states. This also lets us avoid initializing
`Presence` with a default view distance value on the server.
* Removed `DerefFlaggedStorage` from `Presence` and `RegionSubscription` since the
change tracking isn't used for these components.
* Added sliders to the voxygen graphics and network tabs for this new
setting. Both the selected value and the clamped value are shown next to
the slider.
* Renamed the existing "Entities View Distance" slider (which AFAIK
controls the distance at which different LOD levels apply to figures) to
"Entities Detail Distance" so the former name can be used for this new
slider.
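As referenced above, a sketch of entering the character state under the new API (the exact signature of `Client::request_character` and the call-site names are assumptions based on this description):

```rust
// Hypothetical call site: pass the desired view distances when entering
// the character state, instead of sending a separate request afterwards.
let view_distances = ViewDistances { terrain: 12, entity: 8 };
client.request_character(character_id, view_distances);
// The server can now initialize `Presence` from these values directly,
// with no default view distance needed.
```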
Trade canceling related tweaks, make kill_npcs not leave clutter (and actually remove entities in the first place), and misc tweaks
See merge request veloren/veloren!3555
* Add mass
* Add density
* Add collider.
This one is strange as always. I don't know what's wrong, but the debug
hitbox only changes after death; the real one seems to work.
Unfortunately, rayon has a bug: if you call `ThreadPool::spawn` from inside a parallel iterator that is itself running inside `ThreadPool::install`, the parallel iterator will BLOCK until the spawned job finishes, which causes many, many lag spikes.
I assume this might explain the pictures in the gantt chart where a system took unusually long or had a long, unexplained pause.
I also raised the number of threads by 1, as this extra rayon thread will probably be useless in all cases and have no real work to do.
EDIT: it turns out the tests are sporadically failing and this solution doesn't work.
Note that we spawn 2 jobs in the first loop; the loop seems to NOT complete until those jobs have been executed.
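A minimal, self-contained repro sketch of the pattern described above (the sleeps stand in for real work; this is illustrative, not the actual Veloren system code):

```rust
use std::{thread, time::Duration};

use rayon::iter::{IntoParallelIterator, ParallelIterator};

fn main() {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(2)
        .build()
        .unwrap();

    pool.install(|| {
        // Spawn 2 fire-and-forget jobs from inside a parallel iterator
        // running on the same pool.
        (0..2).into_par_iter().for_each(|i| {
            rayon::spawn(move || {
                thread::sleep(Duration::from_millis(100));
                println!("spawned job {i} finished");
            });
        });
        // Observed behavior: this is not reached until the spawned jobs
        // above have run, even though `spawn` should not be waited on.
        println!("parallel iterator completed");
    });
}
```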
The next step is to do everything with plain rayon coding.