Our app changes redux state very, very often. As our undo/redo history grows, the calls to persist state start to take around 100 ms, due to the deep cloning of the history. This causes very noticeable performance lag.
The deep cloning is required because we need to blacklist certain parts of the redux state from being persisted (e.g. the app's connection status).
Debouncing the whole persistence process is a simple and effective solution. Unfortunately, `redux-persist` dropped `debounce` between v4 and v5, replacing it with `throttle`. `throttle` does not delay the expensive work until there has been X ms of inactivity; it only spaces executions out to at most once every X ms, so the work still runs regularly while state keeps changing. This does not fix our performance issue.
The patch is very simple. It adds a `debounce` argument (a number of milliseconds) and debounces `redux-persist`'s `update()` method (provided by `createPersistoid`) by that many milliseconds.
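To illustrate the idea, here is a minimal sketch of the mechanism (not the patch itself; the 300 ms value, the `./store` import, and the manual subscription wiring are assumptions made for the example):

```typescript
import { createPersistoid } from 'redux-persist';
import storage from 'redux-persist/lib/storage';
import { store } from './store'; // assumption: the app's configured redux store

// Trailing-edge debounce: `fn` runs only after `waitMs` of inactivity,
// so a burst of rapid calls collapses into a single execution.
function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

const persistoid = createPersistoid({ key: 'root', storage });

type RootState = ReturnType<typeof store.getState>;

// The expensive clone/serialize inside update() now runs once per quiet
// period instead of on every single state change.
const debouncedUpdate = debounce((state: RootState) => persistoid.update(state), 300);

store.subscribe(() => debouncedUpdate(store.getState()));
```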
Before this, I also tried writing a custom storage adapter for `redux-persist` that debounced the calls to `localStorage.setItem()`. While this worked and was far less invasive, it didn't actually address the issue: `setItem()` turns out to be a fast part of the process, and the expensive deep clone still ran on every state change.
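For reference, that adapter was essentially the following (a sketch assuming the standard promise-based storage interface; the 300 ms value is illustrative). It only delays the final `localStorage` write, while the cloning and serialization upstream still run on every change:

```typescript
import storage from 'redux-persist/lib/storage'; // the default localStorage engine

const DEBOUNCE_MS = 300;
let timer: ReturnType<typeof setTimeout> | undefined;

// Debounce only setItem(); getItem()/removeItem() pass straight through.
// Note: superseded writes never resolve their promise in this simplified sketch.
export const debouncedStorage = {
  getItem: (key: string) => storage.getItem(key),
  removeItem: (key: string) => storage.removeItem(key),
  setItem: (key: string, value: string) =>
    new Promise<void>((resolve) => {
      if (timer !== undefined) clearTimeout(timer);
      timer = setTimeout(() => {
        storage.setItem(key, value).then(() => resolve());
      }, DEBOUNCE_MS);
    }),
};
```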
We use `redux-deep-persist` to simplify the `redux-persist` configuration, which can get complicated when you need to blacklist or whitelist deeply nested state. There is also a patch here for that library because it uses the same types as `redux-persist`.
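Roughly, the config looks like this (a sketch assuming `redux-deep-persist`'s `getPersistConfig` helper; the `system.isConnected` path and file names are placeholders, not our actual state shape):

```typescript
import { persistReducer } from 'redux-persist';
import storage from 'redux-persist/lib/storage';
import { getPersistConfig } from 'redux-deep-persist';
import { rootReducer } from './rootReducer'; // assumption: the combined app reducer

// Blacklist a deeply nested key with a dot path instead of hand-writing
// nested persist configs for every slice.
const persistConfig = getPersistConfig({
  key: 'root',
  storage,
  blacklist: ['system.isConnected'], // e.g. the app's connection status
  rootReducer,
});

export const persistedReducer = persistReducer(persistConfig, rootReducer);
```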
Unfortunately, the last release of `redux-persist` depended on a package, `flat-stream`, which was malicious and has been removed from npm. The latest commits to `redux-persist` (about a year ago) do not build, so we cannot use the master branch, and the changes between the last release and the last commit are all breaking.
Patching this last release (about 3 years old at this point) directly is far simpler than attempting to fix the upstream library's master branch or figuring out an alternative to the malicious and now non-existent dependency.
The new mask is only properly visible at max opacity, but at max opacity the brush preview becomes fully opaque and blocks the view. The mask brush preview now remains at 0.5 opacity no matter what the Brush opacity is.
This is the same as PR #1537 except that it removes a redundant
`scripts` argument from `setup.py` that appeared at some point.
I also had to unpin the github dependencies in `requirements.in` in
order to get conda CI tests to pass. However, dependencies are still
pinned in `requirements-base.txt` and the environment files, and install
itself is working. So I think we are good.
1. removed redundant `data_files` argument from setup.py
2. upped requirement to Python >= 3.9, due to a feature
   used in `argparse` that is only available in 3.9 or higher.
* add test-invoke-pip.yml
* update requirements-base.txt to fix tests
* install requirements-base.txt separately
since it requires torch to already be installed
also restore the original requirements-base.txt after a successful test in my fork
* restore original requirements
add `basicsr>=1.4.2` to requirements-base.txt
remove second installation step
* re-add previously overlooked req in lin-cuda
* fix typo in setup.py - `scripts/preload_models.py`
* use GFBGAN from branch `basicsr-1.4.2`
* remove `basicsr>=1.4.2` from base reqs
* add INVOKEAI_ROOT to env
* disable upgrade of `pip`, `setuptools` and `wheel`
* try to use a venv which should not contain `wheel`
* add relative path to pip command
* use `configure_invokeai.py --no-interactive --yes`
* set grpcio to `<1.51.0`
* revert changes to use venv
* remove `--prefer-binary`
* disable step to create models.yaml
since this will not be used anymore with the new `configure_invokeai.py`
* use `pip install --no-binary=":all:"`
* another try to use venv
* try uninstalling wheel before installing reqs
* don't use requirements.txt as the filename
* update cache-dependency-path
* add facexlib to requirements-base.txt
* first install requirements-base.txt
* first install `-e .`, then install requirements
I know that this is obviously the wrong order, but I still have a feeling it might work
* add facexlib to requirements.in
* remove `-e .` from reqs and install after reqs
* unpin torch and torchvision in requirements.in
* fix model dl path
* fix curl output path
* create directory before downloading model
* set INVOKEAI_ROOT_PATH
https://docs.github.com/en/actions/learn-github-actions/environment-variables#naming-conventions-for-environment-variables
* INVOKEAI_ROOT ${{ env.GITHUB_WORKSPACE }}/invokeai
* fix matrix stable-diffusion-model-dl-path
* fix INVOKEAI_ROOT
* fix INVOKEAI_ROOT
* add --root and --outdir to run-tests step
* create models.yaml from example
* fix scripts variable in setup.py
by removing unused scripts
* fix archive-results path
* fix workflow to reflect latest code changes
* fix copy paste error
* fix job name
* fix matrix.stable-diffusion-model
* restructure matrix
* fix `activate conda env` step
* update the environment yamls
use same 4 git packages as for pip
* rename job in test-invoke-conda
* add tqdm to environment-lin-amd.yml
* fix python commands in test-invoke-conda.yml
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
This corrects the behavior of --no-interactive, which was in fact
asking for interaction!
New behavior:
If you pass --no-interactive it will behave exactly as it did before
and completely skip the downloading of SD models.
If you pass --yes it will do almost the same, but it will download the
recommended models. The combination of the two arguments behaves the same
as --no-interactive alone.