I've written up the install procedure for xFormers on Linux systems.
I need help with the Windows install; I don't know what the build
dependencies (compiler, etc) are. This section of the docs is currently
empty.
Please see `docs/installation/070_INSTALL_XFORMERS.md`
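As a quick, hedged sanity check (not part of the docs themselves), this confirms that an xformers build is importable and that CUDA is visible:

```python
# Verify that xformers and its ops module import cleanly.
import torch
import xformers
import xformers.ops

print("xformers", xformers.__version__, "| CUDA available:", torch.cuda.is_available())
```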
Other changes which were required:
- move configure_invokeai.py into ldm.invoke
- update files which imported configure_invokeai to use the new location (see the import sketch after this list):
  - ldm/invoke/CLI.py
  - scripts/load_models.py
  - scripts/preload_models.py
- update test-invoke-pip.yml:
  - remove PR type "converted_to_draft"
  - remove reference to dev/diffusers
  - remove no-longer-needed requirements from the matrix
  - add pytorch to the matrix
  - install via `pip3 install --use-pep517 .`
  - use the created executables
  - this should also fix configure_invokeai not being executed on Windows
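For reference, a minimal sketch of what the updated import looks like in the files listed above; the `main()` entry point is an assumption, and the real call sites may differ:

```python
# Hypothetical sketch of the new import path after moving
# configure_invokeai.py into the ldm.invoke package.
# Old (no longer valid):
#   import configure_invokeai
# New:
from ldm.invoke import configure_invokeai

if __name__ == "__main__":
    # Assumes the module exposes a main() entry point.
    configure_invokeai.main()
```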
To install, use `pip install --use-pep517 -e .` (the `-e` is optional).
- Added new documentation for textual inversion training process
- Move `main.py` into the deprecated scripts folder
- Fix bug in `textual_inversion.py` that prevented it from loading
  the globals module correctly.
- Sort models alphabetically in console front end
- Only show diffusers models in console front end
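A rough sketch of the listing behavior described in the last two items above; this is not the actual front-end code, and the `"format"` key on the model-info dict is an assumption made for illustration:

```python
# Keep only diffusers-format models and sort them case-insensitively by name.
def list_models_for_console(models: dict) -> list:
    diffusers_only = [
        name for name, info in models.items()
        if info.get("format") == "diffusers"  # "format" key assumed for illustration
    ]
    return sorted(diffusers_only, key=str.lower)
```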
Starting `invoke.py` with `--no-xformers` will disable
memory-efficient-attention support if xformers is installed.
For symmetry, `--xformers` will enable support, but this is already the
default if xformers is available.
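As an illustration only, a flag pair like this can be wired up with argparse; the actual `invoke.py` argument handling may differ:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--xformers",
    dest="xformers",
    action="store_true",
    default=True,  # memory-efficient attention is on by default when xformers is available
    help="enable memory-efficient attention via xformers (default)",
)
parser.add_argument(
    "--no-xformers",
    dest="xformers",
    action="store_false",
    help="disable memory-efficient attention even if xformers is installed",
)
args = parser.parse_args()
# Downstream, args.xformers would gate a call such as
# pipeline.enable_xformers_memory_efficient_attention() on the diffusers pipeline.
```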
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:
1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not running
them on the CPU, just caching them in CPU RAM).
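One possible way to silence these, assuming the messages come through the standard diffusers/transformers loggers; the commit's actual filtering may be implemented differently:

```python
# Raise the logger verbosity thresholds so only errors are shown.
from diffusers.utils import logging as diffusers_logging
from transformers import logging as transformers_logging

diffusers_logging.set_verbosity_error()     # e.g. the disabled-NSFW-checker notice
transformers_logging.set_verbosity_error()  # e.g. fp16-weights-cached-on-CPU warnings
```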
- During trigger token processing, emit better status messages indicating
which triggers were found.
- Suppress the message "<token> is not known to HuggingFace library" when the
  token is in fact a local embedding.
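A hedged sketch of the intended check; the embeddings directory layout and the helper name are hypothetical:

```python
import os

def warn_if_trigger_unknown(token: str, embeddings_dir: str, known_hf_concepts: set) -> None:
    # Only complain if the trigger is neither a HuggingFace concept
    # nor a local embedding file on disk. (File naming is an assumption.)
    local_embed = os.path.join(embeddings_dir, f"{token}.bin")
    if token in known_hf_concepts or os.path.exists(local_embed):
        return
    print(f">> {token} is not known to the HuggingFace library")
```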
- When a ckpt or safetensors file uses an external autoencoder and we
  don't know which diffusers model corresponds to it (if any!), we fall
  back to using stabilityai/sd-vae-ft-mse.
- This commit improves error reporting so that the user knows what is happening.
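A simplified sketch of the fallback, assuming the VAE is loaded through diffusers' `AutoencoderKL`; the lookup table and the surrounding conversion code are stand-ins:

```python
from typing import Optional

from diffusers import AutoencoderKL

def load_vae(vae_to_repo_id: dict, vae_name: Optional[str]) -> AutoencoderKL:
    repo_id = vae_to_repo_id.get(vae_name) if vae_name else None
    if repo_id is None:
        # No known diffusers equivalent for this checkpoint's external VAE:
        # fall back to the stabilityai/sd-vae-ft-mse autoencoder.
        print(">> Unknown external VAE; falling back to stabilityai/sd-vae-ft-mse")
        repo_id = "stabilityai/sd-vae-ft-mse"
    return AutoencoderKL.from_pretrained(repo_id)
```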