This commit cleans up the code that did bulk imports of legacy model
files. The code has been refactored, and the user is now offered the
option of importing all the model files found in the directory, or
selecting which ones to import.
- This PR introduces a CLI prompt for the proper configuration file to
  use when converting a ckpt file, in order to support both inpainting
  and v2 model files.
- When the user tries to directly !import a v2 model, the CLI prints a clear
  warning that v2 ckpts are not directly supported.
- Rather than bypassing any path with "diffusers" in it, I'm specifically bypassing `model.safetensors` and `diffusion_pytorch_model.safetensors`, both of which should be diffusers files in most cases.
- If the CLI was asked to convert the currently loaded model, the model would crash
  on the first rendering. The CLI will now refuse to convert a model that is loaded
  in memory (probably a good idea in any case).
- The CLI will offer `v1-inpainting-inference.yaml` as the configuration
  file when importing a .ckpt or .safetensors file that has "inpainting"
  in the name; otherwise it offers `v1-inference.yaml` as the default
  (a minimal sketch of this selection logic follows).
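A minimal sketch of this selection, with a hypothetical helper name (not the actual CLI code):

```python
from pathlib import Path

def choose_default_config(weights_path: str) -> str:
    """Pick a default legacy-checkpoint config based on the filename."""
    name = Path(weights_path).name.lower()
    # Hypothetical heuristic: inpainting checkpoints usually carry
    # "inpainting" in their filename.
    if "inpainting" in name:
        return "v1-inpainting-inference.yaml"
    return "v1-inference.yaml"
```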
- The following were supposed to be equivalent, but the latter crashes:
```
invoke> banana sushi
invoke> --prompt="banana sushi"
```
This PR fixes the problem.
- Fixes #2548
Previously conversions of .ckpt and .safetensors files to diffusers
models were failing with channel mismatch errors. This is corrected
with this PR.
- The model_manager convert_and_import() method now accepts the path
to the checkpoint file's configuration file, using the parameter
`original_config_file`. For inpainting files this should be set to
the full path to `v1-inpainting-inference.yaml`.
- If no configuration file is provided in the call, then the presence
  of an inpainting file will be inferred at the
  `ldm.ckpt_to_diffuser.convert_ckpt_to_diffuser()` level by looking
  for the string "inpaint" in the path. AUTO1111 does something
  similar to this, but it is brittle and not recommended (see the
  sketch below).
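As a hedged illustration of the two paths above (`manager` stands for an already-initialized model_manager instance; everything beyond the documented `original_config_file` parameter is an assumption, and the file paths are illustrative):

```python
# Explicitly supplying the inpainting configuration file:
manager.convert_and_import(
    "models/ldm/sd-v1-5-inpainting.ckpt",
    original_config_file="configs/stable-diffusion/v1-inpainting-inference.yaml",
)

# With no configuration file given, convert_ckpt_to_diffuser() instead
# guesses "inpainting" from the string "inpaint" in the checkpoint path:
manager.convert_and_import("models/ldm/sd-v1-5-inpainting.ckpt")
```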
- This PR also changes the model manager's model_names() method to return
  the model names in case-folded sort order.
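For reference, case-folded ordering in Python is just a `str.casefold` sort key (illustrative, not the actual model_manager code):

```python
names = ["Zebra-diffusion", "analog-style", "Banana-sushi"]
print(sorted(names, key=str.casefold))
# -> ['analog-style', 'Banana-sushi', 'Zebra-diffusion']
```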
- The issue was that if insufficient diffusers models were defined in
  models.yaml, the frontend would crash ungracefully.
- Now it emits appropriate error messages telling the user what the problem
  is.
- This fixes an edge-case crash that occurred when the textual inversion frontend
  tried to display the list of models and no default model was defined
  in models.yaml.
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
This allows the --log_tokenization option to be used as a command line argument (or from invokeai.init), making it possible to view tokenization information in the terminal when using the web interface.
- Rename configure_invokeai.py to invokeai_configure.py to be
  consistent with the installed script name
- Remove the warning message about half-precision models not being
  available during the model download process.
- Adjust the estimated file sizes reported by configure
- Guesstimate the disk space needed for the "all" models option
- Fix up the "latest" tag to be named `v2.3-latest`
`torch` wasn't seeing the environment variable. I suspect this is because it was imported before the variable was set, so was running with a different environment.
Many `torch` ops are supported on MPS so this wasn't noticed immediately, but some samplers like k_dpm_2 still use unsupported operations and need this fallback.
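The variable is not named here, but assuming it is PyTorch's `PYTORCH_ENABLE_MPS_FALLBACK`, the fix amounts to setting it before `torch` is first imported:

```python
import os

# Must happen before the first `import torch`; otherwise torch has already
# read its environment and the CPU fallback never activates.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402  (deliberately imported after setting the variable)
```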
* remove non maintained Dockerfile
* adapt Docker related files to latest changes
- also build the frontend when building the image
- skip user response if INVOKE_MODEL_RECONFIGURE is set
- split INVOKE_MODEL_RECONFIGURE to support more than one argument
* rename `docker-build` dir to `docker`
* update build-container.yml
- rename image to invokeai
- add cpu flavor
- add metadata to build summary
- enable caching
- remove build-cloud-img.yml
* fix yarn cache path, link copyjob
Crashes would occur in the invokeai-configure script if no HF token
was found in cache and the user declined to provide one when prompted.
The reason appears to be that on Linux systems getpass_asterisk()
raises an EOFError when no input is provided.
On Windows 10, getpass_asterisk() does not raise the EOFError, but
returns an empty string instead. This patch detects this and raises
the exception so that the control logic is preserved.
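A minimal sketch of the workaround (the import path and the surrounding prompt logic are assumptions, not the script's actual code):

```python
from getpass_asterisk.getpass_asterisk import getpass_asterisk

def prompt_for_hf_token(prompt: str = "HuggingFace token (optional): ") -> str:
    token = getpass_asterisk(prompt)
    # On Windows 10 getpass_asterisk() returns "" instead of raising,
    # so normalize both platforms to the EOFError code path.
    if not token.strip():
        raise EOFError
    return token
```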
- Added modest adaptive behavior; if the screen is wide enough, the three
  checklists of models will be arranged in a horizontal row.
- Added color support
* refactor ckpt_to_diffuser to allow converted pipeline to remain in memory
- This idea was introduced by Damian
- Note that although I attempted to use the updated HuggingFace module
pipelines/stable_diffusion/convert_from_ckpt.py, it was unable to
convert safetensors files for reasons I didn't dig into.
- Default is to extract EMA weights.
* add --ckpt_convert option to load legacy ckpt files as diffusers models
- not quite working - I'm getting artifacts and glitches in the
converted diffuser models
- leave as draft for time being
* do not include safety checker in converted files
* add ability to control which vae is used
API now allows the caller to pass an external VAE model to the
checkpoint conversion process. In this way, if an external VAE is
specified in the checkpoint's config stanza, this VAE will be used
when constructing the diffusers model.
Tested with both regular and inpainting 1.X models.
Not tested with SD 2.X models!
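A hedged sketch of what such a call might look like; the keyword arguments are assumptions based on the description above, not the verified signature:

```python
from diffusers import AutoencoderKL
from ldm.ckpt_to_diffuser import convert_ckpt_to_diffuser

# Load the external VAE named in the checkpoint's config stanza...
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# ...and hand it to the converter so the resulting diffusers model embeds
# it instead of the checkpoint's built-in autoencoder.
convert_ckpt_to_diffuser(
    checkpoint_path="models/ldm/my-model.safetensors",  # illustrative path
    dump_path="models/diffusers/my-model",              # illustrative path
    vae=vae,
)
```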
---------
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
1. Only load triton on Linux machines.
2. Require pip >= 23.0 so that editable installs can run without setup.py.
3. Model files default to SD-1.5, not 2.1.
4. Use the diffusers version of the inpainting model rather than the ckpt.
5. Selected a new set of initial models based on the number of likes at HuggingFace.
- implement the following pattern for finding data files under both
  regular and editable install conditions:
```
import invokeai.foo.bar as bar
path = bar.__path__[0]
```
- this *seems* to work reliably with Python 3.9. Testing on 3.10 needs
to be performed.
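For example, locating the bundled configs directory with this pattern might look like the following (the `invokeai.configs` package name is an assumption on my part):

```python
from pathlib import Path

import invokeai.configs as configs

# __path__[0] resolves correctly for both regular and editable installs.
CONFIGS_DIR = Path(configs.__path__[0])
print(CONFIGS_DIR / "stable-diffusion" / "v1-inference.yaml")
```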
- fixes a spurious "unknown model name" error when trying to edit the
short name of an existing model.
- relaxes naming requirements to include the ':' and '/' characters
in model names
1) Downgrade numpy to avoid a dependency conflict with numba.
2) Move all non-ldm/invoke files into `invokeai`. This includes assets, backend, frontend, and configs.
3) Fix up the way that the backend finds the frontend and the generator finds the NSFW caution.png icon.
* Update --hires_fix
Change `--hires_fix` to calculate initial width and height based on the model's resolution (if available) and with a minimum size.
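A rough sketch of the kind of calculation implied, with the helper name, minimum size, and rounding rule all being guesses rather than the actual implementation:

```python
def initial_hires_dims(width: int, height: int,
                       model_res: int = 512, min_size: int = 384) -> tuple[int, int]:
    """Scale the requested size down to the model's native resolution,
    preserving aspect ratio and never dropping below min_size."""
    scale = model_res / max(width, height)
    init_w = max(min_size, int(width * scale) // 64 * 64)
    init_h = max(min_size, int(height * scale) // 64 * 64)
    return init_w, init_h
```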
- install.sh is now a thin wrapper around the pythonized install script
- install.bat not done yet - to follow
- user messaging is tailored to the current platform (paste shortcuts, file paths, etc)
- emit invoke.sh/invoke.bat scripts to the runtime dir
- improve launch scripts (add help option, etc)
- only emit the platform-specific scripts
if the config directory is missing, initialize it using the standard
process of copying it over, instead of failing to create the config file
this can happen if the user is re-running the config script in a directory which
already has the init file, but no configs dir
This allows reliable distribution of the initial 'configs' directory
with the Python package, and enables the configuration script to be run
from anywhere, as long as the virtual environment is available on sys.path
- This replaces the original clipseg library with the transformers
version from HuggingFace.
- This should make it possible to register InvokeAI at PyPi and do
a fully automated pip-based install.
- Minor regression: it is no longer possible to specify which device
  the clipseg model will be loaded onto; it now resides on the CPU.
  However, performance is more than acceptable.
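Loading clipseg through the transformers library typically looks like the sketch below; the checkpoint id is my assumption and InvokeAI's actual wiring may differ:

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("photo.png").convert("RGB")
inputs = processor(text=["a banana"], images=[image], return_tensors="pt")
with torch.no_grad():
    mask_logits = model(**inputs).logits  # stays on the CPU, as noted above
```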
Other changes which were required:
- move configure_invokeai.py into ldm.invoke
- update files which imported configure_invokeai to use new location:
- ldm/invoke/CLI.py
- scripts/load_models.py
- scripts/preload_models.py
- update test-invoke-pip.yml:
- remove pr type "converted_to_draft"
- remove reference to dev/diffusers
- remove no-longer-needed requirements from the matrix
- add pytorch to matrix
- install via `pip3 install --use-pep517 .`
- use the created executables
- this should also fix configure_invoke not being executed on Windows
To install use `pip install --use-pep517 -e .` where `-e` is optional
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:
1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not running
them on the CPU, just caching them in CPU RAM)
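One way to silence these is via diffusers' own logging helpers; whether the commit uses exactly this mechanism is an assumption:

```python
from diffusers.utils import logging as diffusers_logging

# Drop diffusers chatter (NSFW-checker lecture, fp16-on-CPU notices)
# down to errors only.
diffusers_logging.set_verbosity_error()
```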
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.
--xformers will enable support, but this is already the
default.
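Under the hood this maps onto diffusers' xformers toggle; a rough sketch (the pipeline and checkpoint here are placeholders, not invoke.py's actual flag plumbing):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Default behaviour when xformers is installed; --no-xformers skips this call.
pipe.enable_xformers_memory_efficient_attention()
```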
- During trigger token processing, emit better status messages indicating
which triggers were found.
- Suppress the message "<token> is not known to HuggingFace library" when the
  token is in fact a local embed.
- When a ckpt or safetensors file uses an external autoencoder and we
  don't know which diffusers model corresponds to it (if any!), we now
  fall back to using stabilityai/sd-vae-ft-mse
- This commit improves error reporting so that the user knows what is happening.
- After successfully converting a ckpt file to diffusers, model_manager
  will attempt to create an equivalent 'vae' entry in the resulting
  diffusers stanza.
- This is a bit of a hack, as it relies on a hard-coded dictionary
to map ckpt VAEs to diffusers VAEs. The correct way to do this
would be to convert the VAE to a diffusers model and then point
to that. But since (almost) all models are using vae-ft-mse-840000-ema-pruned,
I did it the easy way first and will work on the better solution later.
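A hypothetical sketch of that hard-coded mapping and the fallback described above (the dictionary contents beyond vae-ft-mse are illustrative):

```python
from diffusers import AutoencoderKL

# Hypothetical ckpt-VAE-name -> diffusers-repo mapping.
VAE_TO_REPO_ID = {
    "vae-ft-mse-840000-ema-pruned": "stabilityai/sd-vae-ft-mse",
}

def diffusers_vae_for(ckpt_vae_name: str) -> AutoencoderKL:
    # Unknown VAEs fall back to the ft-mse autoencoder, as noted above.
    repo_id = VAE_TO_REPO_ID.get(ckpt_vae_name, "stabilityai/sd-vae-ft-mse")
    return AutoencoderKL.from_pretrained(repo_id)
```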
1. !import_model did not allow the user to specify a VAE file. This is now fixed.
2. !del_model did not offer the user the opportunity to delete the underlying
weights file or diffusers directory. This is now fixed.