Commit Graph

1125 Commits

blessedcoolant
d5f524a156
Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-22 06:13:41 +13:00
Jonathan
3ab9d02883
Fixed embiggening crash due to clear_cuda_cache not being passed on and bad cuda stats initialization. (#2756) 2023-02-22 06:12:24 +13:00
Lincoln Stein
27a2e27c3a fix crash when installed models < number columns
1. Fixed display crash when the number of installed models is less than
   the number of desired columns to display them.

2. Added --ckpt_convert option to init file.
2023-02-21 12:09:34 -05:00
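
A minimal sketch of the clamping fix described above, assuming a hypothetical `layout_columns` helper (not the project's actual code): never ask the display grid for more columns than there are installed models.

```python
# Hypothetical sketch: clamp the column count to the number of installed models
# so the selection grid never tries to render more columns than it has entries.
def layout_columns(num_models: int, desired_columns: int = 3) -> int:
    return max(1, min(desired_columns, num_models))

assert layout_columns(num_models=2, desired_columns=3) == 2
assert layout_columns(num_models=0, desired_columns=3) == 1
```
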
Jonathan
da04b11a31
Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 10:52:13 -06:00
Lincoln Stein
3795b40f63 implemented the following fixes:
Enhancements:
1. Directory-based imports will not attempt to import components of diffusers models.
2. Diffusers directory imports are now supported.
3. Files that end with .ckpt that are not Stable Diffusion models (such as VAEs) are
   skipped during import.

Bugs identified in Psychedelicious's review:
1. The invokeai-configure form now tracks the current contents of `invokeai.init` correctly.
2. The autoencoders are no longer treated like installable models, but instead are
   mandatory support models. They will no longer appear in `models.yaml`

Bugs identified in Damian's review:
1. If invokeai-model-install is started before the root directory is initialized, it will
   call invokeai-configure to fix the matter.
2. Fixed a bug that was causing an empty `models.yaml` under certain conditions.
3. Made the import textbox smaller.
4. Hid the "convert to diffusers" options when there is nothing to import.
2023-02-21 11:47:41 -05:00
Lincoln Stein
9436f2e3d1 alphabetize trigger strings 2023-02-21 06:23:34 -05:00
Lincoln Stein
4c2a588e1f
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 22:40:31 -05:00
Lincoln Stein
91f7abb398 replace repeated triggers with <filename> 2023-02-20 22:33:13 -05:00
Damian Stewart
6420b81a5d Merge remote-tracking branch 'upstream/main' into refactor_use_compel 2023-02-20 23:34:38 +01:00
Lincoln Stein
833079140b
Merge branch 'main' into enhance/update-menu 2023-02-20 17:16:20 -05:00
Lincoln Stein
fd27948c36
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 17:15:33 -05:00
Damian Stewart
1dfaaa2a57 fix web ui issues 2023-02-20 22:58:07 +01:00
blessedcoolant
a30c91f398
Merge branch 'main' into bugfix/textual-inversion-training 2023-02-21 09:58:19 +13:00
Lincoln Stein
3fa1771cc9
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 15:20:15 -05:00
Lincoln Stein
1d9845557f reduced verbosity of embed loading messages 2023-02-20 15:18:55 -05:00
Lincoln Stein
55dce6cfdd remove more dead code 2023-02-20 15:08:07 -05:00
Lincoln Stein
58be915446
Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-20 14:48:41 -05:00
Lincoln Stein
47ddc00c6a in textual inversion training, skip over non-image files
- Closes #2715
2023-02-20 14:44:10 -05:00
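
A hedged sketch of the skip described above, assuming a hypothetical `image_files` helper rather than the project's actual loader: filter the training directory by image suffix and quietly ignore everything else.

```python
from pathlib import Path

# Hypothetical filter: only feed files with known image suffixes to the
# textual-inversion data loader, skipping captions, README files, etc.
IMAGE_SUFFIXES = {".png", ".jpg", ".jpeg", ".bmp", ".webp", ".gif"}

def image_files(data_dir: str) -> list[Path]:
    return [
        p for p in Path(data_dir).iterdir()
        if p.is_file() and p.suffix.lower() in IMAGE_SUFFIXES
    ]
```
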
Lincoln Stein
0d22fd59ed restore ability of textual inversion manager to read .pt files
- Fixes longstanding bug in the token vector size code which caused
  .pt files to be assigned the wrong token vector length. These
  were then tossed out during directory scanning.
2023-02-20 14:34:14 -05:00
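
An illustrative sketch of the token-vector-size issue, assuming the common layout of legacy `.pt` textual-inversion files (a `string_to_param` dict mapping a trigger to a tensor of shape `(num_vectors, embed_dim)`); this is not the project's actual loader.

```python
import torch

# Hedged sketch: the token vector length of a ".pt" embedding is the last
# dimension of its weight tensor, not the tensor's total size -- conflating
# the two is the kind of bug that makes valid files look "wrong-sized".
def token_vector_length(pt_path: str) -> int:
    data = torch.load(pt_path, map_location="cpu")
    params = data.get("string_to_param", data)   # fall back to the raw object
    tensor = next(iter(params.values())) if isinstance(params, dict) else params
    return tensor.shape[-1]                      # e.g. 768 for SD 1.x embeddings
```
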
Lincoln Stein
cf53bba99e
Merge branch 'main' into bugfix/save-intermediates 2023-02-20 12:51:53 -05:00
Lincoln Stein
ed4c8f6a8a fix crash in CLI when --save_intermediates called
Fixes #2733
2023-02-20 12:50:32 -05:00
Jonathan
b21bd6f428
Fix crash on calling diffusers' prepare_attention_mask
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in a batch size.
2023-02-20 11:12:47 -06:00
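
A standalone sketch of why the batch size matters here (not diffusers' own code, whose signature has shifted between versions): an attention mask has to be padded to the key length and then repeated once per head for each batch item, so the helper cannot do its job without knowing the batch size.

```python
import torch
import torch.nn.functional as F

# Hedged, self-contained illustration of the mask preparation the fix relies on.
def prepare_attention_mask(mask: torch.Tensor, target_length: int,
                           batch_size: int, heads: int = 8) -> torch.Tensor:
    if mask.shape[-1] < target_length:                      # pad to key length
        mask = F.pad(mask, (0, target_length - mask.shape[-1]), value=0.0)
    if mask.shape[0] < batch_size * heads:                  # one copy per head per item
        mask = mask.repeat_interleave(batch_size * heads // mask.shape[0], dim=0)
    return mask

mask = torch.zeros(2, 64)                                   # batch of 2 prompts
print(prepare_attention_mask(mask, target_length=77, batch_size=2).shape)  # (16, 77)
```
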
Kevin Turner
cb6903dfd0
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 08:03:11 -08:00
blessedcoolant
58e5bf5a58
Merge branch 'main' into bugfix/embedding-compatibility-test 2023-02-21 04:09:18 +13:00
blessedcoolant
cc7733af1c
Merge branch 'main' into enhance/update-menu 2023-02-21 03:54:40 +13:00
Lincoln Stein
e852ad0a51 fix bug that prevented converted files from being written into `models.yaml` 2023-02-20 08:48:54 -05:00
Lincoln Stein
702da71515 swap y/n values for broken model reconfiguration prompt 2023-02-20 08:34:46 -05:00
Lincoln Stein
cfd897874b
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 07:42:35 -05:00
Lincoln Stein
1249147c57
Merge branch 'main' into enhance/update-menu 2023-02-20 07:38:56 -05:00
Lincoln Stein
eec5c3bbb1
Merge branch 'main' into main 2023-02-20 07:38:08 -05:00
Jonathan
ca8d9fb885
Add symmetry to generation (#2675)
Added symmetry to Invoke based on discussions with @damian0815. This can currently only be activated via the CLI with the `--h_symmetry_time_pct` and `--v_symmetry_time_pct` options. Those take values from 0.0-1.0, exclusive, indicating the percentage through generation at which symmetry is applied as a one-time operation. To have symmetry in either axis applied after the first step, use a very low value like 0.001.
2023-02-20 07:33:19 -05:00
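
A minimal sketch of the one-shot symmetry operation, assuming latents shaped `(batch, channels, height, width)` with an even width; the function names here are illustrative, not the project's. At the single step closest to `h_symmetry_time_pct * total_steps` the left half of the latent is mirrored onto the right half, and vertical symmetry mirrors the top half downward instead.

```python
import torch

# Hedged sketch: mirror the left half of the latent onto the right half once,
# at the step selected by the *_symmetry_time_pct option.
def apply_h_symmetry(latents: torch.Tensor) -> torch.Tensor:
    w = latents.shape[-1]                       # assumes an even latent width
    left = latents[..., : w // 2]
    return torch.cat([left, torch.flip(left, dims=[-1])], dim=-1)

def maybe_apply_symmetry(latents, step, total_steps, h_symmetry_time_pct=None):
    if h_symmetry_time_pct and step == int(h_symmetry_time_pct * total_steps):
        latents = apply_h_symmetry(latents)
    return latents
```
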
Lincoln Stein
7d77fb9691 fixed --default_only behavior 2023-02-20 01:29:39 -05:00
Lincoln Stein
a4c0dfb33c fix broken --ckpt_convert option
- not sure why, but at some point --ckpt_convert (which converts legacy checkpoints
  into diffusers in memory) stopped working due to float16/float32 issues.

- this commit repairs the problem

- also removed some debugging messages I found in passing
2023-02-20 01:12:02 -05:00
Kevin Turner
2dded68267 add --sequential_guidance option for low-RAM tradeoff 2023-02-19 21:21:14 -08:00
Lincoln Stein
172ce3dc25 correctly detect when an embedding is incompatible with the current model
- Fixed the test for token length; tested on several .pt and .bin files
- Also added a __main__ entrypoint for CLI.py, to make pdb debugging a bit
  more convenient.
2023-02-19 22:30:57 -05:00
Kevin Turner
6c8d4b091e dev(InvokeAIDiffuserComponent): mollify type checker's concern about the optional argument 2023-02-19 16:58:54 -08:00
Lincoln Stein
7beebc3659 resolved conflicts; ran black and isort 2023-02-19 19:48:01 -05:00
Lincoln Stein
5461318eda clean up diagnostic messages 2023-02-19 19:38:29 -05:00
Kevin Turner
d0abe13b60 performance(InvokeAIDiffuserComponent): add low-memory path for calculating conditioned and unconditioned predictions sequentially
Proof of concept. Still needs to be wired up to options or heuristics.
2023-02-19 16:04:54 -08:00
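
A conceptual sketch of the low-memory path (not `InvokeAIDiffuserComponent` itself), assuming `unet` is a plain callable that returns a noise-prediction tensor: classifier-free guidance needs both an unconditioned and a conditioned prediction, and running them sequentially roughly halves peak activation memory at the cost of two forward passes, which is the trade-off `--sequential_guidance` exposes.

```python
import torch

# Hedged sketch of batched vs. sequential classifier-free guidance.
def guided_noise(unet, x, t, uncond, cond, scale, sequential=False):
    if sequential:
        # two smaller forward passes: lower peak memory, slower
        noise_uncond = unet(x, t, encoder_hidden_states=uncond)
        noise_cond = unet(x, t, encoder_hidden_states=cond)
    else:
        # one batched forward pass: faster, higher peak memory
        both = unet(torch.cat([x, x]), t,
                    encoder_hidden_states=torch.cat([uncond, cond]))
        noise_uncond, noise_cond = both.chunk(2)
    return noise_uncond + scale * (noise_cond - noise_uncond)
```
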
Kevin Turner
aca9d74489 refactor(InvokeAIDiffuserComponent): rename internal methods
Prefix with `_` as is tradition.
2023-02-19 15:33:16 -08:00
Lincoln Stein
ca10d0652f show title of add models screen 2023-02-19 16:55:09 -05:00
Lincoln Stein
e1a85d8184 fix incorrect passing of precision to model installer 2023-02-19 16:24:31 -05:00
Lincoln Stein
9d8236c59d tested and working on Ubuntu
- You can now achieve several effects:

   `invokeai-configure`
   This will use a console-based UI to initialize invokeai.init,
   download support models, and choose and download SD models

   `invokeai-configure --yes`
   Without activating the GUI, populate invokeai.init with default values,
   download support models and download the "recommended" SD models

   `invokeai-configure --default_only`
   As above, but only download the default SD model (currently SD-1.5)

   `invokeai-model-install`
   Select and install models. This can be used to download arbitrary
   models from the Internet, install HuggingFace models using their repo_id,
   or watch a directory for models to load at startup time

   `invokeai-model-install --yes`
   Import the recommended SD models without a GUI

   `invokeai-model-install --default_only`
   As above, but only import the default model
2023-02-19 16:08:58 -05:00
Damian Stewart
ded3f13a33 move all prompting stuff to use compel 2023-02-19 20:42:29 +01:00
Lincoln Stein
e5646d7241 both forms functional; need integration 2023-02-19 13:12:05 -05:00
Jonathan
d3c1b747ee
Fix behavior when encountering a bad embedding (#2721)
When encountering a bad embedding, InvokeAI was asking about reconfiguring models. This is because the embedding load error was never handled - it now is.
2023-02-19 14:04:59 +00:00
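
A hedged sketch of the general pattern described above (illustrative names, not the project's loader): trap the load error for a single embedding file and keep going, instead of letting it bubble up and trigger the unrelated "reconfigure models?" prompt.

```python
import torch

# Hypothetical loader: skip bad embedding files rather than aborting startup.
def load_embeddings(paths):
    loaded, skipped = [], []
    for path in paths:
        try:
            loaded.append(torch.load(path, map_location="cpu"))
        except Exception as err:          # corrupt, truncated, or wrong format
            print(f">> Skipping bad embedding {path}: {err}")
            skipped.append(path)
    return loaded, skipped
```
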
Kevin Turner
671c5943e4 Merge remote-tracking branch 'origin/main' into api/add-trigger-string-retrieval
# Conflicts:
#	ldm/generate.py
2023-02-18 17:44:59 -08:00
Lincoln Stein
d01b7ea2d2 remove debug statement & actually do merge 2023-02-18 11:19:06 -05:00
Lincoln Stein
4fa91724d9 fix conversion of checkpoints into incompatible diffusers models
- The checkpoint conversion script was generating diffusers models
  with the safety checker set to null. This resulted in models
  that could not be merged with ones that have the safety checker
  activated.

- This PR fixes the issue by incorporating the safety checker into
  all 1.x-derived checkpoints, regardless of user's nsfw_checker setting.
2023-02-18 11:07:38 -05:00
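
An illustrative sketch of attaching the standard safety-checker components to a converted pipeline, assuming the usual diffusers/transformers classes and a hypothetical `convert_ckpt_to_diffusers` step; this is a sketch of the idea, not the project's conversion script.

```python
# Hedged sketch: always bundle the safety checker into converted 1.x models so
# they remain mergeable with models converted with the checker enabled.
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from transformers import AutoFeatureExtractor

safety_checker = StableDiffusionSafetyChecker.from_pretrained(
    "CompVis/stable-diffusion-safety-checker"
)
feature_extractor = AutoFeatureExtractor.from_pretrained(
    "CompVis/stable-diffusion-safety-checker"
)
# pipe = convert_ckpt_to_diffusers(...)   # hypothetical conversion step
# pipe.safety_checker = safety_checker
# pipe.feature_extractor = feature_extractor
```
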
Damian Stewart
d5d2e1d7a3
Merge branch 'main' into fix/expected-torch-device 2023-02-18 15:23:08 +01:00
Lincoln Stein
587faa3e52 preparation for startup option editor 2023-02-18 08:51:26 -05:00
Iman Karim
2bf2f627e4 Fix for issue #2707 2023-02-18 11:40:12 +01:00
blessedcoolant
11a70e9764
Merge branch 'main' into patch-14 2023-02-18 18:45:05 +13:00
Kevin Turner
6b702c32ca fix(xformers): shush about not having Triton available.
It's not readily available on Windows and xformers only uses it on some very specific hardware anyway.
2023-02-17 17:41:27 -08:00
Kevin Turner
b8212e4dea fix(diffusers_pipeline): ensure cuda.get_mem_info always gets a specific device index.
Also tighten up the typing of `device` attributes in general.
2023-02-17 16:56:15 -08:00
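
A short sketch of the pitfall being fixed, using PyTorch's `torch.cuda.mem_get_info`: querying free VRAM without an explicit index reports the *current* CUDA device, which is not necessarily the device the pipeline lives on. The helper name here is illustrative.

```python
import torch

# Hedged sketch: always resolve an explicit device index before asking CUDA
# how much memory is free.
def free_vram_bytes(device: torch.device) -> int:
    if device.type != "cuda":
        return 0
    index = device.index if device.index is not None else torch.cuda.current_device()
    free, _total = torch.cuda.mem_get_info(index)
    return free
```
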
Lincoln Stein
1ae8986451 add log_tokenization to globals 2023-02-17 16:47:32 -05:00
Lincoln Stein
b305c240de fix syntax errors introduced by github web-ui edits 2023-02-17 16:44:20 -05:00
spezialspezial
7a518218e5
AttributeError: 'Namespace' object has no attribute 'log_tokenization'
Could be fixed here or alternatively declared in globals.py
2023-02-17 22:11:49 +01:00
Lincoln Stein
fc14ac7faa
Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-17 15:53:57 -05:00
Lincoln Stein
95e2739c47
Merge branch 'main' into bugfix/merge-gui 2023-02-17 15:42:53 -05:00
Lincoln Stein
f129393a2e document add_difference on-screen 2023-02-17 15:42:06 -05:00
Lincoln Stein
c55bbd1a85
Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-17 15:00:33 -05:00
Lincoln Stein
4888d0d832 fix slider and interpolations
- fix alpha slider to show values from 0.01 to 0.99
- fix interpolation list to show 'difference' method for 3 models,
  and weighted_sum, sigmoid and inverse_sigmoid methods for 2
2023-02-17 14:46:26 -05:00
Lincoln Stein
47de3fb007 correct display of 'add_difference' method when three models defined
- due to a typo, the add_difference method was being displayed as "['add_difference']"
2023-02-17 14:41:02 -05:00
Lincoln Stein
159ce2ea08
Merge branch 'main' into bugfix/convert-v2-models 2023-02-16 23:00:58 -05:00
Lincoln Stein
65a7432b5a disable xformers if cuda not available 2023-02-16 22:20:30 -05:00
Lincoln Stein
2fa14200aa
Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-16 22:12:39 -05:00
Lincoln Stein
d69156c616 remove superseded code 2023-02-16 22:05:00 -05:00
Lincoln Stein
f3351a5e47 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 21:51:15 -05:00
Lincoln Stein
f3f4c68acc fix model download and autodetection bugs
- Corrected an error that caused the --full-precision argument to be ignored
  when models were downloaded using the --yes argument.

- Improved autodetection of v1 inpainting files; no longer relies on the
  file having 'inpaint' in the name.
2023-02-16 21:37:50 -05:00
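
A hedged sketch of weight-based detection of v1 inpainting checkpoints, assuming the usual convention that an inpainting UNet's first convolution takes 9 input channels (4 latent + 4 masked-latent + 1 mask) instead of 4; the function name and exact key are illustrative.

```python
import torch

# Hypothetical detector: recognize a v1 inpainting checkpoint from its weights
# rather than from "inpaint" appearing in the filename.
def looks_like_v1_inpainting(ckpt_path: str) -> bool:
    sd = torch.load(ckpt_path, map_location="cpu")
    sd = sd.get("state_dict", sd)
    key = "model.diffusion_model.input_blocks.0.0.weight"
    return key in sd and sd[key].shape[1] == 9
```
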
Kevin Turner
8a0d45ac5a
new OffloadingDevice loads one model at a time, on demand (#2596)
* new OffloadingDevice loads one model at a time, on demand

* fixup! new OffloadingDevice loads one model at a time, on demand

* fix(prompt_to_embeddings): call the text encoder directly instead of its forward method

allowing any associated hooks to run with it.

* more attempts to get things on the right device from the offloader

* more attempts to get things on the right device from the offloader

* make offloading methods an explicit part of the pipeline interface

* inlining some calls where device is only used once

* ensure model group is ready after pipeline.to is called

* fixup! Strategize slicing based on free [V]RAM (#2572)

* doc(offloading): docstrings for offloading.ModelGroup

* doc(offloading): docstrings for offloading-related pipeline methods

* refactor(offloading): s/SimpleModelGroup/FullyLoadedModelGroup

* refactor(offloading): s/HotSeatModelGroup/LazilyLoadedModelGroup

to frame it in the same terms as "FullyLoadedModelGroup"

---------

Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-16 23:48:27 +00:00
Lincoln Stein
314ed7d8f6
Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 03:24:02 -05:00
Lincoln Stein
b1341bc611 fully functional and ready for review
- quashed multiple bugs in model conversion and importing
- found an old issue in the handling of resuming interrupted downloads
- will require extensive testing
2023-02-16 03:22:25 -05:00
Lincoln Stein
07be605dcb mostly working 2023-02-16 01:30:59 -05:00
Lincoln Stein
fe318775c3 bring in url download bugfix from PR 2630 2023-02-16 00:37:17 -05:00
Lincoln Stein
1bb07795d8 model installer downloads starter models + user-provided paths and repo_ids
- Ability to scan directory not yet implemented
- Can't download from Civitai due to incomplete URL download implementation
2023-02-16 00:34:15 -05:00
blessedcoolant
92906aeb08 Merge branch 'main' into webui-model-conversion 2023-02-16 09:31:28 +13:00
Jonathan
cab41f0538
Fix perlin noise generator for diffusers tensors (#2678)
Tensors with diffusers no longer have to be multiples of 8. This broke Perlin noise generation. We now generate noise for the next largest multiple of 8 and return a cropped result. Fixes #2674.
2023-02-15 19:37:42 +01:00
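
A conceptual sketch of the workaround described above: generate noise at the next multiple of 8 in each spatial dimension, then crop back to the requested size. Plain Gaussian noise stands in for the real Perlin generator here.

```python
import torch

# Hedged sketch: round each spatial dimension up to a multiple of 8, generate
# the noise at that size, and crop the result to the requested shape.
def noise_for_any_size(height: int, width: int, channels: int = 4) -> torch.Tensor:
    pad_h = (height + 7) // 8 * 8
    pad_w = (width + 7) // 8 * 8
    noise = torch.randn(channels, pad_h, pad_w)   # stand-in for perlin_noise()
    return noise[:, :height, :width]
```
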
Lincoln Stein
e87a2fe14b model installer frontend done - needs to be hooked to backend 2023-02-15 01:07:39 -05:00
Eugene Brodsky
265ccaca4a
Merge branch 'main' into enhance/update-menu 2023-02-14 20:48:36 -05:00
Jonathan
093174942b
Add thresholding for all diffusers types (#2479)
`generator` now asks `InvokeAIDiffuserComponent` to do postprocessing work on latents after every step. Thresholding - now implemented as replacing latents outside of the threshold with random noise - is called at this point. This postprocessing step is also where we can hook up symmetry and other image latent manipulations in the future.

Note: code at this layer doesn't need to worry about MPS as relevant torch functions are wrapped and made MPS-safe by `generator.py`.
2023-02-14 18:00:34 -06:00
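
A hedged sketch of the post-step hook described above: latent values outside ±threshold are replaced with fresh random noise rather than clamped. The scaling of the replacement noise is an assumption for illustration, not the project's exact formula.

```python
import torch

# Hypothetical thresholding hook applied to latents after each denoising step.
def threshold_latents(latents: torch.Tensor, threshold: float) -> torch.Tensor:
    if threshold <= 0:
        return latents
    out_of_range = latents.abs() > threshold
    replacement = torch.randn_like(latents) * threshold   # assumed scaling
    return torch.where(out_of_range, replacement, latents)
```
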
Lincoln Stein
f299f40763 convert existing model display to column format 2023-02-14 16:32:54 -05:00
Lincoln Stein
7545e38655 frontend design done; functionality not hooked up yet 2023-02-14 00:02:19 -05:00
Lincoln Stein
d38e7170fe fix broken !import_model downloads
1. Now works with sites that produce lots of redirects, such as CIVITAI
2. Derive name of destination model file from HTTP Content-Disposition header,
   if present.
3. Swap \\ for / in file paths provided by users, to hopefully fix issues with
   Windows.
2023-02-13 22:14:24 -05:00
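
A hedged sketch of the download behaviour described above, using `requests` (not the project's actual downloader): follow redirects, prefer the filename from the `Content-Disposition` header when the server provides one, and normalize user-supplied Windows-style paths.

```python
import re
from pathlib import Path
import requests

# Hypothetical downloader illustrating the three fixes in this commit.
def download_model(url: str, dest_dir: str) -> Path:
    dest_dir = Path(dest_dir.replace("\\", "/"))           # normalize Windows paths
    resp = requests.get(url, allow_redirects=True, stream=True, timeout=30)
    resp.raise_for_status()
    cd = resp.headers.get("Content-Disposition", "")
    match = re.search(r'filename="?([^";]+)"?', cd)        # prefer server's filename
    name = match.group(1) if match else Path(url).name
    dest = dest_dir / name
    with open(dest, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
    return dest
```
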
Lincoln Stein
e29399e032 don't even try to load incompatible embeddings 2023-02-13 17:00:52 -05:00
Lincoln Stein
bc18a94d8c add ability to retrieve current list of embedding trigger strings
This PR adds a new attribute to ldm.generate, `embedding_trigger_strings`:

```
gen = Generate(...)
strings = gen.embedding_trigger_strings
strings = gen.embedding_trigger_strings()
```

The trigger strings will change when the model is updated to show only
those strings which are compatible with the current
model. Dynamically-downloaded triggers from the HF Concepts Library
will only show up after they are used for the first time. However, the
full list of concepts available for download can be retrieved
programmatically like this:

```
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
concepts = HuggingFaceConceptsLibrary()
trigger_strings = concepts.list_concepts()
```
2023-02-13 14:11:36 -05:00
Lincoln Stein
5d2bdd478c
Merge branch 'main' into bugfix/convert-v2-models 2023-02-13 13:15:05 -05:00
Lincoln Stein
9cacba916b
Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-13 09:31:34 -05:00
Lincoln Stein
fbbbba2fac correct crash on edge case 2023-02-13 07:40:15 -05:00
Lincoln Stein
47f94bde04
Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-12 23:59:31 -05:00
Lincoln Stein
197e6b95e3 add missing file 2023-02-12 23:59:18 -05:00
Lincoln Stein
8e47ca8d57
Merge branch 'main' into bugfix/prevent-ti-frontend-crash 2023-02-12 23:56:41 -05:00
Lincoln Stein
714fff39ba add new console frontend to initial model selection, and other improvements
1. The invokeai-configure script has now been refactored. The work of
   selecting and downloading initial models at install time is now done
   by a script named invokeai-initial-models (module
   name is ldm.invoke.config.initial_model_select)

   The calling arguments for invokeai-configure have not changed, so
   nothing should break. After initializing the root directory, the
   script calls invokeai-initial-models to let the user select the
   starting models to install.

2. invokeai-initial-models puts up a console GUI with checkboxes to
   indicate which models to install. It respects the --default_only
   and --yes arguments so that CI will continue to work.

3. User can now edit the VAE assigned to diffusers models in the CLI.

4. Fixed a bug that caused a crash during model loading when the VAE
   is set to None, rather than being empty.
2023-02-12 23:52:44 -05:00
Eugene Brodsky
89239d1c54 (updater) style 'pip' progress to use dark background 2023-02-12 19:10:11 -05:00
Lincoln Stein
d1ad46d6f1 ask user to make window larger if not enough space for textual inversion/merge gui
- The invokeai-ti and invokeai-merge scripts will crash if there is not enough space
  in the console to fit the user interface (even after responsive formatting).

- This PR intercepts the errors and prints a useful error message advising user to
  make window larger.
2023-02-12 17:38:46 -05:00
Lincoln Stein
6ae7560f66
Merge branch 'main' into webui-model-conversion 2023-02-12 17:22:32 -05:00
Lincoln Stein
e561d19206 a few adjustments
- fix unused variables and f-strings found by pyflakes
- use global_converted_ckpts_dir() to find location of diffusers
- fixed bug in model_manager that was causing the description of converted
  models to read "Optimized version of {model_name}"
2023-02-12 17:20:13 -05:00
Jonathan
9eed1919c2
Strategize slicing based on free [V]RAM (#2572)
Strategize slicing based on free [V]RAM when not using xformers. Free [V]RAM is evaluated at every generation. When there's enough memory, the entire generation occurs without slicing. If there is not enough free memory, we use diffusers' sliced attention.
2023-02-12 18:24:15 +00:00
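
A conceptual sketch of the heuristic described above, using diffusers' real `enable_attention_slicing` / `disable_attention_slicing` pipeline methods; the memory estimate itself is an illustrative guess, not the project's actual formula.

```python
import torch

# Hedged sketch: enable sliced attention only when free VRAM looks too small
# for the attention maps this generation will need.
def configure_attention(pipeline, latents: torch.Tensor) -> None:
    if latents.device.type != "cuda":
        pipeline.enable_attention_slicing()
        return
    free, _total = torch.cuda.mem_get_info(latents.device)
    b, _c, h, w = latents.shape
    est_attention_bytes = 16 * b * (h * w) ** 2 * 2        # rough fp16 guess
    if est_attention_bytes > free:
        pipeline.enable_attention_slicing()
    else:
        pipeline.disable_attention_slicing()
```
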
Lincoln Stein
24e32f6ae2 add 'update' action to launcher script
- Adds an update action to launcher script
- This action calls new python script `invokeai-update`, which prompts
  user to update to latest release version, main development version,
  or an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.
2023-02-11 22:32:48 -05:00
Lincoln Stein
c660dcdfcd improve ability to bulk import .ckpt and .safetensors
This commit cleans up the code that did bulk imports of legacy model
files. The code has been refactored, and the user is now offered the
option of importing all the model files found in the directory, or
selecting which ones to import.
2023-02-11 17:59:12 -05:00
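
A hedged sketch of the directory scan this bulk importer performs (illustrative helper, not the project's code): collect every legacy checkpoint-style file under a folder so the user can import them all or pick from the list.

```python
from pathlib import Path

# Hypothetical scanner for legacy model files eligible for bulk import.
LEGACY_SUFFIXES = {".ckpt", ".safetensors"}

def find_legacy_models(folder: str) -> list[Path]:
    return sorted(
        p for p in Path(folder).rglob("*")
        if p.is_file() and p.suffix.lower() in LEGACY_SUFFIXES
    )
```
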