Commit Graph

1118 Commits

SHA1 Message Date
7a518218e5 AttributeError: 'Namespace' object has no attribute 'log_tokenization'
Could be fixed here, or alternatively declared in globals.py.
2023-02-17 22:11:49 +01:00
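A minimal sketch of the guard this commit points at, assuming the option is read off an argparse Namespace; the call site and the False default are assumptions, not the commit's exact fix:

```
import argparse

def log_tokenization_enabled(opt: argparse.Namespace) -> bool:
    # Fall back to False when the parser never defined --log_tokenization,
    # instead of raising AttributeError; the alternative fix is to declare
    # the default in globals.py, as the commit message suggests.
    return getattr(opt, "log_tokenization", False)
```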
fc14ac7faa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-17 15:53:57 -05:00
95e2739c47 Merge branch 'main' into bugfix/merge-gui 2023-02-17 15:42:53 -05:00
f129393a2e document add_difference on-screen 2023-02-17 15:42:06 -05:00
c55bbd1a85 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-17 15:00:33 -05:00
4888d0d832 fix slider and interpolations
- fix alpha slider to show values from 0.01 to 0.99
- fix interpolation list to show 'difference' method for 3 models,
  and weighted_sum, sigmoid and inverse_sigmoid methods for 2
2023-02-17 14:46:26 -05:00
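A minimal sketch of the interpolation choices described in the commit above; the helper name and constants are illustrative, not the merge GUI's actual code:

```
ALPHA_MIN, ALPHA_MAX = 0.01, 0.99  # alpha slider bounds from the commit above

def interpolation_methods(model_count: int) -> list[str]:
    # Three models: only the 'add_difference' method applies.
    if model_count == 3:
        return ["add_difference"]
    # Two models: weighted_sum, sigmoid and inverse_sigmoid.
    return ["weighted_sum", "sigmoid", "inverse_sigmoid"]
```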
47de3fb007 correct display of 'add_difference' method when three models defined
- due to typo, the add_difference method was being displayed as "['add_difference']"
2023-02-17 14:41:02 -05:00
159ce2ea08 Merge branch 'main' into bugfix/convert-v2-models 2023-02-16 23:00:58 -05:00
65a7432b5a disable xformers if cuda not available 2023-02-16 22:20:30 -05:00
2fa14200aa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-16 22:12:39 -05:00
d69156c616 remove superseded code 2023-02-16 22:05:00 -05:00
f3351a5e47 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 21:51:15 -05:00
f3f4c68acc fix model download and autodetection bugs
- Corrected error that caused --full-precision argument to be ignored
  when models downloaded using the --yes argument.

- Improved autodetection of v1 inpainting files; no longer relies on the
  file having 'inpaint' in the name.
2023-02-16 21:37:50 -05:00
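A hedged sketch of filename-independent v1 inpainting detection: standard SD v1 inpainting UNets take 9 input channels, so the first conv weight can be inspected directly. The key name and shape test describe the general technique and are an assumption about this commit's exact code:

```
import torch

def looks_like_v1_inpainting(ckpt_path: str) -> bool:
    sd = torch.load(ckpt_path, map_location="cpu")
    sd = sd.get("state_dict", sd)
    weights = sd.get("model.diffusion_model.input_blocks.0.0.weight")
    # Inpainting UNets concatenate mask and masked image, giving 9 input channels.
    return weights is not None and weights.shape[1] == 9
```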
8a0d45ac5a new OffloadingDevice loads one model at a time, on demand (#2596)
* new OffloadingDevice loads one model at a time, on demand

* fixup! new OffloadingDevice loads one model at a time, on demand

* fix(prompt_to_embeddings): call the text encoder directly instead of its forward method

allowing any associated hooks to run with it.

* more attempts to get things on the right device from the offloader

* more attempts to get things on the right device from the offloader

* make offloading methods an explicit part of the pipeline interface

* inlining some calls where device is only used once

* ensure model group is ready after pipeline.to is called

* fixup! Strategize slicing based on free [V]RAM (#2572)

* doc(offloading): docstrings for offloading.ModelGroup

* doc(offloading): docstrings for offloading-related pipeline methods

* refactor(offloading): s/SimpleModelGroup/FullyLoadedModelGroup

* refactor(offloading): s/HotSeatModelGroup/LazilyLoadedModelGroup

to frame it in the same terms as "FullyLoadedModelGroup"

---------

Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-16 23:48:27 +00:00
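A minimal sketch, under stated assumptions, of the "one model at a time, on demand" idea behind the new LazilyLoadedModelGroup; the real offloading module adds hooks and readiness checks that are omitted here, and the class below is illustrative:

```
import torch

class LazyModelGroup:
    def __init__(self, execution_device: torch.device):
        self.execution_device = execution_device
        self.offload_device = torch.device("cpu")
        self.models: set[torch.nn.Module] = set()

    def install(self, model: torch.nn.Module) -> None:
        # New members start on the offload device.
        self.models.add(model)
        model.to(self.offload_device)

    def load(self, model: torch.nn.Module) -> None:
        # Send every other member back to CPU, then bring the requested model up.
        for other in self.models:
            if other is not model:
                other.to(self.offload_device)
        model.to(self.execution_device)
```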
314ed7d8f6 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 03:24:02 -05:00
b1341bc611 fully functional and ready for review
- quashed multiple bugs in model conversion and importing
- found old issue in handling of resume of interrupted downloads
- will require extensive testing
2023-02-16 03:22:25 -05:00
07be605dcb mostly working 2023-02-16 01:30:59 -05:00
fe318775c3 bring in url download bugfix from PR 2630 2023-02-16 00:37:17 -05:00
1bb07795d8 model installer downloads starter models + user-provided paths and repo_ids
- Ability to scan directory not yet implemented
- Can't download from Civitai due to incomplete URL download implementation
2023-02-16 00:34:15 -05:00
92906aeb08 Merge branch 'main' into webui-model-conversion 2023-02-16 09:31:28 +13:00
cab41f0538 Fix perlin noise generator for diffusers tensors (#2678)
Tensor dimensions with diffusers no longer have to be multiples of 8. This broke Perlin noise generation. We now generate noise for the next largest multiple of 8 and return a cropped result. Fixes #2674.
2023-02-15 19:37:42 +01:00
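A minimal sketch of the pad-then-crop workaround described above, with plain Gaussian noise standing in for the actual Perlin generator:

```
import math
import torch

def noise_for_any_size(width: int, height: int) -> torch.Tensor:
    # Generate at the next multiple of 8, then crop back to the requested size.
    w8 = math.ceil(width / 8) * 8
    h8 = math.ceil(height / 8) * 8
    noise = torch.randn(h8, w8)  # stand-in for the Perlin noise generator
    return noise[:height, :width]
```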
e87a2fe14b model installer frontend done - needs to be hooked to backend 2023-02-15 01:07:39 -05:00
265ccaca4a Merge branch 'main' into enhance/update-menu 2023-02-14 20:48:36 -05:00
093174942b Add thresholding for all diffusers types (#2479)
`generator` now asks `InvokeAIDiffuserComponent` to do postprocessing work on latents after every step. Thresholding - now implemented as replacing latents outside of the threshold with random noise - is called at this point. This postprocessing step is also where we can hook up symmetry and other image latent manipulations in the future.

Note: code at this layer doesn't need to worry about MPS as relevant torch functions are wrapped and made MPS-safe by `generator.py`.
2023-02-14 18:00:34 -06:00
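A hedged sketch of the thresholding step as described above: latent values outside the threshold are replaced with fresh random noise. The function name and signature are illustrative, not the postprocessing hook's actual interface:

```
import torch

def threshold_latents(latents: torch.Tensor, threshold: float) -> torch.Tensor:
    if threshold <= 0:
        return latents
    noise = torch.randn_like(latents)
    outside = latents.abs() > threshold
    return torch.where(outside, noise, latents)
```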
f299f40763 convert existing model display to column format 2023-02-14 16:32:54 -05:00
7545e38655 frontend design done; functionality not hooked up yet 2023-02-14 00:02:19 -05:00
d38e7170fe fix broken !import_model downloads
1. Now works with sites that produce lots of redirects, such as CIVITAI
2. Derive name of destination model file from HTTP Content-Disposition header,
   if present.
3. Swap \\ for / in file paths provided by users, to hopefully fix issues with
   Windows.
2023-02-13 22:14:24 -05:00
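A hedged sketch of points 1–3 above: follow redirects, prefer the Content-Disposition filename, and normalize user-supplied backslashes. The helper names and the regex are assumptions, not the downloader's exact code:

```
import re
from urllib.parse import urlparse
import requests

def destination_filename(url: str) -> str:
    # Redirects (e.g. from Civitai) are followed; the final response's headers
    # and URL are what matter for naming the model file.
    resp = requests.get(url, stream=True, allow_redirects=True)
    disposition = resp.headers.get("Content-Disposition", "")
    match = re.search(r'filename="?([^";]+)"?', disposition)
    if match:
        return match.group(1)
    return urlparse(resp.url).path.split("/")[-1]

def normalize_user_path(path: str) -> str:
    # Swap Windows-style backslashes for forward slashes.
    return path.replace("\\", "/")
```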
e29399e032 don't even try to load incompatible embeddings 2023-02-13 17:00:52 -05:00
bc18a94d8c add ability to retrieve current list of embedding trigger strings
This PR adds a new attribute to ldm.generate, `embedding_trigger_strings`:

```
gen = Generate(...)
strings = gen.embedding_trigger_strings
strings = gen.embedding_trigger_strings()
```

The trigger strings will change when the model is updated to show only
those strings which are compatible with the current
model. Dynamically-downloaded triggers from the HF Concepts Library
will only show up after they are used for the first time. However, the
full list of concepts available for download can be retrieved
programmatically like this:

```
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
concepts = HuggingFaceConceptsLibrary()
trigger_strings = concepts.list_concepts()
```
2023-02-13 14:11:36 -05:00
5d2bdd478c Merge branch 'main' into bugfix/convert-v2-models 2023-02-13 13:15:05 -05:00
9cacba916b Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-13 09:31:34 -05:00
fbbbba2fac correct crash on edge case 2023-02-13 07:40:15 -05:00
47f94bde04 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-12 23:59:31 -05:00
197e6b95e3 add missing file 2023-02-12 23:59:18 -05:00
8e47ca8d57 Merge branch 'main' into bugfix/prevent-ti-frontend-crash 2023-02-12 23:56:41 -05:00
714fff39ba add new console frontend to initial model selection, and other improvements
1. The invokeai-configure script has now been refactored. The work of
   selecting and downloading initial models at install time is now done
   by a script named invokeai-initial-models (module
   name is ldm.invoke.config.initial_model_select)

   The calling arguments for invokeai-configure have not changed, so
   nothing should break. After initializing the root directory, the
   script calls invokeai-initial-models to let the user select the
   starting models to install.

2. invokeai-initial-models puts up a console GUI with checkboxes to
   indicate which models to install. It respects the --default_only
   and --yes arguments so that CI will continue to work.

3. User can now edit the VAE assigned to diffusers models in the CLI.

4. Fixed a bug that caused a crash during model loading when the VAE
   is set to None, rather than being empty.
2023-02-12 23:52:44 -05:00
89239d1c54 (updater) style 'pip' progress to use dark background 2023-02-12 19:10:11 -05:00
d1ad46d6f1 ask user to make window larger if not enough space for textual inversion/merge gui
- The invokeai-ti and invokeai-merge scripts will crash if there is not enough space
  in the console to fit the user interface (even after responsive formatting).

- This PR intercepts the errors and prints a useful error message advising user to
  make window larger.
2023-02-12 17:38:46 -05:00
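A minimal sketch of the interception, assuming the scripts' npyscreen UI raises NotEnoughSpaceForWidget when the terminal is too small; the wrapper itself is illustrative:

```
import sys
import npyscreen

def run_gui(app: npyscreen.NPSAppManaged) -> None:
    try:
        app.run()
    except npyscreen.wgwidget.NotEnoughSpaceForWidget:
        # Advise the user instead of crashing with a traceback.
        print("You need to make this window larger; please resize it and try again.")
        sys.exit(-1)
```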
6ae7560f66 Merge branch 'main' into webui-model-conversion 2023-02-12 17:22:32 -05:00
e561d19206 a few adjustments
- fix unused variables and f-strings found by pyflakes
- use global_converted_ckpts_dir() to find location of diffusers
- fixed bug in model_manager that was causing the description of converted
  models to read "Optimized version of {model_name}"
2023-02-12 17:20:13 -05:00
9eed1919c2 Strategize slicing based on free [V]RAM (#2572)
Strategize slicing based on free [V]RAM when not using xformers. Free [V]RAM is evaluated at every generation. When there's enough memory, the entire generation occurs without slicing. If there is not enough free memory, we use diffusers' sliced attention.
2023-02-12 18:24:15 +00:00
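A hedged sketch of the decision described above; the real implementation estimates how much memory the attention buffers need for the current generation, which is elided here as a parameter:

```
import torch
from diffusers import StableDiffusionPipeline

def configure_slicing(pipe: StableDiffusionPipeline, bytes_needed: int) -> None:
    if not torch.cuda.is_available():
        pipe.enable_attention_slicing()
        return
    free_bytes, _total = torch.cuda.mem_get_info()
    if free_bytes >= bytes_needed:
        pipe.disable_attention_slicing()   # enough free VRAM: run unsliced
    else:
        pipe.enable_attention_slicing()    # fall back to diffusers' sliced attention
```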
24e32f6ae2 add 'update' action to launcher script
- Adds an update action to launcher script
- This action calls new python script `invokeai-update`, which prompts
  user to update to latest release version, main development version,
  or an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.
2023-02-11 22:32:48 -05:00
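A minimal sketch of the final pip step, assuming installation straight from a GitHub archive of the chosen tag or branch; the URL form and package spec are assumptions, not the script's exact command:

```
import subprocess
import sys

def update_to(tag_or_branch: str) -> None:
    # Install whatever release tag, branch, or commit the user selected.
    spec = f"invokeai @ https://github.com/invoke-ai/InvokeAI/archive/{tag_or_branch}.zip"
    subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", spec])
```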
c660dcdfcd improve ability to bulk import .ckpt and .safetensors
This commit cleans up the code that did bulk imports of legacy model
files. The code has been refactored, and the user is now offered the
option of importing all the model files found in the directory, or
selecting which ones to import.
2023-02-11 17:59:12 -05:00
9e0250c0b4 Merge branch 'main' into webui-model-conversion 2023-02-12 11:13:13 +13:00
4f7af55bc3 if importing a v2 ckpt model, convert to diffusers 2023-02-11 16:35:45 -05:00
717d53a773 Merge branch 'main' into bugfix/convert-v2-models 2023-02-11 14:27:52 -05:00
b71e675e8d support conversion of v2 models
- This PR introduces a CLI prompt for the proper configuration file to
  use when converting a ckpt file, in order to support both inpainting
  and v2 model files.

- When the user tries to directly !import a v2 model, it prints out a proper
  warning that v2 ckpts are not directly supported.
2023-02-11 09:39:41 -05:00
d3c850104b pulling esrgan denoise strength through to the generate API. 2023-02-12 02:47:37 +13:00
c00155f6a4 pulling esrgan denoise strength through to the generate API. 2023-02-12 02:47:37 +13:00
11e422cf29 Ignore two files names instead of the entire folder
Rather than bypassing any path with "diffusers" in it, I'm specifically bypassing model.safetensors and diffusion_pytorch_model.safetensors, both of which should be diffusers files in most cases.
2023-02-12 00:13:22 +13:00
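A minimal sketch of the narrower filter: skip exactly these two diffusers weight filenames rather than every path containing "diffusers". The helper name is illustrative:

```
from pathlib import Path

DIFFUSERS_WEIGHT_FILES = {"model.safetensors", "diffusion_pytorch_model.safetensors"}

def should_scan(path: Path) -> bool:
    # Only these two filenames are treated as diffusers weights and skipped.
    return path.name not in DIFFUSERS_WEIGHT_FILES
```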