Commit Graph

437 Commits

SHA1 Message Date
671c5943e4 Merge remote-tracking branch 'origin/main' into api/add-trigger-string-retrieval
# Conflicts:
#	ldm/generate.py
2023-02-18 17:44:59 -08:00
d01b7ea2d2 remove debug statement & actually do merge 2023-02-18 11:19:06 -05:00
4fa91724d9 fix conversion of checkpoints into incompatible diffusers models
- The checkpoint conversion script was generating diffusers models
  with the safety checker set to null. This resulted in models
  that could not be merged with ones that have the safety checker
  activated.

- This PR fixes the issue by incorporating the safety checker into
  all 1.x-derived checkpoints, regardless of the user's nsfw_checker setting.
2023-02-18 11:07:38 -05:00
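A minimal sketch of the idea behind the fix above, assuming the diffusers/transformers APIs and the standard CompVis safety-checker weights; the helper is illustrative and not the project's actual conversion code:

```
# Sketch only: attach the stock SD 1.x safety checker to a converted pipeline
# so it can be merged with checker-enabled diffusers models.
from diffusers import StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from transformers import AutoFeatureExtractor

SAFETY_REPO = "CompVis/stable-diffusion-safety-checker"  # standard SD 1.x checker weights

def attach_safety_checker(pipe: StableDiffusionPipeline) -> StableDiffusionPipeline:
    """Ensure the pipeline carries a safety checker instead of a null entry."""
    if pipe.safety_checker is None:
        pipe.safety_checker = StableDiffusionSafetyChecker.from_pretrained(SAFETY_REPO)
        pipe.feature_extractor = AutoFeatureExtractor.from_pretrained(SAFETY_REPO)
    return pipe
```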
d5d2e1d7a3 Merge branch 'main' into fix/expected-torch-device 2023-02-18 15:23:08 +01:00
11a70e9764 Merge branch 'main' into patch-14 2023-02-18 18:45:05 +13:00
6b702c32ca fix(xformers): shush about not having Triton available.
It's not readily available on Windows and xformers only uses it on some very specific hardware anyway.
2023-02-17 17:41:27 -08:00
b8212e4dea fix(diffusers_pipeline): ensure cuda.get_mem_info always gets a specific device index.
Also tighten up the typing of `device` attributes in general.
2023-02-17 16:56:15 -08:00
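For illustration, a small sketch of the device-index normalization described above; the helper names are assumptions, not the pipeline's actual attributes:

```
# Illustrative helpers: give a bare "cuda" device an explicit index before
# asking torch for free/total VRAM.
import torch

def normalize_device(device: torch.device) -> torch.device:
    """Return the device with an explicit index (e.g. 'cuda' -> 'cuda:0')."""
    if device.type == "cuda" and device.index is None:
        return torch.device("cuda", torch.cuda.current_device())
    return device

def free_vram_bytes(device: torch.device) -> int:
    free, _total = torch.cuda.mem_get_info(normalize_device(device))
    return free
```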
1ae8986451 add log_tokenization to globals 2023-02-17 16:47:32 -05:00
b305c240de fix syntax errors introduced by github web-ui edits 2023-02-17 16:44:20 -05:00
7a518218e5 AttributeError: 'Namespace' object has no attribute 'log_tokenization'
Could be fixed here or alternatively declared in file globals.py
2023-02-17 22:11:49 +01:00
fc14ac7faa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-17 15:53:57 -05:00
95e2739c47 Merge branch 'main' into bugfix/merge-gui 2023-02-17 15:42:53 -05:00
f129393a2e document add_difference on-screen 2023-02-17 15:42:06 -05:00
4888d0d832 fix slider and interpolations
- fix alpha slider to show values from 0.01 to 0.99
- fix interpolation list to show 'difference' method for 3 models,
  and weighted_sum, sigmoid and inverse_sigmoid methods for 2
2023-02-17 14:46:26 -05:00
47de3fb007 correct display of 'add_difference' method when three models defined
- due to typo, the add_difference method was being displayed as "['add_difference']"
2023-02-17 14:41:02 -05:00
159ce2ea08 Merge branch 'main' into bugfix/convert-v2-models 2023-02-16 23:00:58 -05:00
2fa14200aa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-16 22:12:39 -05:00
8a0d45ac5a new OffloadingDevice loads one model at a time, on demand (#2596)
* new OffloadingDevice loads one model at a time, on demand

* fixup! new OffloadingDevice loads one model at a time, on demand

* fix(prompt_to_embeddings): call the text encoder directly instead of its forward method

allowing any associated hooks to run with it.

* more attempts to get things on the right device from the offloader

* more attempts to get things on the right device from the offloader

* make offloading methods an explicit part of the pipeline interface

* inlining some calls where device is only used once

* ensure model group is ready after pipeline.to is called

* fixup! Strategize slicing based on free [V]RAM (#2572)

* doc(offloading): docstrings for offloading.ModelGroup

* doc(offloading): docstrings for offloading-related pipeline methods

* refactor(offloading): s/SimpleModelGroup/FullyLoadedModelGroup

* refactor(offloading): s/HotSeatModelGroup/LazilyLoadedModelGroup

to frame it in the same terms as "FullyLoadedModelGroup"

---------

Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-16 23:48:27 +00:00
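A hypothetical sketch of the on-demand loading idea behind this PR (one model on the execution device at a time, the rest offloaded); class and method names here are illustrative and not the actual ModelGroup API:

```
# Not the real offloading.ModelGroup: a minimal stand-in for the
# "load one model at a time, on demand" behavior described above.
import torch

class LazyModelGroup:
    def __init__(self, execution_device: torch.device,
                 offload_device: torch.device = torch.device("cpu")):
        self.execution_device = execution_device
        self.offload_device = offload_device
        self.models: list[torch.nn.Module] = []

    def install(self, *models: torch.nn.Module) -> None:
        """Register models with the group, parking them on the offload device."""
        for m in models:
            self.models.append(m)
            m.to(self.offload_device)

    def load(self, model: torch.nn.Module) -> None:
        """Bring one model onto the execution device, offloading the others."""
        for m in self.models:
            m.to(self.execution_device if m is model else self.offload_device)
```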
92906aeb08 Merge branch 'main' into webui-model-conversion 2023-02-16 09:31:28 +13:00
cab41f0538 Fix perlin noise generator for diffusers tensors (#2678)
Tensors with diffusers no longer have to be multiples of 8. This broke Perlin noise generation. We now generate noise for the next largest multiple of 8 and return a cropped result. Fixes #2674.
2023-02-15 19:37:42 +01:00
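A rough sketch of the workaround described above, with `perlin_noise_2d` standing in for the project's actual noise generator:

```
# Sketch: generate Perlin noise at the next multiple of 8 and crop the result
# back to the requested (non-multiple-of-8) size.
import math
import torch

def perlin_for_any_size(width: int, height: int, perlin_noise_2d) -> torch.Tensor:
    w8 = math.ceil(width / 8) * 8
    h8 = math.ceil(height / 8) * 8
    noise = perlin_noise_2d(w8, h8)   # generator that only accepts multiples of 8
    return noise[:height, :width]     # crop to the size actually requested
```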
093174942b Add thresholding for all diffusers types (#2479)
`generator` now asks `InvokeAIDiffuserComponent` to do postprocessing work on latents after every step. Thresholding - now implemented as replacing latents outside of the threshold with random noise - is called at this point. This postprocessing step is also where we can hook up symmetry and other image latent manipulations in the future.

Note: code at this layer doesn't need to worry about MPS as relevant torch functions are wrapped and made MPS-safe by `generator.py`.
2023-02-14 18:00:34 -06:00
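A minimal sketch of the thresholding step described above (replacing out-of-range latent values with fresh noise); the function name and signature are illustrative:

```
# Sketch: post-step thresholding that swaps latents outside
# [-threshold, threshold] for random noise.
import torch

def threshold_latents(latents: torch.Tensor, threshold: float) -> torch.Tensor:
    if threshold <= 0:
        return latents
    noise = torch.randn_like(latents)
    outside = latents.abs() > threshold
    return torch.where(outside, noise, latents)
```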
d38e7170fe fix broken !import_model downloads
1. Now works with sites that produce lots of redirects, such as CIVITAI
2. Derive name of destination model file from HTTP Content-Disposition header,
   if present.
3. Swap \\ for / in file paths provided by users, to hopefully fix issues with
   Windows.
2023-02-13 22:14:24 -05:00
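An illustrative sketch of the three download fixes above, using `requests`; it is not the commit's actual implementation:

```
# Sketch: follow redirects, prefer the filename from the Content-Disposition
# header when present, and normalize backslashes in user-supplied paths.
import re
from pathlib import Path

import requests

def download_model(url: str, dest_dir: str) -> Path:
    dest = Path(dest_dir.replace("\\", "/"))  # tolerate Windows-style paths
    resp = requests.get(url, stream=True, allow_redirects=True)
    resp.raise_for_status()

    filename = Path(url).name  # fallback: last path segment of the URL
    disposition = resp.headers.get("Content-Disposition", "")
    match = re.search(r'filename="?([^";]+)"?', disposition)
    if match:
        filename = match.group(1)

    target = dest / filename
    with open(target, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
    return target
```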
bc18a94d8c add ability to retrieve current list of embedding trigger strings
This PR adds a new attribute to ldm.generate, `embedding_trigger_strings`:

```
gen = Generate(...)
strings = gen.embedding_trigger_strings
```

The trigger strings will change when the model is updated to show only
those strings which are compatible with the current
model. Dynamically-downloaded triggers from the HF Concepts Library
will only show up after they are used for the first time. However, the
full list of concepts available for download can be retrieved
programmatically like this:

```
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
concepts = HuggingFaceConceptsLibrary()
trigger_strings = concepts.list_concepts()
```
2023-02-13 14:11:36 -05:00
5d2bdd478c Merge branch 'main' into bugfix/convert-v2-models 2023-02-13 13:15:05 -05:00
8e47ca8d57 Merge branch 'main' into bugfix/prevent-ti-frontend-crash 2023-02-12 23:56:41 -05:00
d1ad46d6f1 ask user to make window larger if not enough space for textual inversion/merge gui
- The invokeai-ti and invokeai-merge scripts will crash if there is not enough space
  in the console to fit the user interface (even after responsive formatting).

- This PR intercepts the errors and prints a useful error message advising the user to
  make the window larger.
2023-02-12 17:38:46 -05:00
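A minimal sketch of the error interception described above, with a hypothetical `launch_tui` standing in for the real invokeai-ti / invokeai-merge front ends:

```
# Sketch: wrap the TUI launch so a too-small terminal produces advice
# instead of a crash.
import sys

def run_tui_safely(launch_tui) -> None:
    try:
        launch_tui()
    except Exception as err:  # typically raised when the terminal is too small
        print(f"Could not display the user interface: {err}", file=sys.stderr)
        print("Please make your terminal window larger and try again.", file=sys.stderr)
        sys.exit(-1)
```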
6ae7560f66 Merge branch 'main' into webui-model-conversion 2023-02-12 17:22:32 -05:00
e561d19206 a few adjustments
- fix unused variables and f-strings found by pyflakes
- use global_converted_ckpts_dir() to find location of diffusers
- fixed bug in model_manager that was causing the description of converted
  models to read "Optimized version of {model_name}"
2023-02-12 17:20:13 -05:00
9eed1919c2 Strategize slicing based on free [V]RAM (#2572)
Strategize slicing based on free [V]RAM when not using xformers. Free [V]RAM is evaluated at every generation. When there's enough memory, the entire generation occurs without slicing. If there is not enough free memory, we use diffusers' sliced attention.
2023-02-12 18:24:15 +00:00
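A sketch of the strategy described above, assuming a diffusers pipeline; the memory estimate is a placeholder rather than the PR's actual heuristic:

```
# Sketch: check free VRAM before each generation and only enable diffusers'
# sliced attention when memory looks tight.
import torch
from diffusers import StableDiffusionPipeline

def configure_slicing(pipe: StableDiffusionPipeline, estimated_bytes_needed: int) -> None:
    """Enable sliced attention only when free VRAM looks insufficient."""
    if pipe.device.type != "cuda":
        return  # this heuristic only applies to CUDA devices
    free, _total = torch.cuda.mem_get_info(pipe.device)
    if free >= estimated_bytes_needed:
        pipe.disable_attention_slicing()       # enough room: run unsliced
    else:
        pipe.enable_attention_slicing("max")   # tight on memory: slice maximally
```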
c660dcdfcd improve ability to bulk import .ckpt and .safetensors
This commit cleans up the code that did bulk imports of legacy model
files. The code has been refactored, and the user is now offered the
option of importing all the model files found in the directory, or
selecting which ones to import.
2023-02-11 17:59:12 -05:00
9e0250c0b4 Merge branch 'main' into webui-model-conversion 2023-02-12 11:13:13 +13:00
4f7af55bc3 if importing a v2 ckpt model, convert to diffusers 2023-02-11 16:35:45 -05:00
717d53a773 Merge branch 'main' into bugfix/convert-v2-models 2023-02-11 14:27:52 -05:00
b71e675e8d support conversion of v2 models
- This PR introduces a CLI prompt for the proper configuration file to
  use when converting a ckpt file, in order to support both inpainting
  and v2 model files.

- When the user tries to directly !import a v2 model, it prints out a proper
  warning that v2 ckpts are not directly supported.
2023-02-11 09:39:41 -05:00
c00155f6a4 pulling esrgan denoise strength through to the generate API. 2023-02-12 02:47:37 +13:00
11e422cf29 Ignore two file names instead of the entire folder
Rather than bypassing any path with diffusers in it, I'm specifically bypassing model.safetensors and diffusion_pytorch_model.safetensors, both of which should be diffusers files in most cases.
2023-02-12 00:13:22 +13:00
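A small sketch of the name-based filter described above; the helper itself is illustrative:

```
# Sketch: skip the two diffusers weight files by name rather than skipping
# any path that merely contains the word "diffusers".
from pathlib import Path

DIFFUSERS_WEIGHT_FILES = {"model.safetensors", "diffusion_pytorch_model.safetensors"}

def is_importable_checkpoint(path: str) -> bool:
    p = Path(path)
    return p.suffix in {".ckpt", ".safetensors"} and p.name not in DIFFUSERS_WEIGHT_FILES
```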
7f695fed39 Ignore safetensor or ckpt files inside diffusers model folders.
Basically skips the path if the path has the word diffusers anywhere inside it.
2023-02-12 00:03:42 +13:00
d589ad96aa fix two bugs in conversion of inpaint models from ckpt to diffusers models
- If the CLI was asked to convert the currently loaded model, the model would crash
  on the first rendering. The CLI now refuses to convert a model that is loaded
  in memory (probably a good idea in any case).

- CLI will offer the `v1-inpainting-inference.yaml` as the configuration
  file when importing a .ckpt or .safetensors file that
  has "inpainting" in the name. Otherwise it offers `v1-inference.yaml`
  as the default.
2023-02-10 15:06:37 -05:00
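A sketch of the default-config heuristic described above; the file names come from the commit message, while the helper itself is illustrative:

```
# Sketch: offer the inpainting config when the checkpoint name says
# "inpainting", otherwise fall back to the standard v1 config.
from pathlib import Path

def default_conversion_config(ckpt_path: str) -> str:
    name = Path(ckpt_path).name.lower()
    if "inpainting" in name:
        return "v1-inpainting-inference.yaml"
    return "v1-inference.yaml"
```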
950f450665 Merge branch 'main' into main 2023-02-10 11:52:26 +13:00
158d1ef384 bump version number; update contributors 2023-02-09 13:01:08 -05:00
4a1b4d63ef Change denoise_str default to 0.75 2023-02-09 20:21:09 +13:00
0503680efa Change denoise_str to an arg instead of a class variable 2023-02-09 20:16:23 +13:00
1f76b30e54 adding support for ESRGAN denoising strength, which allows for improved detail retention when upscaling photorealistic faces 2023-02-08 22:36:35 -06:00
12d15a1a3f Up version to 2.3.0-rc7 2023-02-08 15:55:35 -05:00
4ecf016ace Merge branch 'main' into 2.3-documentation-fixes 2023-02-08 12:47:27 -05:00
67f892455f fix crash when --prompt="prompt" is used in CLI
- The following were supposed to be equivalent, but the latter crashes:
```
invoke> banana sushi
invoke> --prompt="banana sushi"
```
This PR fixes the problem.

- Fixes #2548
2023-02-07 22:09:34 -05:00
39b0288595 Merge branch 'main' into 2.3.0rc6 2023-02-07 17:43:38 -05:00
13d12a0ceb Merge branch 'main' into 2.3-documentation-fixes 2023-02-07 17:08:10 -05:00
f0f3520bca Switch to using max for attention slicing in all cases for the time being. (#2569) 2023-02-07 19:28:57 +01:00
bdf683ec41 Merge branch 'main' into enhance/convert-inpaint-models 2023-02-07 06:59:35 -05:00