Commit Graph

1229 Commits

1a7371ea17 remove unused embeddings code 2023-03-01 21:09:22 -05:00
850d1ee984 move models and modules under invokeai/backend/ldm 2023-03-01 18:24:18 -05:00
2c7928b163 remove pycaches from repo 2023-02-28 23:25:35 -05:00
5b6c61fc75 move models and generator into backend 2023-02-28 08:32:11 -05:00
8c9764476c first phase of source tree restructure
This is the first phase of a big shifting of files and directories
in the source tree.

You will need to run `pip install -e .` before the code will work again!

Here's what's in the current commit:

1) Remove a lot of dead code that dealt with checkpoint and safetensor loading.
2) Entire ckpt_generator hierarchy is now gone!
3) ldm.invoke.generator.*   => invokeai.generator.*
4) ldm.model.*              => invokeai.models.*
5) ldm.invoke.model_manager => invokeai.models.model_manager

6) In addition, a number of frequently-accessed classes can be imported
   from the invokeai.models and invokeai.generator modules:

   from invokeai.generator import ( Generator, PipelineIntermediateState,
                                    StableDiffusionGeneratorPipeline, infill_methods)

   from invokeai.models import ( ModelManager, SDLegacyType,
                                 InvokeAIDiffuserComponent, AttentionMapSaver,
                                 DDIMSampler, KSampler, PLMSSampler,
                                 PostprocessingSettings )
2023-02-27 23:52:46 -05:00
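For downstream code, the restructure mostly amounts to swapping import paths. A minimal, illustrative sketch of the migration based on the mappings above (old paths in comments; not exhaustive):

    # Before the restructure:
    # from ldm.invoke.generator import Generator
    # from ldm.invoke.model_manager import ModelManager

    # After the restructure:
    from invokeai.generator import Generator, StableDiffusionGeneratorPipeline
    from invokeai.models import ModelManager
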
a3076cf951 perf(invoke_ai_web_server): encode intermediate result previews as jpeg
This yields size savings of about 80%, and JPEG encoding is still plenty fast.
2023-02-25 21:23:25 -08:00
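A minimal sketch of the trade-off described above, assuming Pillow; the function name and default quality are illustrative, not the server's actual code:

    import io

    from PIL import Image

    def encode_preview(image: Image.Image, quality: int = 75) -> bytes:
        """Encode a throwaway progress preview as JPEG.

        Lossy, but far smaller than PNG for typical previews and fast
        enough to run on every intermediate step.
        """
        buf = io.BytesIO()
        image.convert("RGB").save(buf, format="JPEG", quality=quality)
        return buf.getvalue()
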
17b039e85d doc(invoke_ai_web_server): put docstrings inside their functions
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-25 20:21:47 -08:00
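A short example of the convention:

    def make_image(prompt: str) -> None:
        """Generate an image for `prompt`.

        As the first statement in the function body, this docstring is
        discoverable by help(), IDEs, and documentation tools.
        """
        ...
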
7b7b31637c Merge branch 'main' into refactor_use_compel 2023-02-23 07:43:30 +13:00
7ffaa17551 fix(ui): "use prompt" bug when prompt has colon
This bug is related to the format in which we stored prompts for some time: an array of weighted subprompts.

This caused some strife when recalling a prompt if the prompt had colons in it, due to our recently introduced handling of negative prompts.

Currently there is no need to store a prompt as anything other than a string, so we revert to doing that.

Compatibility with structured prompts is maintained via helper hook.
2023-02-22 20:33:58 +11:00
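To see why the old weighted-subprompt format was fragile, here is a toy illustration (not the actual UI code, which lives in the TypeScript frontend): a naive split on ":" cannot tell a weight separator from a literal colon in the prompt.

    def naive_split(prompt: str):
        """Toy version of the old 'text:weight' subprompt split."""
        text, sep, weight = prompt.rpartition(":")
        if not sep:
            return prompt, 1.0
        try:
            return text, float(weight)
        except ValueError:
            # The colon was part of the prompt text, not a weight separator.
            return prompt, 1.0

    print(naive_split("a cat:1.2"))       # ('a cat', 1.2) -- as intended
    print(naive_split("style: baroque"))  # ('style: baroque', 1.0) -- ambiguous
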
97eac58a50 fix blend tokenization reporting; fix LDM checkpoint support 2023-02-22 10:29:42 +01:00
cedbe8fcd7 fix .blend 2023-02-22 09:04:23 +01:00
1dfaaa2a57 fix web ui issues 2023-02-20 22:58:07 +01:00
ded3f13a33 move all prompting stuff to use compel 2023-02-19 20:42:29 +01:00
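Roughly the pattern compel provides, per its documented API; the model ID and prompt here are examples only, not InvokeAI's integration code:

    from compel import Compel
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

    # compel turns weighted prompt syntax (e.g. "++") into conditioning
    # tensors that the pipeline accepts directly.
    conditioning = compel.build_conditioning_tensor("a forest++ at dawn")
    image = pipe(prompt_embeds=conditioning).images[0]
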
dd849d2e91 Fix Localization Porting Bugs 2023-02-18 19:32:55 +13:00
5b7e4a5f5d Add Error Handling For Merging 2023-02-18 12:17:22 +13:00
ebe0071ed2 feat: [WebUI] Model Merging 2023-02-18 10:13:56 +13:00
bcb1fbe031 add tooltips & status messages to model conversion 2023-02-15 22:28:36 +11:00
e561d19206 a few adjustments
- fix unused variables and f-strings found by pyflakes
- use global_converted_ckpts_dir() to find location of diffusers
- fixed bug in model_manager that was causing the description of converted
  models to read "Optimized version of {model_name}"
2023-02-12 17:20:13 -05:00
b1a53c8ef0 [Model Manager] Backend update to support custom save locations and configs 2023-02-12 11:10:47 +13:00
d0e6a57e48 make inpaint model conversion work
Fixed a couple of bugs:

1. The original config file for the ckpt file is derived from the entry in
   `models.yaml` rather than relying on the user to select one. The implication
   of this is that V2 ckpt models need to be assigned `v2-inference-v.yaml`
   when they are first imported. Otherwise they won't convert right. Note
   that currently V2 ckpts are imported with `v1-inference.yaml`, which
   isn't right either.

2. Fixed a backslash in the output diffusers path, which was causing
   load failures on Linux.

Remaining issues:

1. The radio buttons for selecting the model type are
   nonfunctional. It feels to me like these should be moved into the
   dialogue for importing ckpt/safetensors files, because this is
   where the algorithm needs help from the user.

2. The output diffusers model is written into the same directory as
   the input ckpt file. The CLI does it differently and stores the
   diffusers model in `ROOTDIR/models/converted-ckpts`. We should
   settle on one way or the other.
2023-02-11 15:53:41 -05:00
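A hypothetical sketch of the config lookup described in point 1 above; the function name and models.yaml schema shown are assumptions, not the actual model_manager code:

    from pathlib import Path

    import yaml

    def config_for_ckpt(models_yaml: Path, model_name: str) -> str:
        """Derive the original .yaml config from the model's models.yaml entry.

        Per the commit above, a V2 ckpt must carry v2-inference-v.yaml;
        defaulting to v1-inference.yaml is wrong for V2 models.
        """
        entries = yaml.safe_load(models_yaml.read_text())
        return entries[model_name].get("config", "v1-inference.yaml")
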
96926d6648 v2 Conversion Support & Radio Picker
Converted the picker options to a Radio Group and also updated the backend to use the appropriate config if it is a v2 model that needs to be converted.
2023-02-12 05:00:29 +13:00
310501cd8a Add support for custom config files 2023-02-11 23:34:24 +13:00
94c31f672f Add Initial Checks for Inpainting
The conversion itself is broken. But that's another issue.
2023-02-11 20:41:18 +13:00
f3153d45bc Initial Implementation - Model Conversion Backend 2023-02-11 03:53:15 +13:00
9601febef8 Add denoise_str to ESRGAN - frontend server 2023-02-09 20:16:47 +13:00
2432adb38f In exception handlers, clear the torch CUDA cache (if we're using CUDA) to free up memory for other programs using the GPU and to reduce fragmentation. (#2549) 2023-02-06 10:33:24 -05:00
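The pattern, roughly (a sketch, not the handler's exact code):

    import torch

    def run_with_gpu_cleanup(fn, *args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if torch.cuda.is_available():
                # Release cached allocator blocks so other programs can use
                # the GPU memory, and to reduce fragmentation; then re-raise.
                torch.cuda.empty_cache()
            raise
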
9c8fcaaf86 Beautify & Cleanup WebUI Logs 2023-02-05 22:55:57 +13:00
c061c1b1b6 fix frontend path
point to the package's path instead of searching for it
2023-01-31 08:15:20 +01:00
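A minimal sketch of the idea, with hypothetical paths:

    from pathlib import Path

    # Resolve the bundled frontend relative to this module's location in the
    # installed package, instead of searching the working directory for it.
    FRONTEND_DIR = Path(__file__).parent / "frontend" / "dist"
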
9ad4c03277 Various fixes
1) Downgrade numpy to avoid a dependency conflict with numba
2) Move all non-ldm/invoke files into `invokeai`. This includes assets, backend, frontend, and configs.
3) Fix up the way the backend finds the frontend and the generator finds the NSFW caution.png icon.
2023-01-30 18:42:17 -05:00