Commit Graph

18 Commits

Author SHA1 Message Date
Lincoln Stein
201eb22d76 prevent two models from being marked default in models.yaml 2022-11-11 04:41:02 +00:00
Lincoln Stein
71ee44a827 prevent crash when switching to an invalid model 2022-11-09 10:16:37 -05:00
Lincoln Stein
ab2b5a691d fix model_cache memory management issues 2022-11-01 17:23:20 -04:00
Lincoln Stein
80f2cfe3e3 set default max_models to 2 internally as well as via the CLI arg 2022-10-31 09:05:38 -04:00
Lincoln Stein
0c8f0e3386 add max_load_models parameter for model cache control
- ldm.generate.Generator() now takes an argument named `max_load_models`.
  This is an integer that limits the model cache size. When the cache
  reaches the limit, the oldest models are purged from it.

- The CLI takes an argument --max_load_models, defaulting to 2. This keeps
  one model in GPU and the other in CPU RAM, so you can switch back and
  forth between them quickly.

- To disable model caching entirely, pass --max_load_models=1
2022-10-31 08:53:16 -04:00
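The eviction behaviour described above can be illustrated with a minimal, self-contained sketch; the `ModelCache` class and `get_model` method here are illustrative only, not the project's actual API:

~~~
from collections import OrderedDict

class ModelCache:
    """Count-limited cache: when full, the oldest entry is purged first."""

    def __init__(self, max_load_models=2):
        self.max_load_models = max_load_models
        self.models = OrderedDict()          # name -> loaded model object

    def get_model(self, name, loader):
        if name in self.models:
            self.models.move_to_end(name)    # mark as most recently used
            return self.models[name]
        while len(self.models) >= self.max_load_models:
            self.models.popitem(last=False)  # purge the oldest cached model
        self.models[name] = loader(name)
        return self.models[name]
~~~

With `max_load_models=1`, the cache never holds more than the model currently in use, which corresponds to the `--max_load_models=1` behaviour above.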
Lincoln Stein
13f26a99b8 documentation and usability fixes 2022-10-29 10:37:38 -04:00
Lincoln Stein
ef68a419f1 preload_models.py script downloads the weight files
- the user can select which weight files to download; they are stored via the
  HuggingFace cache
- the user must log in to HuggingFace, generate an access token, and accept
  the license terms the very first time this is run. After that, everything
  works automatically.
- added a placeholder for the model-installation docs
- also removed unused config files; they do not appear to have been needed
  for textual inversion.
2022-10-29 01:02:45 -04:00
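A rough sketch of the download step the script automates, using the `huggingface_hub` client library; the repository and file names below are examples only, and the script itself prompts the user interactively:

~~~
from huggingface_hub import hf_hub_download

# Assumes you have already logged in (for example with `huggingface-cli login`)
# and accepted the model license on the Hugging Face website.
ckpt = hf_hub_download(
    repo_id="CompVis/stable-diffusion-v-1-4-original",  # example repository
    filename="sd-v1-4.ckpt",                            # example weights file
)
print(f"weights cached at: {ckpt}")
~~~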
Lincoln Stein
83a3cc9eb4 start support for 1.5 inpainting model, not complete 2022-10-25 00:30:48 -04:00
Lincoln Stein
f25c1f900f add support for loading VAE autoencoders
To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion-v1

   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit configs/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and VAE model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively, from within the `invoke.py` CLI, you may use the command
   `!edit_model stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion-v1/sd-v1-4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
2022-10-23 09:33:15 -04:00
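For the curious, a hypothetical sketch of what loading a separate VAE amounts to under the hood, assuming a CompVis-style model whose autoencoder lives in `first_stage_model` and a checkpoint that stores its weights under a `state_dict` key (both assumptions, not the project's exact code):

~~~
import torch

def apply_custom_vae(model, vae_path):
    """Overwrite the autoencoder weights of an already-loaded CompVis
    LatentDiffusion model with those from a standalone VAE checkpoint."""
    vae_ckpt = torch.load(vae_path, map_location="cpu")
    vae_weights = vae_ckpt.get("state_dict", vae_ckpt)  # assumed checkpoint layout
    model.first_stage_model.load_state_dict(vae_weights, strict=False)
~~~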
Lincoln Stein
83e6ab08aa further improvements to model loading
- code for committing config changes to models.yaml now in module
  rather than in invoke script
- model marked "default" is now loaded if model not specified on
  command line
- uncache changed models when edited, so that they reload properly
- removed laion from models.yaml and added stable-diffusion-1.5
2022-10-21 00:28:54 -04:00
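A minimal sketch of what a stanza marked as the fallback model might look like, assuming the flag is spelled `default: true` (the weights file name here is just an example):

~~~
stable-diffusion-1.5:
  weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  description: Stable Diffusion v1.5
  config: configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  default: true
~~~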
Lincoln Stein
a705a5a0aa enhance support for model switching and editing
- Error checks for invalid model
- Add !del_model command to invoke.py
- Add del_model() method to model_cache
- Autocompleter kept in sync with model addition/removal.
2022-10-15 15:46:29 -04:00
Lincoln Stein
6afc0f9b38 add ability to import and edit alternative models online
- !import_model <path/to/model/weights> will import a new model,
  prompt the user for its name and description, write it to the
  models.yaml file, and load it.

- !edit_model <model_name> will bring up a previously-defined model
  and prompt the user to edit its descriptive fields.

Example of !import_model

<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:

Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu Diffusion v1.3
  height: 512
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
</pre>

Example of !edit_model

<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512

>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu diffusion v1.4beta
  weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
  height: 512
  width: 512

OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
2022-10-13 23:48:07 -04:00
Lincoln Stein
916f5bfbb2 gracefully recover from failed model load 2022-10-13 12:27:04 -04:00
Lincoln Stein
1c102c71fc final fixups to memory_cache
- fixed backwards calculation of minimum available memory
- only execute m.padding adjustment code once upon load
2022-10-12 15:56:06 -04:00
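To illustrate the corrected direction of that check, a sketch assuming `psutil` is used to read free RAM (the actual mechanism in the codebase may differ):

~~~
import psutil

MIN_FREE_RAM = 2 * 1024 ** 3   # 2 GB floor, matching the default mentioned below

def room_for_another_model() -> bool:
    # Correct direction: keep caching only while *available* RAM stays above the floor.
    return psutil.virtual_memory().available > MIN_FREE_RAM
~~~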
Lincoln Stein
aa6aa68753 proposed fix to work on mps systems 2022-10-12 11:08:27 -04:00
Lincoln Stein
b537e92789 move tokenizer into cpu cache as well 2022-10-12 03:03:29 -04:00
Lincoln Stein
488334710b enable fast switching between models in invoke.py
- This PR enables two new commands in the invoke.py script

 !models         -- list the available models and their cache status
 !switch <model> -- switch to the indicated model

Example:

 invoke> !models
   laion400m            not loaded  Latent Diffusion LAION400M model
   stable-diffusion-1.4     active  Stable Diffusion inference model version 1.4
   waifu-1.3                cached  Waifu anime model version 1.3
 invoke> !switch waifu-1.3
   >> Caching model stable-diffusion-1.4 in system RAM
   >> Retrieving model waifu-1.3 from system RAM cache

The names and descriptions of the models are taken from
`configs/models.yaml`. A future enhancement to `model_cache.py` will be
to enable new model stanzas to be added to the file
programmatically. This will be useful for the WebGUI.

More details:

- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
  given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
  implements an LRU cache that monitors available CPU RAM.
- The caching code allows the minimum value of available RAM
  to be adjusted, but invoke.py does not currently have a
  command-line argument that allows you to set it. The
  minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml

Unrelated fixes:
- Added ">>" to CompViz model loading messages in order to make user experience
  more consistent.
- When generating an image greater than defaults, will only warn about possible
  VRAM filling the first time.
- Fixed bug that was causing help message to be printed twice. This involved
  moving the import line for the web backend into the section where it is
  called.

Coauthored by: @ArDiouscuros
2022-10-12 02:37:42 -04:00
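A minimal sketch of the GPU/CPU shuttling described above, assuming PyTorch models that can be moved between devices; the class and method names are illustrative, not the project's actual API:

~~~
import torch

class FastSwitchCache:
    """Keep the active model on the GPU and park inactive ones in CPU RAM."""

    def __init__(self):
        self.cpu_cache = {}        # name -> model parked in system RAM
        self.active_name = None
        self.active_model = None

    def switch(self, name, loader):
        if self.active_name == name:
            return self.active_model
        if self.active_model is not None:
            print(f">> Caching model {self.active_name} in system RAM")
            self.cpu_cache[self.active_name] = self.active_model.to("cpu")
        if name in self.cpu_cache:
            print(f">> Retrieving model {name} from system RAM cache")
            model = self.cpu_cache.pop(name)
        else:
            model = loader(name)   # load from disk on a cache miss
        self.active_name = name
        self.active_model = model.to("cuda") if torch.cuda.is_available() else model
        return self.active_model
~~~

An LRU policy and the free-RAM floor sketched earlier would decide when parked models are dropped from `cpu_cache` entirely.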
Lincoln Stein
b9e910b5f4 add mostly functional model caching module 2022-10-11 17:24:10 -04:00