Commit Graph

1229 Commits

SHA1 Message Date
a705a5a0aa enhance support for model switching and editing
- Add error checks for invalid model names
- Add !del_model command to invoke.py
- Add del_model() method to model_cache (sketched below)
- Keep the autocompleter in sync with model addition/removal
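
A minimal sketch of what a del_model() method could look like; the class and attribute names here are hypothetical and the real model_cache module may differ.

<pre>
# Hypothetical sketch; the real model_cache attribute names may differ.
class ModelCache:
    def __init__(self, config: dict):
        self.config = config   # parsed models.yaml stanzas, keyed by model name
        self.models = {}       # models currently held in RAM, keyed by model name

    def del_model(self, model_name: str) -> None:
        """Remove a model stanza and evict any cached copy of its weights."""
        if model_name not in self.config:
            raise KeyError(f'"{model_name}" is not a known model')
        self.config.pop(model_name)
        self.models.pop(model_name, None)   # drop cached weights if present
</pre>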
2022-10-15 15:46:29 -04:00
fe2a2cfc8b Merge branch 'development' into model-switching 2022-10-14 13:18:59 -04:00
32dab7d4bf close #1094, dangling gfpgan_strength reference 2022-10-14 07:45:10 -04:00
1c501333e8 minor doc fixes 2022-10-14 07:30:26 -04:00
ce5e57d828 Generalize facetool strength argument 2022-10-14 00:03:06 -04:00
e98fe9c22d fix noisy images at high step counts
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.

This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.

Addresses issue #1083.
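
A rough sketch of how a `--save_intermediates <n>` option could be wired into a per-step callback. This is illustrative only; everything other than the flag name is hypothetical, including `decode_fn`, a caller-supplied latents-to-image decoder.

<pre>
# Illustrative sketch only; names other than --save_intermediates are hypothetical.
import os
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument('--save_intermediates', type=int, default=0, metavar='n',
                    help='save every nth intermediate image during sampling')

def make_step_callback(opt, image_prefix, decode_fn):
    """Return a per-step callback that writes every nth intermediate image."""
    outdir = os.path.join('intermediates', image_prefix)

    def callback(latents, step):
        if opt.save_intermediates and step % opt.save_intermediates == 0:
            os.makedirs(outdir, exist_ok=True)
            image = decode_fn(latents)   # hypothetical latents -> PIL.Image decoder
            image.save(os.path.join(outdir, f'step_{step:04d}.png'))

    return callback
</pre>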
2022-10-14 00:01:59 -04:00
6afc0f9b38 add ability to import and edit alternative models online
- !import_model <path/to/model/weights> will import a new model,
  prompt the user for its name and description, write it to the
  models.yaml file, and load it.

- !edit_model <model_name> will bring up a previously-defined model
  and prompt the user to edit its descriptive fields.

Example of !import_model

<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:

Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu Diffusion v1.3
  height: 512
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
</pre>

Example of !edit_model

<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512

>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu diffusion v1.4beta
  weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
  height: 512
  width: 512

OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
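
The `>> New configuration:` blocks above are plain YAML stanzas. A minimal sketch of how !import_model could append such a stanza to models.yaml, assuming PyYAML; the actual code path may use a different YAML layer and field handling.

<pre>
# Sketch only, assuming PyYAML; the real !import_model implementation may differ.
import yaml

def add_model_stanza(path, name, weights, config, description,
                     width=512, height=512):
    """Append (or overwrite) a model stanza in models.yaml."""
    with open(path) as f:
        models = yaml.safe_load(f) or {}
    models[name] = {
        'config': config,
        'weights': weights,
        'description': description,
        'width': width,
        'height': height,
    }
    with open(path, 'w') as f:
        yaml.safe_dump(models, f, default_flow_style=False)
</pre>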
2022-10-13 23:48:07 -04:00
916f5bfbb2 gracefully recover from failed model load 2022-10-13 12:27:04 -04:00
7f491fd2d2 Reword deprecation warning for dream.py 2022-10-13 12:12:05 -04:00
203a6d8a00 Forward dream.py to invoke.py using the same interpreter, add deprecation warning 2022-10-13 12:12:05 -04:00
cac3f5fc61 fix for "1 leaked semaphore objects to clean up at shutdown" on M1
Implements fix by @Any-Winter-4079 referenced in https://github.com/invoke-ai/InvokeAI/issues/1016#issuecomment-1276825640
2022-10-13 13:33:59 +02:00
7e33560010 Hires Addition
- Updated ImageMetaDataViewer with correct values
- Updated tooltip text
- Added arguments for Hires & Seamless Metadata
2022-10-13 23:57:24 +13:00
057fc95aa3 Print out the device type which is used
Print out the device type that is used for generating images.
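
A minimal sketch of device selection and reporting with PyTorch; the function name is hypothetical and the exact message printed by the commit may differ.

<pre>
# Sketch only; the printed message in the actual code may differ.
import torch

def choose_device() -> torch.device:
    if torch.cuda.is_available():
        device = torch.device('cuda')
    elif getattr(torch.backends, 'mps', None) and torch.backends.mps.is_available():
        device = torch.device('mps')
    else:
        device = torch.device('cpu')
    print(f'>> Using device_type {device.type}')
    return device
</pre>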
2022-10-12 20:36:43 -04:00
1c102c71fc final fixups to memory_cache
- fixed backwards calculation of minimum available memory (see the sketch below)
- only execute m.padding adjustment code once upon load
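
A hypothetical sketch of a correctly oriented minimum-free-RAM check, assuming psutil; the function and field names in the real memory_cache code may differ.

<pre>
# Hypothetical sketch; assumes psutil is available.
import psutil

GIG = 2 ** 30

def enough_free_ram(min_avail_gb: float = 2.0) -> bool:
    """True if available system RAM exceeds the configured minimum."""
    available = psutil.virtual_memory().available
    return available > min_avail_gb * GIG
</pre>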
2022-10-12 15:56:06 -04:00
aa6aa68753 proposed fix to work on mps systems 2022-10-12 11:08:27 -04:00
b537e92789 move tokenizer into cpu cache as well 2022-10-12 03:03:29 -04:00
7c06849c4d Merge branch 'model-switching' of github.com:invoke-ai/InvokeAI into model-switching 2022-10-12 02:39:57 -04:00
488334710b enable fast switching between models in invoke.py
- This PR enables two new commands in the invoke.py script

 !models         -- list the available models and their cache status
 !switch <model> -- switch to the indicated model

Example:

 invoke> !models
   laion400m            not loaded  Latent Diffusion LAION400M model
   stable-diffusion-1.4     active  Stable Diffusion inference model version 1.4
   waifu-1.3                cached  Waifu anime model version 1.3
 invoke> !switch waifu-1.3
   >> Caching model stable-diffusion-1.4 in system RAM
   >> Retrieving model waifu-1.3 from system RAM cache

The names and descriptions of the models are taken from
`configs/models.yaml`. A future enhancement to `model_cache.py` will be
to enable new model stanzas to be added to the file
programmatically. This will be useful for the WebGUI.

More details:

- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
  given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
  implements an LRU cache that monitors available CPU RAM (sketched
  after this list).
- The caching code allows the minimum value of available RAM
  to be adjusted, but invoke.py does not currently have a
  command-line argument that allows you to set it. The
  minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml
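
A hypothetical sketch of the RAM-aware LRU eviction described in the list above; the real model_cache structure, method names, and where the 2 GB floor lives may all differ.

<pre>
# Hypothetical sketch; assumes psutil and torch-style models with .to('cpu').
from collections import OrderedDict
import psutil

MIN_AVAIL_RAM = 2 * 2 ** 30   # the arbitrary 2 GB floor mentioned above

class ModelCache:
    def __init__(self):
        self.cache = OrderedDict()   # model_name -> model kept in CPU RAM

    def cache_model(self, name, model):
        """Move a model to CPU RAM, evicting least-recently-used entries
        until enough RAM is free (or the cache is empty)."""
        while (psutil.virtual_memory().available < MIN_AVAIL_RAM
               and len(self.cache) > 0):
            evicted, _ = self.cache.popitem(last=False)   # oldest entry first
            print(f'>> Evicting {evicted} from RAM cache')
            # (a real implementation would also gc.collect() to release the memory)
        self.cache[name] = model.to('cpu')
        self.cache.move_to_end(name)

    def get_model(self, name):
        """Return a cached model and mark it most recently used, or None."""
        if name in self.cache:
            self.cache.move_to_end(name)
            return self.cache[name]
        return None
</pre>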

Unrelated fixes:
- Added ">>" to CompViz model loading messages in order to make user experience
  more consistent.
- When generating an image greater than defaults, will only warn about possible
  VRAM filling the first time.
- Fixed bug that was causing help message to be printed twice. This involved
  moving the import line for the web backend into the section where it is
  called.

Co-authored-by: @ArDiouscuros
2022-10-12 02:37:42 -04:00
19341e95a6 enable fast switching between models in invoke.py
- This PR enables two new commands in the invoke.py script

 !models         -- list the available models and their cache status
 !switch <model> -- switch to the indicated model

Example:

 invoke> !models
   laion400m            not loaded  Latent Diffusion LAION400M model
   stable-diffusion-1.4     active  Stable Diffusion inference model version 1.4
   waifu-1.3                cached  Waifu anime model version 1.3
 invoke> !switch waifu-1.3
   >> Caching model stable-diffusion-1.4 in system RAM
   >> Retrieving model waifu-1.3 from system RAM cache

More details:

- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
  given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
  implements an LRU cache that monitors available CPU RAM.
- The caching code allows the minimum value of available RAM
  to be adjusted, but invoke.py does not currently have a
  command-line argument that allows you to set it. The
  minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml

Unrelated fixes:
- Added ">>" to CompViz model loading messages in order to make user experience
  more consistent.
- When generating an image greater than defaults, will only warn about possible
  VRAM filling the first time.
- Fixed bug that was causing help message to be printed twice. This involved
  moving the import line for the web backend into the section where it is
  called.
2022-10-12 02:19:12 -04:00
c82e94811b Update Stable_Diffusion_AI_Notebook.ipynb 2022-10-11 21:42:31 -04:00
c15a902e8d Update Stable_Diffusion_AI_Notebook.ipynb
Making Stable_Diffusion_AI_Notebook.ipynb work smoothly on Google Colab
2022-10-11 21:42:31 -04:00
b9e910b5f4 add mostly functional model caching module 2022-10-11 17:24:10 -04:00
101cac6a21 reintroduce fix for M1 from PR #579 missing after merge
Make results reproducible (so runs with the same seed produce the same result).
Implements fix by @wbowling referenced in https://github.com/invoke-ai/InvokeAI/issues/397#issuecomment-1240679294
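
The linked comment is not reproduced here; the following is a sketch under the assumption that the fix seeds the RNG and draws the initial noise on the CPU before moving it to the MPS device, a common pattern for reproducible results on M1.

<pre>
# Sketch only; assumes the reproducibility fix samples seeded noise on the CPU.
import torch

def seeded_noise(shape, seed, device):
    torch.manual_seed(seed)
    if device.type == 'mps':
        # CPU randn is deterministic for a given seed; sample there and transfer.
        return torch.randn(shape, device='cpu').to(device)
    return torch.randn(shape, device=device)
</pre>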
2022-10-11 23:00:20 +02:00
06f542ed7a Update .gitignore 2022-10-11 16:28:48 +13:00
9eff9e5752 update mac instructions to use invokeai for env name 2022-10-10 17:45:18 -04:00
12413b0be6 Fix safari display:grid lag 2022-10-10 03:13:56 +02:00
789714b0b1 fix references from lstein/stable-diffusion to invoke-ai/InvokeAI
- as per #989
2022-10-09 15:38:22 -04:00
bb7629d2b8 fix environment-mac.yml as per #964 2022-10-09 15:34:19 -04:00
c5344acb25 run make_schedule() if it hasn't already been called
- fixes #984
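
A hypothetical sketch of the lazy-initialization guard; the attribute checked and the make_schedule() keyword arguments follow the CompVis DDIM sampler but may differ from the actual fix.

<pre>
# Hypothetical sketch; the real guard may check a different attribute.
def ensure_schedule(sampler, steps, eta=0.0):
    """Call make_schedule() lazily if the sampler has not built one yet."""
    if getattr(sampler, 'ddim_timesteps', None) is None:
        sampler.make_schedule(ddim_num_steps=steps, ddim_eta=eta, verbose=False)
</pre>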
2022-10-09 15:30:23 -04:00
6e2fd2affe rebuild frontend 2022-10-09 14:52:00 -04:00
8faa06fb15 Merge branch 'main' into development
- this syncs documentation and code
2022-10-09 14:47:27 -04:00
ce8c238ac4 final tweak to embedded screenshots in WEB.md 2022-10-09 11:45:16 -04:00
f6c37e46e1 make initial screenshot even larger 2022-10-09 11:43:42 -04:00
2d69efccef fixup image sizes in WEB.md 2022-10-09 11:42:59 -04:00
f9d2aafaeb fix image links in documentation 2022-10-09 11:42:03 -04:00
22514aec2e Minor CSS & Link Updates
Also updated dist files with new CSS
2022-10-09 11:40:16 -04:00
5a22a83f4c add missing doc files 2022-10-09 11:38:39 -04:00
b1d43eae46 almost ready for public release
- merged release-candidate-2
- fix up documentation
- add web tutorial
2022-10-09 11:37:00 -04:00
0b8cdb6964 Fix markdown typo in WEB.md 2022-10-09 08:57:14 -04:00
aed5ad22fb Updates INSTALL_MAC.md 2022-10-09 08:49:29 -04:00
dc9c16b93d Update references to lstein/stable-diffusion to invoke-ai/InvokeAI 2022-10-09 08:48:23 -04:00
f6e858a548 Update README.md
Small writing error
2022-10-09 08:45:55 -04:00
4c2db171ca Update INPAINTING.md
Changed Gimp instructions to indicate that partial transparency is better than full transparency.
2022-10-09 08:45:33 -04:00
1255127e49 rebuild frontend 2022-10-09 08:37:51 -04:00
1cb74a6357 [WebUI] Masonry Layout for Gallery 2022-10-09 08:36:28 -04:00
5e2b250426 Images grow to fit space in gallery 2022-10-09 08:36:17 -04:00
ad190cfbb2 Smaller Gallery Images 2022-10-09 08:36:03 -04:00
542ceb051b Rework Gallery Display 2022-10-09 08:34:57 -04:00
3473669458 WebUI Bug Fixes & Tweaks 2022-10-09 08:33:18 -04:00
3170c83d8d [WebUI] Masonry Layout for Gallery 2022-10-09 08:32:06 -04:00