Commit Graph

20 Commits

Author SHA1 Message Date
Lincoln Stein
21961f0c32 Revert "Use array slicing to calc ddim timesteps"
This reverts commit 1f0c5b4cf1.
2022-11-07 15:37:53 -05:00
wfng92
1f0c5b4cf1 Use array slicing to calc ddim timesteps 2022-11-03 20:11:04 -04:00
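The slicing idea is straightforward: instead of materializing a `range()` as a list, take every c-th element of an `arange`. A minimal sketch, following the naming of the CompVis `make_ddim_timesteps` utility (the exact surrounding code is an assumption, and note this change was later reverted above):

```python
import numpy as np

def make_ddim_timesteps(num_ddim_timesteps: int, num_ddpm_timesteps: int):
    c = num_ddpm_timesteps // num_ddim_timesteps

    # Original construction: materialize a range() as a list first.
    from_range = np.asarray(list(range(0, num_ddpm_timesteps, c)))

    # The commit's array-slicing equivalent: every c-th timestep of an arange.
    from_slice = np.arange(0, num_ddpm_timesteps)[::c]

    assert (from_range == from_slice).all()   # the two constructions agree
    return from_slice

print(make_ddim_timesteps(50, 1000)[:5])      # [ 0 20 40 60 80]
```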
Lincoln Stein
ddb007af65
Merge branch 'development' into fix-high-step-count 2022-10-21 06:55:17 -04:00
wfng92
1c2bd275fe Fix img2img DDIM index out of bound
Added a [community solution](https://github.com/CompVis/stable-diffusion/issues/111#issuecomment-1229483511) to fix an index-out-of-bounds error when doing img2img generation with the `ddim` sampler (sketched below). Also restored `steps_out` to `ddim_timesteps + 1`, since its removal was only meant to fix the [1000 steps issue](https://github.com/CompVis/stable-diffusion/issues/111).
2022-10-17 22:32:15 -04:00
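A minimal sketch of the clamping pattern such fixes typically use (the exact patch lives at the link above; the names here are illustrative): with `steps_out = ddim_timesteps + 1`, the final img2img step can compute an index one past the end of the precomputed alpha tables, so the lookup index is clamped first.

```python
def safe_alpha_lookup(alphas, index):
    # Clamp before indexing: the last img2img step can otherwise
    # produce an index one past the end of the table.
    index = min(index, len(alphas) - 1)
    return alphas[index]

alphas = [0.99, 0.97, 0.90, 0.75]
assert safe_alpha_lookup(alphas, 7) == alphas[-1]   # out-of-range index is clamped
```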
Lincoln Stein
fe2a2cfc8b
Merge branch 'development' into model-switching 2022-10-14 13:18:59 -04:00
Lincoln Stein
6afc0f9b38 add ability to import and edit alternative models online
- !import_model <path/to/model/weights> will import a new model,
  prompt the user for its name and description, write it to the
  models.yaml file, and load it.

- !edit_model <model_name> will bring up a previously-defined model
  and prompt the user to edit its descriptive fields.

Example of !import_model

<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:

Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu Diffusion v1.3
  height: 512
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
</pre>

Example of !edit_model

<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512

>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu diffusion v1.4beta
  weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
  height: 512
  width: 512

OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
2022-10-13 23:48:07 -04:00
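The `New configuration:` block in the transcript above is plain YAML. A hedged sketch of how such a stanza could be appended to `configs/models.yaml` with PyYAML; the helper and its file handling are illustrative assumptions, not the commit's actual code:

```python
import yaml  # PyYAML

def append_model_stanza(config_path, name, weights, config,
                        description, width=512, height=512):
    """Append a model stanza like the one shown in the transcript above."""
    stanza = {
        name: {
            'config': config,
            'weights': weights,
            'description': description,
            'width': width,
            'height': height,
        }
    }
    with open(config_path, 'a') as f:
        # safe_dump sorts keys alphabetically, matching the transcript's order
        yaml.safe_dump(stanza, f, default_flow_style=False)

append_model_stanza(
    'configs/models.yaml',
    name='waifu-diffusion',
    weights='models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt',
    config='configs/stable-diffusion/v1-inference.yaml',
    description='Waifu Diffusion v1.3',
)
```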
Jan Skurovec
cac3f5fc61 fix for "1 leaked semaphore objects to clean up at shutdown" on M1
Implements fix by @Any-Winter-4079 referenced in https://github.com/invoke-ai/InvokeAI/issues/1016#issuecomment-1276825640
2022-10-13 13:33:59 +02:00
Lincoln Stein
488334710b enable fast switching between models in invoke.py
This PR enables two new commands in the invoke.py script:

 !models         -- list the available models and their cache status
 !switch <model> -- switch to the indicated model

Example:

 invoke> !models
   laion400m            not loaded  Latent Diffusion LAION400M model
   stable-diffusion-1.4     active  Stable Diffusion inference model version 1.4
   waifu-1.3                cached  Waifu anime model version 1.3
 invoke> !switch waifu-1.3
   >> Caching model stable-diffusion-1.4 in system RAM
   >> Retrieving model waifu-1.3 from system RAM cache

The names and descriptions of the models are taken from
`configs/models.yaml`. A future enhancement to `model_cache.py` will be
to enable new model stanzas to be added to the file
programmatically. This will be useful for the WebGUI.

More details:

- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
  given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
  implements an LRU cache that monitors available CPU RAM
  (sketched below).
- The caching code allows the minimum value of available RAM
  to be adjusted, but invoke.py does not currently have a
  command-line argument that allows you to set it. The
  minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml

Unrelated fixes:
- Added ">>" to CompViz model loading messages in order to make user experience
  more consistent.
- When generating an image greater than defaults, will only warn about possible
  VRAM filling the first time.
- Fixed bug that was causing help message to be printed twice. This involved
  moving the import line for the web backend into the section where it is
  called.

Co-authored-by: @ArDiouscuros
2022-10-12 02:37:42 -04:00
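A minimal sketch of the RAM-aware LRU pattern the message describes, assuming `psutil` for the free-RAM check; the class and method names are illustrative, not the actual `model_cache.py` API:

```python
from collections import OrderedDict

import psutil

MIN_FREE_RAM_GB = 2  # arbitrary floor, as noted in the message above

class ModelCache:
    """Keep recently used models in CPU RAM; evict least-recently-used
    entries whenever available system RAM falls below the floor."""

    def __init__(self, min_free_gb: float = MIN_FREE_RAM_GB):
        self.min_free_gb = min_free_gb
        self._cache = OrderedDict()          # model name -> model in CPU RAM

    def get(self, name, loader):
        if name in self._cache:
            print(f'>> Retrieving model {name} from system RAM cache')
            self._cache.move_to_end(name)    # mark as most recently used
            return self._cache[name]
        model = loader(name)                 # cache miss: load from disk
        self._cache[name] = model
        self._evict_if_low_ram()
        return model

    def _evict_if_low_ram(self):
        # Drop LRU entries until free RAM is back above the floor,
        # always keeping the model that was just requested.
        while len(self._cache) > 1 and self._free_gb() < self.min_free_gb:
            evicted, _ = self._cache.popitem(last=False)
            print(f'>> Evicting model {evicted} from RAM cache')

    @staticmethod
    def _free_gb() -> float:
        return psutil.virtual_memory().available / 2**30
```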
Arthur Holstvoogd
595d15455a Fix generation of image with s>1000 2022-10-06 15:49:35 +02:00
Mihai
071f65a892
Enable even larger images with one simple torch.nn.functional.silu import (#653)
Fixes:
<pre>
File "stable-diffusion/ldm/modules/diffusionmodules/model.py", line 37, in nonlinearity
    return x*torch.sigmoid(x)
RuntimeError: CUDA out of memory. Tried to allocate 1.56 GiB [..]
</pre>

Now up to 1536x1280 is possible on 8GB VRAM.
Also remove unused SiLU class.
2022-09-17 18:03:52 -04:00
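The change itself is tiny; a sketch of the swap in `ldm/modules/diffusionmodules/model.py` (the original function is the one quoted in the traceback above; exactly how the import is wired in is an assumption):

```python
import torch
from torch.nn.functional import silu

def nonlinearity_old(x):
    # The original swish: sigmoid(x) is materialized as a full extra
    # tensor before the multiply, which is what ran out of VRAM at
    # large resolutions.
    return x * torch.sigmoid(x)

# After the fix: PyTorch's built-in computes the same x * sigmoid(x)
# without the extra intermediate allocation.
nonlinearity = silu

x = torch.randn(2, 512, 64, 64)
assert torch.allclose(nonlinearity_old(x), nonlinearity(x))
```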
Mihai
dedf8a3692
Remove pointless del statements in diffusionmodules.model. (#520) 2022-09-12 17:39:06 -04:00
Mihai
0bc6779361
Disable autocast for cpu to fix error. Remove unused precision arg. (#518)
When running on CPU only (Intel), a call to torch.layer_norm would fail with "RuntimeError: expected scalar type BFloat16 but found Float".
Also fixes buggy device handling in model.py.
Tested with scripts/dream.py --full_precision on a CPU-only Intel laptop. Works, but slow at ~10 s/it.
2022-09-12 16:55:21 -04:00
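A minimal sketch of the resulting pattern, enabling autocast only for CUDA devices; `autocast_context` is an illustrative helper, not the commit's actual code:

```python
import torch
import torch.nn.functional as F
from contextlib import nullcontext

def autocast_context(device_type: str):
    # Autocast on CPU selects bfloat16 kernels, which is what produced
    # "expected scalar type BFloat16 but found Float" in layer_norm,
    # so autocast is only enabled for CUDA devices here.
    return torch.autocast('cuda') if device_type == 'cuda' else nullcontext()

with autocast_context('cpu'):
    x = torch.randn(1, 8)
    y = F.layer_norm(x, (8,))   # runs in float32 on CPU, no dtype mismatch
```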
Any-Winter-4079
25d9ccc509 Update model.py 2022-09-11 22:37:45 -04:00
Lincoln Stein
10db192cc4 changes to dogettx optimizations to run on m1
* Author @any-winter-4079
* Author @dogettx
Thanks to many individuals who contributed time and hardware to
benchmarking and debugging these changes.
2022-09-09 09:51:41 -04:00
Lincoln Stein
653144694f
work around unexplained crash when timesteps=1000 (#440)
* work around unexplained crash when timesteps=1000

* this fix seems to work
2022-09-08 20:41:37 -04:00
Lincoln Stein
4f02b72c9c prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00
Sean McLellan
8952196bbf Add personalization 2022-08-23 18:26:28 -04:00
Robin Rombach
2ff270f4e0 stable diffusion 2022-08-10 16:30:49 +02:00
ablattmann
171cf29fb5 add configs for training unconditional/class-conditional ldms 2021-12-22 15:57:23 +01:00
ablattmann
e66308c7f2 add code 2021-12-21 03:23:41 +01:00