At step counts greater than ~75, the k-samplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.
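For context, the two schedules look roughly like this in terms of the k-diffusion library's public API; a minimal sketch, assuming `model` is an already-loaded LatentDiffusion checkpoint and `steps` is the requested step count (the surrounding sampler plumbing is omitted):
<pre>
# Minimal sketch of the two schedule choices, using the k-diffusion API.
# `model` (a loaded LatentDiffusion instance) and `steps` are assumptions.
import k_diffusion as K

model_wrap = K.external.CompVisDenoiser(model)

# The Karras schedule, which this PR moves away from:
sigmas_karras = K.sampling.get_sigmas_karras(
    n=steps,
    sigma_min=model_wrap.sigma_min,
    sigma_max=model_wrap.sigma_max,
)

# The model's own discrete noise schedule, which this PR restores:
sigmas_model = model_wrap.get_sigmas(steps)
</pre>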
This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.
Addresses issue #1083.
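As a rough illustration of the mechanism (not the PR's actual code), an every-nth-step saver can be wired up as a sampler step callback; the names `sample_to_image`, `outdir`, and `prefix` below are assumptions:
<pre>
import os

def make_intermediate_writer(n, outdir, prefix, sample_to_image):
    """Return a per-step callback that saves every nth intermediate image.

    `sample_to_image` is assumed to decode a latent sample to a PIL image.
    """
    subdir = os.path.join(outdir, 'intermediates', prefix)
    os.makedirs(subdir, exist_ok=True)

    def callback(sample, step):
        if step % n == 0:
            image = sample_to_image(sample)
            image.save(os.path.join(subdir, f'{prefix}.{step:04d}.png'))

    return callback
</pre>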
- !import_model <path/to/model/weights> will import a new model,
prompt the user for its name and description, write it to the
models.yaml file, and load it.
- !edit_model <model_name> will bring up a previously-defined model
and prompt the user to edit its descriptive fields.
Example of !import_model
<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:
Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
config: configs/stable-diffusion/v1-inference.yaml
description: Waifu Diffusion v1.3
height: 512
weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
</pre>
Example of !edit_model
<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
>> New configuration:
waifu-diffusion:
config: configs/stable-diffusion/v1-inference.yaml
description: Waifu diffusion v1.4beta
weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
height: 512
width: 512
OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
- This PR enables two new commands in the invoke.py script:
!models -- list the available models and their cache status
!switch <model> -- switch to the indicated model
Example:
invoke> !models
laion400m not loaded Latent Diffusion LAION400M model
stable-diffusion-1.4 active Stable Diffusion inference model version 1.4
waifu-1.3 cached Waifu anime model version 1.3
invoke> !switch waifu-1.3
>> Caching model stable-diffusion-1.4 in system RAM
>> Retrieving model waifu-1.3 from system RAM cache
The names and descriptions of the models are taken from
`configs/models.yaml`. A future enhancement to `model_cache.py` will be
to enable new model stanzas to be added to the file programmatically,
which will be useful for the WebGUI; a sketch of that idea follows.
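A minimal sketch of that future programmatic stanza writing, using PyYAML (illustrative only; `model_cache.py` may end up using a different mechanism, and `add_model_stanza` is a hypothetical name):
<pre>
import yaml

def add_model_stanza(config_path, name, stanza):
    """Append or replace a model stanza in models.yaml (illustrative)."""
    with open(config_path) as f:
        config = yaml.safe_load(f) or {}
    config[name] = stanza  # e.g. {'description': ..., 'weights': ..., ...}
    with open(config_path, 'w') as f:
        yaml.safe_dump(config, f)
</pre>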
More details:
- Uses the fast-switching algorithm described in PR #948
- Models are selected using their configuration stanza name
given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
implements an LRU cache that monitors available CPU RAM
(see the sketch after this list).
- The caching code allows the minimum amount of available RAM
to be adjusted, but invoke.py does not yet provide a
command-line argument to set it; the minimum free RAM is
arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml
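A minimal sketch of the RAM-aware LRU idea described above; the class and method names here are illustrative, not the actual `model_cache.py` API:
<pre>
from collections import OrderedDict
import psutil

GIG = 2 ** 30

class ModelCacheSketch:
    def __init__(self, min_free_ram=2 * GIG):  # 2 GB floor, per this PR
        self.min_free_ram = min_free_ram
        self.models = OrderedDict()  # name -> model, in LRU order

    def get(self, name, loader):
        if name in self.models:
            self.models.move_to_end(name)  # mark as most recently used
            return self.models[name]
        self._make_room()
        self.models[name] = loader(name)
        return self.models[name]

    def _make_room(self):
        # Evict least-recently-used models while free CPU RAM is below
        # the configured floor.
        while self.models and psutil.virtual_memory().available < self.min_free_ram:
            self.models.popitem(last=False)
</pre>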
Unrelated fixes:
- Added ">>" to CompVis model loading messages to make the user
experience more consistent.
- When generating an image larger than the default size, the warning
about possibly filling VRAM is now issued only the first time.
- Fixed a bug that caused the help message to be printed twice. This
involved moving the import of the web backend into the section where it
is called (sketched after this list).
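The shape of that fix, as a hedged sketch (the module path and function name below are hypothetical, not the actual ones):
<pre>
def main(opt):
    ...
    if opt.web:
        # Deferred import: importing the web backend at module scope was
        # triggering a second help printout, so it is imported only when
        # the web server is actually requested. (Hypothetical names.)
        from backend.server import run_web_server
        run_web_server(opt)
</pre>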
Co-authored-by: @ArDiouscuros
- txt2img2img is back to using DDIM as the img2img sampler; results
produced by some k* samplers are just not reliable enough for a good
user experience
- img2img progress message clarifies why img2img steps taken != steps
requested (see the arithmetic sketch after this list)
- warn of potential problems when the user tries to run img2img on a
small init image
- img2img confirmed working with all samplers
- inpainting confirmed working with DDIM & PLMS; changes to the
k-diffusion module seem to be needed before the k* samplers can support
inpainting
- switched the k-diffusion noise schedule to the original Karras
schedule, which reduces the number of steps needed for good results
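The arithmetic behind the steps-taken message, as a short illustration of standard img2img behavior (the variable names are mine):
<pre>
# With denoising strength f in [0, 1], img2img noises the init image up
# to step int(f * steps) and then denoises only from there, so fewer
# steps run than were requested.
steps, strength = 50, 0.75
t_enc = int(strength * steps)
print(f'img2img will take {t_enc} of the {steps} requested steps')  # 37 of 50
</pre>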
* revert inadvertent change of conda env name (#528)
* Refactor generate.py and dream.py
* config file path (models.yaml) is now parsed inside Generate() to
simplify the API (see the sketch at the end of this list)
* Better handling of keyboard interrupts in file loading mode vs
interactive
* Removed oodles of unused variables.
* move nonfunctional inpainting out of the scripts directory
* fix ugly ddim tqdm formatting
* start refactoring - not yet functional
* first phase of refactor done - not sure weighted prompts are working
* Second phase of refactoring. Everything mostly working.
* The refactoring has moved all the hard-core inference work into
ldm.dream.generator.*, where there are submodules for txt2img and
img2img. inpaint will go in there as well.
* Some additional refactoring will be done soon, but relatively
minor work.
* fix -save_orig flag to actually work
* add @neonsecret attention.py memory optimization
* remove unneeded imports
* move token logging into conditioning.py
* add placeholder version of inpaint; porting in progress
* fix crash in img2img
* inpainting working; not tested on variations
* fix crashes in img2img
* ported attention.py memory optimization #117 from basujindal branch
* added @torch.no_grad() decorators to the img2img, txt2img, and inpaint closures
* Final commit prior to PR against development
* fixup crash when generating intermediate images in web UI
* rename ldm.simplet2i to ldm.generate
* add backward-compatibility simplet2i shell with deprecation warning
* add back in mps exception, addresses @vargol comment in #354
* replaced Conditioning class with exported functions
* fix wrong type of with_variations attribute during initialization
* changed "image_iterator()" to "get_make_image()"
* raise NotImplementedError for calling get_make_image() in parent class
* Update ldm/generate.py
better error message
Co-authored-by: Kevin Gibbons <bakkot@gmail.com>
* minor stylistic fixes and assertion checks from code review
* moved get_noise() method into img2img class
* break get_noise() into two methods, one for txt2img and the other for img2img
* inpainting works on non-square images now
* make get_noise() an abstract method in base class
* much improved inpainting
Co-authored-by: Kevin Gibbons <bakkot@gmail.com>
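Taken together, these commits give the code roughly the following shape; a minimal sketch based on the descriptions above, with illustrative signatures rather than the exact ones:
<pre>
from ldm.generate import Generate

# Generate() now parses the models.yaml path itself, simplifying the API.
gr = Generate(model='stable-diffusion-1.4', conf='configs/models.yaml')
results = gr.prompt2image(prompt='a sunset over the ocean', steps=50)

# The per-mode generators in ldm.dream.generator.* override two hooks on
# a shared base class; the parent raises NotImplementedError, as one of
# the commits above notes. Signatures here are assumptions.
class Generator:
    def get_make_image(self, prompt, **kwargs):
        raise NotImplementedError('get_make_image() must be implemented in a subclass')

    def get_noise(self, width, height):
        raise NotImplementedError('get_noise() must be implemented in a subclass')
</pre>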