InvokeAI/backend/invoke_ai_web_server.py
Kevin Turner 6fdbc1978d
use 🧨diffusers model (#1583)
* initial commit of DiffusionPipeline class

* spike: proof of concept using diffusers for txt2img

* doc: type hints for Generator

* refactor(model_cache): factor out load_ckpt

* model_cache: add ability to load a diffusers model pipeline

and update associated things in Generate & Generator to not instantly fail when that happens

* model_cache: fix model default image dimensions

* txt2img: support switching diffusers schedulers

* diffusers: let the scheduler do its scaling of the initial latents

Remove IPNDM scheduler; it is not behaving.
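For reference, the delegation is a one-liner with the diffusers API; a hedged sketch, assuming `scheduler` and `latents` are the pipeline's scheduler and initial noise tensor:

```python
# Let the scheduler apply its own initial noise scaling rather than
# hard-coding a sigma in our pipeline code.
latents = latents * scheduler.init_noise_sigma
```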

* web server: update image_progress callback for diffusers data

* diffusers: restore prompt weighting feature

* diffusers: fix set-sampler error following model switch

* diffusers: use InvokeAIDiffuserComponent for conditioning

* cross_attention_control: stub (no-op) implementations for diffusers

* model_cache: let offload_model work with DiffusionPipeline, sorta.

* models.yaml.example: add diffusers-format model, set as default

* test-invoke-conda: use diffusers-format model
test-invoke-conda: put huggingface-token where the library can use it

* environment-mac: upgrade to diffusers 0.7 (from 0.6)

this was already done for linux; mac must have been lost in the merge.

* preload_models: explicitly load diffusers models

In non-interactive mode too, as long as you're logged in.

* fix(model_cache): don't check `model.config` in diffusers format

clean-up from recent merge.

* diffusers integration: support img2img

* dev: upgrade to diffusers 0.8 (from 0.7.1)

We get to remove some code by using methods that were factored out in the base class.

* refactor: remove backported img2img.get_timesteps

now that we can use it directly from diffusers 0.8.1

* ci: use diffusers model

* dev: upgrade to diffusers 0.9 (from 0.8.1)

* lint: correct annotations for Python 3.9.

* lint: correct AttributeError.name reference for Python 3.9.

* CI: prefer diffusers-1.4 because it no longer requires a token

The RunwayML models still do.

* build: there's yet another place to update requirements?

* configure: try to download models even without token

Models in the CompVis and stabilityai repos no longer require them. (But runwayml still does.)

* configure: add troubleshooting info for config-not-found

* fix(configure): prepend root to config path

* fix(configure): remove second `default: true` from models example

* CI: simplify test-on-push logic now that we don't need secrets

The "test on push but only in forks" logic was only necessary when tests didn't work for PRs-from-forks.

* create an embedding_manager for diffusers

* internal: avoid importing diffusers DummyObject

see https://github.com/huggingface/diffusers/issues/1479

* fix "config attributes…not expected" diffusers warnings.

* fix deprecated scheduler construction

* work around an apparent MPS torch bug that causes conditioning to have no effect

* 🚧 post-rebase repair

* preliminary support for outpainting (no masking yet)

* monkey-patch diffusers.attention and use Invoke lowvram code

* add always_use_cpu arg to bypass MPS

* add cross-attention control support to diffusers (fails on MPS)

For unknown reasons MPS produces garbage output with .swap(). Use
--always_use_cpu arg to invoke.py for now to test this code on MPS.

* diffusers support for the inpainting model

* fix debug_image to not crash with non-RGB images.

* inpainting for the normal model [WIP]

This seems to be performing well until the LAST STEP, at which point it dissolves to confetti.

* fix off-by-one bug in cross-attention-control (#1774)

Prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos>, making a default prompt length of <bos> + 75 prompt tokens + <eos>. The .swap() code was failing to take the column for <bos> at index 0 into account. The changes here do that, and also add extra handling for a single <eos> (which may be redundant but is included for completeness).

Based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is that over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. A change to token n-1 has flow-on effects on how the [1x4x64x64] latent tensor is modified by all the tokens after it, and, as the next step is processed, all the tokens before it as well. Intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. So even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it still had some non-negligible effect, in roughly the right direction, often enough that it seemed to be working properly.
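A minimal sketch of the indexing fix, assuming the layout described above; the function name is illustrative, not the actual InvokeAI code:

```python
# CLIP sequences are laid out as <bos> t0 t1 ... t74 <eos>...,
# so prompt token n lives at attention column n + 1, not n.
def prompt_token_to_clip_column(prompt_token_index: int) -> int:
    return prompt_token_index + 1
```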

* refactor common CrossAttention stuff into a mixin so that the old ldm code can still work if necessary

* inpainting for the normal model. I think it works this time.

* diffusers: reset num_vectors_per_token

sync with 44a0055571

* diffusers: txt2img2img (hires_fix)

with so much slicing and dicing of pipeline methods to stitch them together

* refactor(diffusers): reduce some code duplication amongst the different tasks

* fixup! refactor(diffusers): reduce some code duplication amongst the different tasks

* diffusers: enable DPMSolver++ scheduler

* diffusers: upgrade to diffusers 0.10, add Heun scheduler

* diffusers(ModelCache): stopgap to make from_cpu compatible with diffusers

* CI: default to diffusers-1.5 now that runwayml token requirement is gone

* diffusers: update to 0.10 (and transformers to 4.25)

* diffusers: use xformers when available

diffusers no longer auto-enables this as of 0.10.2.
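A hedged sketch of the opt-in, assuming `pipeline` is a loaded diffusers pipeline:

```python
# diffusers >= 0.10.2 no longer enables xformers automatically,
# so turn it on explicitly when the package is importable.
try:
    import xformers  # noqa: F401

    pipeline.enable_xformers_memory_efficient_attention()
except ImportError:
    pass  # fall back to the default attention implementation
```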

* diffusers: make masked img2img behave better with multi-step schedulers

re-randomizing the noise each step was confusing them.
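A hedged sketch of the idea: sample the noise once, outside the denoising loop, and reuse it at every step when re-noising the masked region. All names here are stand-ins, not the actual pipeline code:

```python
import torch

def blend_masked_latents(latents, init_latents, mask, noise, scheduler, t):
    """Re-noise the init latents with the *same* noise tensor at every step,
    keeping the masked-out region pinned to one consistent trajectory."""
    noised_init = scheduler.add_noise(init_latents, noise, t)
    # mask==1 keeps the freshly denoised latents; mask==0 keeps the init region
    return torch.lerp(noised_init, latents, mask)

# noise = torch.randn_like(init_latents)  # sampled once, before the loop
```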

* diffusers: work better with more models.

fixed relative path problem with local models.

fixed handling of models on the hub that don't have an `fp16` branch.

* diffusers: stopgap fix for attention_maps_callback crash after recent merge

* fixup import merge conflicts

correction for 061c5369a2

* test: add tests/inpainting inputs for masked img2img

* diffusers(AddsMaskedGuidance): partial fix for k-schedulers

Prevents them from crashing, but results are still hot garbage.

* fix --safety_checker arg parsing

and add note to diffusers loader about where safety checker gets called

* generate: fix import error

* CI: don't try to read the old init location

* diffusers: support loading an alternate VAE

* CI: remove sh-syntax if-statement so it doesn't crash powershell

* CI: fold strings in yaml because backslash is not line-continuation in powershell

* attention maps callback stuff for diffusers

* build: fix syntax error in environment-mac

* diffusers: add INITIAL_MODELS with diffusers-compatible repos

* re-enable the embedding manager; closes #1778

* Squashed commit of the following:

commit e4a956abc37fcb5cf188388b76b617bc5c8fda7d
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:43:07 2022 +0100

    import new load handling from EmbeddingManager and cleanup

commit c4abe91a5ba0d415b45bf734068385668b7a66e6
Merge: 032e856e 1efc6397
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:09:53 2022 +0100

    Merge branch 'feature_textual_inversion_mgr' into dev/diffusers_with_textual_inversion_manager

commit 032e856eefb3bbc39534f5daafd25764bcfcef8b
Merge: 8b4f0fe9 bc515e24
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:08:01 2022 +0100

    Merge remote-tracking branch 'upstream/dev/diffusers' into dev/diffusers_with_textual_inversion_manager

commit 1efc6397fc6e61c1aff4b0258b93089d61de5955
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:04:28 2022 +0100

    cleanup and add performance notes

commit e400f804ac471a0ca2ba432fd658778b20c7bdab
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 14:45:07 2022 +0100

    fix bug and update unit tests

commit deb9ae0ae1016750e93ce8275734061f7285a231
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 14:28:29 2022 +0100

    textual inversion manager seems to work

commit 162e02505dec777e91a983c4d0fb52e950d25ff0
Merge: cbad4583 12769b3d
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:58:03 2022 +0100

    Merge branch 'main' into feature_textual_inversion_mgr

commit cbad45836c6aace6871a90f2621a953f49433131
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:54:10 2022 +0100

    use position embeddings

commit 070344c69b0e0db340a183857d0a787b348681d3
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:53:47 2022 +0100

    Don't crash CLI on exceptions

commit b035ac8c6772dfd9ba41b8eeb9103181cda028f8
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:11:55 2022 +0100

    add missing position_embeddings

commit 12769b3d3562ef71e0f54946b532ad077e10043c
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 13:33:25 2022 +0100

    debugging why it don't work

commit bafb7215eabe1515ca5e8388fd3bb2f3ac5362cf
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 13:21:33 2022 +0100

    debugging why it don't work

commit 664a6e9e14
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 12:48:38 2022 +0100

    use TextualInversionManager in place of embeddings (wip, doesn't work)

commit 8b4f0fe9d6e4e2643b36dfa27864294785d7ba4e
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 12:48:38 2022 +0100

    use TextualInversionManager in place of embeddings (wip, doesn't work)

commit ffbe1ab11163ba712e353d89404e301d0e0c6cdf
Merge: 6e4dad60 023df37e
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:37:31 2022 +0100

    Merge branch 'feature_textual_inversion_mgr' into dev/diffusers

commit 023df37eff
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:36:54 2022 +0100

    cleanup

commit 05fac594ea
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:07:49 2022 +0100

    tweak error checking

commit 009f32ed39
Author: damian <null@damianstewart.com>
Date:   Thu Dec 15 21:29:47 2022 +0100

    unit tests passing for embeddings with vector length >1

commit beb1b08d9a
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 13:39:09 2022 +0100

    more explicit equality tests when overwriting

commit 44d8a5a7c8
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 13:30:13 2022 +0100

    wip textual inversion manager (unit tests passing for 1v embedding overwriting)

commit 417c2b57d9
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 12:30:55 2022 +0100

    wip textual inversion manager (unit tests passing for base stuff + padding)

commit 2e80872e3b
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 10:57:57 2022 +0100

    wip new TextualInversionManager

* stop using WeightedFrozenCLIPEmbedder

* store diffusion models locally

- configure_invokeai.py reconfigured to store diffusion models rather than
  CompVis models
- the Hugging Face caching mechanism is used, but the cache is set to ~/invokeai/models/repo_id
- models.yaml does **NOT** use path, just repo_id
- "repo_name" changed to "repo_id" to follow Hugging Face conventions
- Models are loaded with full precision pending further work.

* allow non-local files during development

* path takes priority over repo_id

* MVP for model_cache and configure_invokeai

- Feature complete (almost)

- configure_invokeai.py downloads both .ckpt and diffuser models,
  along with their VAEs. Both types of download are controlled by
  a unified INITIAL_MODELS.yaml file.

- model_cache can load both types of model and switch back and forth,
  offloading the inactive one to CPU. No memory leaks detected

TO DO:

  1. I have not yet turned on the LocalOnly flag for diffuser models, so
     the code will check the Hugging Face repo for updates before using the
     locally cached models. This will break firewalled systems. I am thinking
     of putting in a global check for internet connectivity at startup time
     and setting the LocalOnly flag based on this. It would be good to check
     for updates when there is connectivity.

  2. I have not gone completely through INITIAL_MODELS.yaml to check which
     models are available as diffusers and which are not. So models like
     PaperCut and VoxelArt may not load properly. The runway and stability
     models are checked, as well as the Trinart models.

  3. Add stanzas for SD 2.0 and 2.1 in INITIAL_MODELS.yaml

REMAINING PROBLEMS NOT DIRECTLY RELATED TO MODEL_CACHE:

  1. When loading a .ckpt file there are lots of messages like this:

     Warning! ldm.modules.attention.CrossAttention is no longer being
     maintained. Please use InvokeAICrossAttention instead.

     I'm not sure how to address this.

  2. The ckpt models ***don't actually run*** due to the lack of special-case
     support for them in the generator objects. For example, here's the hard
     crash you get when you run txt2img against the legacy waifu-diffusion-1.3
     model:
```
     >> An error occurred:
     Traceback (most recent call last):
       File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 140, in main
         main_loop(gen, opt)
       File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 371, in main_loop
         gen.prompt2image(
       File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
         results = generator.generate(
       File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
         image = make_image(x_T)
       File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
         pipeline_output = pipeline.image_from_embeddings(
       File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1265, in __getattr__
         raise AttributeError("'{}' object has no attribute '{}'".format(
     AttributeError: 'LatentDiffusion' object has no attribute 'image_from_embeddings'
```

  3. The inpainting diffusion model isn't working. Here's the output of "banana
     sushi" when inpainting-1.5 is loaded:

```
    Traceback (most recent call last):
      File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
        results = generator.generate(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
        image = make_image(x_T)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
        pipeline_output = pipeline.image_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 301, in image_from_embeddings
        result_latents, result_attention_map_saver = self.latents_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 330, in latents_from_embeddings
        result: PipelineIntermediateState = infer_latents_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 185, in __call__
        for result in self.generator_method(*args, **kwargs):
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 367, in generate_latents_from_embeddings
        step_output = self.step(batched_t, latents, guidance_scale,
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 409, in step
        step_output = self.scheduler.step(noise_pred, timestep, latents, **extra_step_kwargs)
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/diffusers/schedulers/scheduling_lms_discrete.py", line 223, in step
        pred_original_sample = sample - sigma * model_output
    RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1
```

* proper support for float32/float16

- configure script now correctly detects the user's preference for
  fp16/32 and downloads the correct diffusers version. If the fp16
  version is not available, it falls back to the fp32 version (see the
  sketch after this list).

- misc code cleanup and simplification in model_cache
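A hedged sketch of the fallback logic, assuming the fp16 weights live on an `fp16` branch of the repo (the exception type is an assumption):

```python
import torch
from diffusers import StableDiffusionPipeline

def load_pipeline(repo_id: str, want_fp16: bool) -> StableDiffusionPipeline:
    if want_fp16:
        try:
            return StableDiffusionPipeline.from_pretrained(
                repo_id, revision="fp16", torch_dtype=torch.float16
            )
        except OSError:
            pass  # no fp16 branch on the hub; fall back to full precision
    return StableDiffusionPipeline.from_pretrained(repo_id)
```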

* add on-the-fly conversion of .ckpt to diffusers models

1. On-the-fly conversion code can be found in the file ldm/invoke/ckpt_to_diffusers.py.

2. A new !optimize command has been added to the CLI. Should be ported to Web GUI.

User experience on the CLI is this:

```
invoke> !optimize /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
INFO: Converting legacy weights file /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt to optimized diffuser model.
      This operation will take 30-60s to complete.
Success. Optimized model is now located at /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
Writing new config file entry for sd-v1-4...

>> New configuration:
sd-v1-4:
  description: Optimized version of sd-v1-4
  format: diffusers
  path: /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4

OK to import [n]? y
>> Verifying that new model loads...
>> Current VRAM usage:  2.60G
>> Offloading stable-diffusion-2.1 to CPU
>> Loading diffusers model from /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
  | Using faster float16 precision
You have disabled the safety checker for <class 'ldm.invoke.generator.diffusers_pipeline.StableDiffusionGeneratorPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
  | training width x height = (512 x 512)
>> Model loaded in 3.48s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage:2.17G
>> Textual inversions available:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
Keep model loaded? [y]
```

* add parallel set of generator files for ckpt legacy generation

* generation using legacy ckpt models now working

* diffusers: fix missing attention_maps_callback

fix for 23eb80b404

* associate legacy CrossAttention with .ckpt models

* enable autoconvert

New --autoconvert CLI option will scan a designated directory for
new .ckpt files, convert them into diffuser models, and import
them into models.yaml.

Works like this:

   invoke.py --autoconvert /path/to/weights/directory

Two new methods were added to ModelCache:

  autoconvert_weights(config_path, weights_directory_path, models_directory_path)
  convert_and_import(ckpt_path, diffuser_path)
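A usage sketch with the signatures above; `cache` is an instantiated ModelCache and the paths are illustrative:

```python
# scan a directory for new .ckpt files, convert, and register them
cache.autoconvert_weights(
    config_path="models.yaml",
    weights_directory_path="/path/to/weights/directory",
    models_directory_path="/path/to/models/optimized-ckpts",
)

# or convert a single checkpoint explicitly
cache.convert_and_import(
    ckpt_path="/path/to/weights/directory/model.ckpt",
    diffuser_path="/path/to/models/optimized-ckpts/model",
)
```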

* diffusers: update to diffusers 0.11 (from 0.10.2)

* fix vae loading & width/height calculation

* refactor: encapsulate these conditioning data into one container

* diffusers: fix some noise-scaling issues by pushing the noise-mixing down to the common function

* add support for safetensors and accelerate

* set local_files_only when internet unreachable
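A hedged sketch of the probe-then-load pattern; the helper and host choice are illustrative:

```python
import socket

from diffusers import StableDiffusionPipeline

def internet_reachable(host: str = "huggingface.co", port: int = 443) -> bool:
    try:
        socket.create_connection((host, port), timeout=3.0).close()
        return True
    except OSError:
        return False

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    local_files_only=not internet_reachable(),
)
```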

* diffusers: fix error-handling path when model repo has no fp16 branch

* fix generatorinpaint error

Fixes:
  "ModuleNotFoundError: No module named 'ldm.invoke.generatorinpaint'"
   https://github.com/invoke-ai/InvokeAI/pull/1583#issuecomment-1363634318

* quench diffuser safety-checker warning

* diffusers: support stochastic DDIM eta parameter
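A hedged sketch of how the parameter can be threaded through, mirroring the keyword-filtering pattern diffusers uses internally; names are stand-ins:

```python
import inspect

def scheduler_step(scheduler, noise_pred, timestep, latents, eta: float = 0.0):
    # Forward `eta` only to schedulers whose step() accepts it (DDIM does;
    # most other schedulers do not).
    extra = {}
    if "eta" in inspect.signature(scheduler.step).parameters:
        extra["eta"] = eta
    return scheduler.step(noise_pred, timestep, latents, **extra)
```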

* fix conda env creation on macos

* fix cross-attention with diffusers 0.11

* diffusers: the VAE needs to be tiling as well as the U-Net

* diffusers: comment on subfolders

* diffusers: embiggen!

* diffusers: make model_cache.list_models serializable

* diffusers(inpaint): restore scaling functionality

* fix requirements clash between numba and numpy 1.24

* diffusers: allow inpainting model to do non-inpainting tasks

* start expanding model_cache functionality

* add import_ckpt_model() and import_diffuser_model() methods to model_manager

- in addition, model_cache.py is now renamed to model_manager.py

* allow "recommended" flag to be optional in INITIAL_MODELS.yaml

* configure_invokeai now downloads VAE diffusers in advance

* rename ModelCache to ModelManager

* remove support for `repo_name` in models.yaml

* check for and refuse to load embeddings trained on incompatible models

* models.yaml.example: s/repo_name/repo_id

and remove extra INITIAL_MODELS now that the main one has diffusers models in it.

* add MVP textual inversion script

* refactor(InvokeAIDiffuserComponent): factor out _combine()

* InvokeAIDiffuserComponent: implement threshold

* InvokeAIDiffuserComponent: diagnostic logs for threshold

...this does not look right

* add a curses-based frontend to textual inversion

- not quite working yet
- requires npyscreen installed
- on windows will also have the windows-curses requirement, but not added
  to requirements yet

* add curses-based interface for textual inversion

* fix crash in convert_and_import()

- This corrects a "local variable referenced before assignment" error
  in model_manager.convert_and_import()

* potential workaround for no 'state_dict' key error

- As reported in https://github.com/huggingface/diffusers/issues/1876

* create TI output dir if needed

* Update environment-lin-cuda.yml (#2159)

Fixing line 42 to be the proper order to define the transformers requirement: ~= instead of =~

* diffusers: update sampler-to-scheduler mapping

based on https://github.com/huggingface/diffusers/issues/277#issuecomment-1371428672
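For illustration, a hedged subset of what such a mapping can look like with diffusers 0.11 class names; the authoritative table is the one in the code:

```python
from diffusers import (
    DDIMScheduler,
    DPMSolverMultistepScheduler,
    EulerAncestralDiscreteScheduler,
    EulerDiscreteScheduler,
    HeunDiscreteScheduler,
    LMSDiscreteScheduler,
)

SAMPLER_TO_SCHEDULER = {
    "ddim": DDIMScheduler,
    "k_lms": LMSDiscreteScheduler,
    "k_euler": EulerDiscreteScheduler,
    "k_euler_a": EulerAncestralDiscreteScheduler,
    "k_heun": HeunDiscreteScheduler,
    "k_dpmpp_2": DPMSolverMultistepScheduler,
}
```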

* improve user experience for ckpt to diffusers conversion

- !optimize_models command now operates on an existing ckpt file entry in models.yaml
- replaces existing entry, rather than adding a new one
- offers to delete the ckpt file after conversion

* web: adapt progress callback to deal with old generator or new diffusers pipeline

* clean-up model_manager code

- add_model() verified to work for .ckpt local paths,
  .ckpt remote URLs, diffusers local paths, and
  diffusers repo_ids

- convert_and_import() verified to work for local and
  remote .ckpt files

* handle edge cases for import_model() and convert_model()

* add support for safetensor .ckpt files

* fix name error

* code cleanup with pyflake

* improve model setting behavior

- If the user enters an invalid model name at startup time, will not
  try to load it, warn, and use default model
- CLI UI enhancement: include currently active model in the command
  line prompt.

* update test-invoke-pip.yml
- fix model cache path to point to runwayml/stable-diffusion-v1-5
- remove `skip-sd-weights` from configure_invokeai.py args

* exclude dev/diffusers from "fail for draft PRs"

* disable "fail on PR jobs"

* re-add `--skip-sd-weights` since no space

* update workflow environments
- include `INVOKE_MODEL_RECONFIGURE: '--yes'`

* clean up model load failure handling

- Allow CLI to run even when no model is defined or loadable.
- Inhibit stack trace when model load fails - only show last error
- Give user *option* to run configure_invokeai.py when no models
  successfully load.
- Restart invokeai after reconfiguration.

* further edge-case handling

1) only one model in models.yaml file, and that model is broken
2) no models in models.yaml
3) models.yaml doesn't exist at all

* fix incorrect model status listing

- "cached" was not being returned from list_models()
- normalize handling of exceptions during model loading:
   - Passing an invalid model name to generate.set_model() will return
     a KeyError
   - All other exceptions are returned as the appropriate Exception

* CI: do download weights (if not already cached)

* diffusers: fix scheduler loading in offline mode

* CI: fix model name (no longer has `diffusers-` prefix)

* Update txt2img2img.py (#2256)

* fixes to share models with HuggingFace cache system

- If HF_HOME environment variable is defined, then all huggingface models
  are stored in that directory following the standard conventions.
- For seamless interoperability, set HF_HOME to ~/.cache/huggingface
- If HF_HOME not defined, then models are stored in ~/invokeai/models.
  This is equivalent to setting HF_HOME to ~/invokeai/models

A future commit will add a migration mechanism so that this change doesn't
break previous installs.
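A hedged sketch of the resolution order described above:

```python
import os
from pathlib import Path

def models_cache_dir() -> Path:
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        # share the standard HuggingFace cache location
        return Path(hf_home)
    # self-contained default, laid out with the standard cache conventions
    return Path("~/invokeai/models").expanduser()
```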

* feat - make model storage compatible with hugging face caching system

This commit alters the InvokeAI model directory to be compatible with
hugging face, making it easier to share diffusers (and other models)
across different programs.

- If the HF_HOME environment variable is not set, then models are
  cached in ~/invokeai/models in a format that is identical to the
  HuggingFace cache.

- If HF_HOME is set, then models are cached wherever HF_HOME points.

- To enable sharing with other HuggingFace library clients, set
  HF_HOME to ~/.cache/huggingface to set the default cache location
  or to ~/invokeai/models to have huggingface cache inside InvokeAI.

* fix error "no attribute CkptInpaint"

* model_manager.list_models() returns entire model config stanza+status

* Initial Draft - Model Manager Diffusers

* added hash function to diffusers

* implement sha256 hashes on diffusers models

* Add Model Manager Support for Diffusers

* fix various problems with model manager

- in CLI import functions, fix "not enough values to unpack" error from
  _get_name_and_desc()
- fix crash when using old-style vae: value with new-style diffusers model

* rebuild frontend

* fix dictconfig-not-serializable issue

* fix "'NoneType' object is not subscriptable" crash in model_manager

* fix "str has no attribute get" error in model_manager list_models()

* Add path and repo_id support for Diffusers Model Manager

Also fixes bugs

* Fix tooltip IT localization not working

* Add Version Number To WebUI

* Optimize Model Search

* Fix incorrect font on the Model Manager UI

* Fix image degradation on merge fixes - [Experimental]

This change should effectively fix a couple of things.

- Fix image degradation on subsequent merges of the canvas layers.
- Fix the slight transparent border that is left behind when filling the bounding box with a color.
- Fix the left over line of color when filling a bounding box with color.

So far there are no side effects for this. If any, please report.

* Add local model filtering for Diffusers / Checkpoints

* Go to home on modal close for the Add Modal UI

* Styling Fixes

* Model Manager Diffusers Localization Update

* Add Safe Tensor scanning to Model Manager

* Fix model edit form dispatching string values instead of numbers.

* Resolve VAE handling / edge cases for supplied repos

* defer injecting tokens for textual inversions until they're used for the first time

* squash a console warning

* implement model migration check

* add_model() overwrites previous config rather than merges

* fix model config file attribute merging

* fix precision handling in textual inversion script

* allow ckpt conversion script to work with safetensors .ckpts

Applied patch here:
beb932c5d1

* fix name "args" is not defined crash in textual_inversion_training

* fix a second NameError: name 'args' is not defined crash

* fix loading of the safety checker from the global cache dir

* add installation step to textual inversion frontend

- After a successful training run, the script will copy learned_embeds.bin
  to a subfolder of the embeddings directory.
- User given the option to delete the logs and intermediate checkpoints
  (which together use 7-8G of space)
- If textual inversion training fails, reports the error gracefully.

* don't crash out on incompatible embeddings

- put try: blocks around places where the system tries to load an embedding
  which is incompatible with the currently loaded model

* add support for checkpoint resuming

* textual inversion preferences are saved and restored between sessions

- Preferences are stored in a file named text-inversion-training/preferences.conf
- Currently the resume-from-checkpoint option is not working correctly. Possible
  bug in textual_inversion_training.py?

* copy learned_embeddings.bin into right location

* add front end for diffusers model merging

- Front end doesn't do anything yet!!!!
- Made change to model name parsing in CLI to support ability to have merged models
  with the "+" character in their names.

* improve inpainting experience

- recommend ckpt version of inpainting-1.5 to user
- fix get_noise() bug in ckpt version of omnibus.py

* update environment*yml

* tweak instructions to install HuggingFace token

* bump version number

* enhance update scripts

- update scripts will now fetch new INITIAL_MODELS.yaml so that
  configure_invokeai.py will know about the diffusers versions.

* enhance invoke.sh/invoke.bat launchers

- added configure_invokeai.py to menu
- menu defaults to browser-based invoke

* remove conda workflow (#2321)

* fix `token_ids has shape torch.Size([79]) - expected [77]`
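A hedged sketch of the shape fix named in the error; `eos_token_id` comes from the tokenizer and the helper is illustrative:

```python
MAX_TOKENS = 77  # CLIP's fixed context length: <bos> + 75 tokens + <eos>

def pad_or_truncate(token_ids: list[int], eos_token_id: int) -> list[int]:
    token_ids = token_ids[:MAX_TOKENS]
    return token_ids + [eos_token_id] * (MAX_TOKENS - len(token_ids))
```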

* update CHANGELOG.md with 2.3.* info

- Add information on how formats have changed and the upgrade process.
- Add short bug list.

Co-authored-by: Damian Stewart <d@damianstewart.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Co-authored-by: Wybartel-luxmc <37852506+Wybartel-luxmc@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: mickr777 <115216705+mickr777@users.noreply.github.com>
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
2023-01-15 09:22:46 -05:00


import base64
import glob
import io
import json
import math
import mimetypes
import os
import shutil
import traceback
from threading import Event
from uuid import uuid4
import eventlet
from PIL import Image
from PIL.Image import Image as ImageType
from flask import Flask, redirect, send_from_directory, request, make_response
from flask_socketio import SocketIO
from werkzeug.utils import secure_filename
from backend.modules.get_canvas_generation_mode import (
    get_canvas_generation_mode,
)
from backend.modules.parameters import parameters_to_command
from ldm.generate import Generate
from ldm.invoke.args import Args, APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.invoke.conditioning import get_tokens_for_prompt, get_prompt_structure
from ldm.invoke.generator.diffusers_pipeline import PipelineIntermediateState
from ldm.invoke.generator.inpaint import infill_methods
from ldm.invoke.globals import Globals
from ldm.invoke.pngwriter import PngWriter, retrieve_metadata
from ldm.invoke.prompt_parser import split_weighted_subprompts, Blend

# Loading Arguments
opt = Args()
args = opt.parse_args()

# Set the root directory for static files and relative paths
args.root_dir = os.path.expanduser(args.root_dir or "..")
if not os.path.isabs(args.outdir):
    args.outdir = os.path.join(args.root_dir, args.outdir)

# normalize the config directory relative to root
if not os.path.isabs(opt.conf):
    opt.conf = os.path.normpath(os.path.join(Globals.root, opt.conf))


class InvokeAIWebServer:
    def __init__(self, generate: Generate, gfpgan, codeformer, esrgan) -> None:
        self.host = args.host
        self.port = args.port

        self.generate = generate
        self.gfpgan = gfpgan
        self.codeformer = codeformer
        self.esrgan = esrgan

        self.canceled = Event()
        self.ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg"}

    def allowed_file(self, filename: str) -> bool:
        return (
            "." in filename
            and filename.rsplit(".", 1)[1].lower() in self.ALLOWED_EXTENSIONS
        )

    def run(self):
        self.setup_app()
        self.setup_flask()

    def setup_flask(self):
        # Fix missing mimetypes on Windows
        mimetypes.add_type("application/javascript", ".js")
        mimetypes.add_type("text/css", ".css")

        # Socket IO
        logger = True if args.web_verbose else False
        engineio_logger = True if args.web_verbose else False
        max_http_buffer_size = 10000000

        socketio_args = {
            "logger": logger,
            "engineio_logger": engineio_logger,
            "max_http_buffer_size": max_http_buffer_size,
            "ping_interval": (50, 50),
            "ping_timeout": 60,
        }

        if opt.cors:
            _cors = opt.cors
            # convert list back into comma-separated string,
            # be defensive here, not sure in what form this arrives
            if isinstance(_cors, list):
                _cors = ",".join(_cors)
            if "," in _cors:
                _cors = _cors.split(",")
            socketio_args["cors_allowed_origins"] = _cors

        frontend_path = self.find_frontend()
        self.app = Flask(__name__, static_url_path="", static_folder=frontend_path)

        self.socketio = SocketIO(self.app, **socketio_args)

        # Keep Server Alive Route
        @self.app.route("/flaskwebgui-keep-server-alive")
        def keep_alive():
            return {"message": "Server Running"}

        # Outputs Route
        self.app.config["OUTPUTS_FOLDER"] = os.path.abspath(args.outdir)

        @self.app.route("/outputs/<path:file_path>")
        def outputs(file_path):
            return send_from_directory(self.app.config["OUTPUTS_FOLDER"], file_path)

        # Base Route
        @self.app.route("/")
        def serve():
            if args.web_develop:
                return redirect("http://127.0.0.1:5173")
            else:
                return send_from_directory(self.app.static_folder, "index.html")

        @self.app.route("/upload", methods=["POST"])
        def upload():
            try:
                data = json.loads(request.form["data"])
                filename = ""
                # check if the post request has the file part
                if "file" in request.files:
                    file = request.files["file"]
                    # If the user does not select a file, the browser submits an
                    # empty file without a filename.
                    if file.filename == "":
                        return make_response("No file selected", 400)
                    filename = file.filename
                elif "dataURL" in data:
                    file = dataURL_to_bytes(data["dataURL"])
                    if "filename" not in data or data["filename"] == "":
                        return make_response("No filename provided", 400)
                    filename = data["filename"]
                else:
                    return make_response("No file or dataURL", 400)

                kind = data["kind"]

                if kind == "init":
                    path = self.init_image_path
                elif kind == "temp":
                    path = self.temp_image_path
                elif kind == "result":
                    path = self.result_path
                elif kind == "mask":
                    path = self.mask_image_path
                else:
                    return make_response(f"Invalid upload kind: {kind}", 400)

                if not self.allowed_file(filename):
                    return make_response(
                        f'Invalid file type, must be one of: {", ".join(self.ALLOWED_EXTENSIONS)}',
                        400,
                    )

                secured_filename = secure_filename(filename)
                uuid = uuid4().hex
                truncated_uuid = uuid[:8]

                split = os.path.splitext(secured_filename)
                name = f"{split[0]}.{truncated_uuid}{split[1]}"

                file_path = os.path.join(path, name)

                if "dataURL" in data:
                    with open(file_path, "wb") as f:
                        f.write(file)
                else:
                    file.save(file_path)

                mtime = os.path.getmtime(file_path)

                pil_image = Image.open(file_path)

                if "cropVisible" in data and data["cropVisible"] == True:
                    visible_image_bbox = pil_image.getbbox()
                    pil_image = pil_image.crop(visible_image_bbox)
                    pil_image.save(file_path)

                (width, height) = pil_image.size

                thumbnail_path = save_thumbnail(
                    pil_image, os.path.basename(file_path), self.thumbnail_image_path
                )

                response = {
                    "url": self.get_url_from_image_path(file_path),
                    "thumbnail": self.get_url_from_image_path(thumbnail_path),
                    "mtime": mtime,
                    "width": width,
                    "height": height,
                }

                return make_response(response, 200)

            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")
                return make_response("Error uploading file", 500)

        self.load_socketio_listeners(self.socketio)

        if args.gui:
            print(">> Launching Invoke AI GUI")
            try:
                from flaskwebgui import FlaskUI

                FlaskUI(
                    app=self.app,
                    socketio=self.socketio,
                    server="flask_socketio",
                    width=1600,
                    height=1000,
                    port=self.port,
                ).run()
            except KeyboardInterrupt:
                import sys

                sys.exit(0)
        else:
            useSSL = args.certfile or args.keyfile
            print(">> Started Invoke AI Web Server!")
            if self.host == "0.0.0.0":
                print(
                    f"Point your browser at http{'s' if useSSL else ''}://localhost:{self.port} or use the host's DNS name or IP address."
                )
            else:
                print(
                    ">> Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address."
                )
                print(
                    f">> Point your browser at http{'s' if useSSL else ''}://{self.host}:{self.port}"
                )
            if not useSSL:
                self.socketio.run(app=self.app, host=self.host, port=self.port)
            else:
                self.socketio.run(
                    app=self.app,
                    host=self.host,
                    port=self.port,
                    certfile=args.certfile,
                    keyfile=args.keyfile,
                )

    def find_frontend(self):
        my_dir = os.path.dirname(__file__)
        # LS: setup.py seems to put the frontend in different places on different systems, so
        # this is fragile and needs to be replaced with a better way of finding the front end.
        for candidate in (
            os.path.join(my_dir, "..", "frontend", "dist"),  # pip install -e .
            os.path.join(my_dir, "../../../../frontend", "dist"),  # pip install . (Linux, Mac)
            os.path.join(my_dir, "../../../frontend", "dist"),  # pip install . (Windows)
        ):
            if os.path.exists(candidate):
                return candidate
        # fail loudly if no candidate exists
        raise FileNotFoundError("Frontend files cannot be found. Cannot continue.")

    def setup_app(self):
        self.result_url = "outputs/"
        self.init_image_url = "outputs/init-images/"
        self.mask_image_url = "outputs/mask-images/"
        self.intermediate_url = "outputs/intermediates/"
        self.temp_image_url = "outputs/temp-images/"
        self.thumbnail_image_url = "outputs/thumbnails/"

        # location for "finished" images
        self.result_path = args.outdir
        # temporary path for intermediates
        self.intermediate_path = os.path.join(self.result_path, "intermediates/")
        # path for user-uploaded init images and masks
        self.init_image_path = os.path.join(self.result_path, "init-images/")
        self.mask_image_path = os.path.join(self.result_path, "mask-images/")
        # path for temp images e.g. gallery generations which are not committed
        self.temp_image_path = os.path.join(self.result_path, "temp-images/")
        # path for thumbnail images
        self.thumbnail_image_path = os.path.join(self.result_path, "thumbnails/")
        # txt log
        self.log_path = os.path.join(self.result_path, "invoke_log.txt")
        # make all output paths
        [
            os.makedirs(path, exist_ok=True)
            for path in [
                self.result_path,
                self.intermediate_path,
                self.init_image_path,
                self.mask_image_path,
                self.temp_image_path,
                self.thumbnail_image_path,
            ]
        ]

    def load_socketio_listeners(self, socketio):
        @socketio.on("requestSystemConfig")
        def handle_request_capabilities():
            print(">> System config requested")
            config = self.get_system_config()
            config["model_list"] = self.generate.model_manager.list_models()
            config["infill_methods"] = infill_methods()
            socketio.emit("systemConfig", config)

        @socketio.on("searchForModels")
        def handle_search_models(search_folder: str):
            try:
                if not search_folder:
                    socketio.emit(
                        "foundModels",
                        {"search_folder": None, "found_models": None},
                    )
                else:
                    (
                        search_folder,
                        found_models,
                    ) = self.generate.model_manager.search_models(search_folder)
                    socketio.emit(
                        "foundModels",
                        {"search_folder": search_folder, "found_models": found_models},
                    )
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("addNewModel")
        def handle_add_model(new_model_config: dict):
            try:
                model_name = new_model_config["name"]
                del new_model_config["name"]
                model_attributes = new_model_config
                if len(model_attributes["vae"]) == 0:
                    del model_attributes["vae"]
                update = False
                current_model_list = self.generate.model_manager.list_models()
                if model_name in current_model_list:
                    update = True

                print(f">> Adding New Model: {model_name}")

                self.generate.model_manager.add_model(
                    model_name=model_name,
                    model_attributes=model_attributes,
                    clobber=True,
                )
                self.generate.model_manager.commit(opt.conf)

                new_model_list = self.generate.model_manager.list_models()
                socketio.emit(
                    "newModelAdded",
                    {
                        "new_model_name": model_name,
                        "model_list": new_model_list,
                        "update": update,
                    },
                )
                print(f">> New Model Added: {model_name}")
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("deleteModel")
        def handle_delete_model(model_name: str):
            try:
                print(f">> Deleting Model: {model_name}")
                self.generate.model_manager.del_model(model_name)
                self.generate.model_manager.commit(opt.conf)
                updated_model_list = self.generate.model_manager.list_models()
                socketio.emit(
                    "modelDeleted",
                    {
                        "deleted_model_name": model_name,
                        "model_list": updated_model_list,
                    },
                )
                print(f">> Model Deleted: {model_name}")
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("requestModelChange")
        def handle_set_model(model_name: str):
            try:
                print(f">> Model change requested: {model_name}")
                model = self.generate.set_model(model_name)
                model_list = self.generate.model_manager.list_models()
                if model is None:
                    socketio.emit(
                        "modelChangeFailed",
                        {"model_name": model_name, "model_list": model_list},
                    )
                else:
                    socketio.emit(
                        "modelChanged",
                        {"model_name": model_name, "model_list": model_list},
                    )
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("requestEmptyTempFolder")
        def empty_temp_folder():
            try:
                temp_files = glob.glob(os.path.join(self.temp_image_path, "*"))
                for f in temp_files:
                    try:
                        os.remove(f)
                        thumbnail_path = os.path.join(
                            self.thumbnail_image_path,
                            os.path.splitext(os.path.basename(f))[0] + ".webp",
                        )
                        os.remove(thumbnail_path)
                    except Exception as e:
                        socketio.emit(
                            "error", {"message": f"Unable to delete {f}: {str(e)}"}
                        )
                        pass

                socketio.emit("tempFolderEmptied")
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("requestSaveStagingAreaImageToGallery")
        def save_temp_image_to_gallery(url):
            try:
                image_path = self.get_image_path_from_url(url)
                new_path = os.path.join(self.result_path, os.path.basename(image_path))
                shutil.copy2(image_path, new_path)

                if os.path.splitext(new_path)[1] == ".png":
                    metadata = retrieve_metadata(new_path)
                else:
                    metadata = {}

                pil_image = Image.open(new_path)
                (width, height) = pil_image.size

                thumbnail_path = save_thumbnail(
                    pil_image, os.path.basename(new_path), self.thumbnail_image_path
                )

                image_array = [
                    {
                        "url": self.get_url_from_image_path(new_path),
                        "thumbnail": self.get_url_from_image_path(thumbnail_path),
                        "mtime": os.path.getmtime(new_path),
                        "metadata": metadata,
                        "width": width,
                        "height": height,
                        "category": "result",
                    }
                ]

                socketio.emit(
                    "galleryImages",
                    {"images": image_array, "category": "result"},
                )
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("requestLatestImages")
        def handle_request_latest_images(category, latest_mtime):
            try:
                base_path = (
                    self.result_path if category == "result" else self.init_image_path
                )

                paths = []
                for ext in ("*.png", "*.jpg", "*.jpeg"):
                    paths.extend(glob.glob(os.path.join(base_path, ext)))

                image_paths = sorted(paths, key=lambda x: os.path.getmtime(x), reverse=True)
                image_paths = list(
                    filter(
                        lambda x: os.path.getmtime(x) > latest_mtime,
                        image_paths,
                    )
                )

                image_array = []
                for path in image_paths:
                    try:
                        if os.path.splitext(path)[1] == ".png":
                            metadata = retrieve_metadata(path)
                        else:
                            metadata = {}

                        pil_image = Image.open(path)
                        (width, height) = pil_image.size

                        thumbnail_path = save_thumbnail(
                            pil_image, os.path.basename(path), self.thumbnail_image_path
                        )

                        image_array.append(
                            {
                                "url": self.get_url_from_image_path(path),
                                "thumbnail": self.get_url_from_image_path(thumbnail_path),
                                "mtime": os.path.getmtime(path),
                                "metadata": metadata.get("sd-metadata"),
                                "dreamPrompt": metadata.get("Dream"),
                                "width": width,
                                "height": height,
                                "category": category,
                            }
                        )
                    except Exception as e:
                        socketio.emit(
                            "error", {"message": f"Unable to load {path}: {str(e)}"}
                        )
                        pass

                socketio.emit(
                    "galleryImages",
                    {"images": image_array, "category": category},
                )
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("requestImages")
        def handle_request_images(category, earliest_mtime=None):
            try:
                page_size = 50

                base_path = (
                    self.result_path if category == "result" else self.init_image_path
                )

                paths = []
                for ext in ("*.png", "*.jpg", "*.jpeg"):
                    paths.extend(glob.glob(os.path.join(base_path, ext)))

                image_paths = sorted(paths, key=lambda x: os.path.getmtime(x), reverse=True)

                if earliest_mtime:
                    image_paths = list(
                        filter(
                            lambda x: os.path.getmtime(x) < earliest_mtime,
                            image_paths,
                        )
                    )

                areMoreImagesAvailable = len(image_paths) >= page_size
                image_paths = image_paths[slice(0, page_size)]

                image_array = []
                for path in image_paths:
                    try:
                        if os.path.splitext(path)[1] == ".png":
                            metadata = retrieve_metadata(path)
                        else:
                            metadata = {}

                        pil_image = Image.open(path)
                        (width, height) = pil_image.size

                        thumbnail_path = save_thumbnail(
                            pil_image, os.path.basename(path), self.thumbnail_image_path
                        )

                        image_array.append(
                            {
                                "url": self.get_url_from_image_path(path),
                                "thumbnail": self.get_url_from_image_path(thumbnail_path),
                                "mtime": os.path.getmtime(path),
                                "metadata": metadata.get("sd-metadata"),
                                "dreamPrompt": metadata.get("Dream"),
                                "width": width,
                                "height": height,
                                "category": category,
                            }
                        )
                    except Exception as e:
                        print(f">> Unable to load {path}")
                        socketio.emit(
                            "error", {"message": f"Unable to load {path}: {str(e)}"}
                        )
                        pass

                socketio.emit(
                    "galleryImages",
                    {
                        "images": image_array,
                        "areMoreImagesAvailable": areMoreImagesAvailable,
                        "category": category,
                    },
                )
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("generateImage")
        def handle_generate_image_event(
            generation_parameters, esrgan_parameters, facetool_parameters
        ):
            try:
                # truncate long init_mask/init_img base64 if needed
                printable_parameters = {
                    **generation_parameters,
                }

                if "init_img" in generation_parameters:
                    printable_parameters["init_img"] = (
                        printable_parameters["init_img"][:64] + "..."
                    )

                if "init_mask" in generation_parameters:
                    printable_parameters["init_mask"] = (
                        printable_parameters["init_mask"][:64] + "..."
                    )

                print(
                    f">> Image generation requested: {printable_parameters}\nESRGAN parameters: {esrgan_parameters}\nFacetool parameters: {facetool_parameters}"
                )

                self.generate_images(
                    generation_parameters,
                    esrgan_parameters,
                    facetool_parameters,
                )
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("runPostprocessing")
        def handle_run_postprocessing(original_image, postprocessing_parameters):
            try:
                print(
                    f'>> Postprocessing requested for "{original_image["url"]}": {postprocessing_parameters}'
                )

                progress = Progress()

                socketio.emit("progressUpdate", progress.to_formatted_dict())
                eventlet.sleep(0)

                original_image_path = self.get_image_path_from_url(
                    original_image["url"]
                )

                image = Image.open(original_image_path)

                try:
                    seed = original_image["metadata"]["image"]["seed"]
                except KeyError:
                    seed = "unknown_seed"
                    pass

                if postprocessing_parameters["type"] == "esrgan":
                    progress.set_current_status("common:statusUpscalingESRGAN")
                elif postprocessing_parameters["type"] == "gfpgan":
                    progress.set_current_status("common:statusRestoringFacesGFPGAN")
                elif postprocessing_parameters["type"] == "codeformer":
                    progress.set_current_status("common:statusRestoringFacesCodeFormer")

                socketio.emit("progressUpdate", progress.to_formatted_dict())
                eventlet.sleep(0)

                if postprocessing_parameters["type"] == "esrgan":
                    image = self.esrgan.process(
                        image=image,
                        upsampler_scale=postprocessing_parameters["upscale"][0],
                        strength=postprocessing_parameters["upscale"][1],
                        seed=seed,
                    )
                elif postprocessing_parameters["type"] == "gfpgan":
                    image = self.gfpgan.process(
                        image=image,
                        strength=postprocessing_parameters["facetool_strength"],
                        seed=seed,
                    )
                elif postprocessing_parameters["type"] == "codeformer":
                    image = self.codeformer.process(
                        image=image,
                        strength=postprocessing_parameters["facetool_strength"],
                        fidelity=postprocessing_parameters["codeformer_fidelity"],
                        seed=seed,
                        device="cpu"
                        if str(self.generate.device) == "mps"
                        else self.generate.device,
                    )
                else:
                    raise TypeError(
                        f'{postprocessing_parameters["type"]} is not a valid postprocessing type'
                    )

                progress.set_current_status("common:statusSavingImage")
                socketio.emit("progressUpdate", progress.to_formatted_dict())
                eventlet.sleep(0)

                postprocessing_parameters["seed"] = seed
                metadata = self.parameters_to_post_processed_image_metadata(
                    parameters=postprocessing_parameters,
                    original_image_path=original_image_path,
                )

                command = parameters_to_command(postprocessing_parameters)

                (width, height) = image.size

                path = self.save_result_image(
                    image,
                    command,
                    metadata,
                    self.result_path,
                    postprocessing=postprocessing_parameters["type"],
                )

                thumbnail_path = save_thumbnail(
                    image, os.path.basename(path), self.thumbnail_image_path
                )

                self.write_log_message(
                    f'[Postprocessed] "{original_image_path}" > "{path}": {postprocessing_parameters}'
                )

                progress.mark_complete()
                socketio.emit("progressUpdate", progress.to_formatted_dict())
                eventlet.sleep(0)

                socketio.emit(
                    "postprocessingResult",
                    {
                        "url": self.get_url_from_image_path(path),
                        "thumbnail": self.get_url_from_image_path(thumbnail_path),
                        "mtime": os.path.getmtime(path),
                        "metadata": metadata,
                        "dreamPrompt": command,
                        "width": width,
                        "height": height,
                    },
                )
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

        @socketio.on("cancel")
        def handle_cancel():
            print(">> Cancel processing requested")
            self.canceled.set()

        # TODO: I think this needs a safety mechanism.
        @socketio.on("deleteImage")
        def handle_delete_image(url, thumbnail, uuid, category):
            try:
                print(f'>> Delete requested "{url}"')
                from send2trash import send2trash

                path = self.get_image_path_from_url(url)
                thumbnail_path = self.get_image_path_from_url(thumbnail)
                send2trash(path)
                send2trash(thumbnail_path)
                socketio.emit(
                    "imageDeleted",
                    {"url": url, "uuid": uuid, "category": category},
                )
            except Exception as e:
                self.socketio.emit("error", {"message": (str(e))})
                print("\n")
                traceback.print_exc()
                print("\n")

    # App Functions
    def get_system_config(self):
        model_list: dict = self.generate.model_manager.list_models()
        active_model_name = None

        for model_name, model_dict in model_list.items():
            if model_dict["status"] == "active":
                active_model_name = model_name

        return {
            "model": "stable diffusion",
            "model_weights": active_model_name,
            "model_hash": self.generate.model_hash,
            "app_id": APP_ID,
            "app_version": APP_VERSION,
        }
def generate_images(
self, generation_parameters, esrgan_parameters, facetool_parameters
):
try:
self.canceled.clear()
step_index = 1
prior_variations = (
generation_parameters["with_variations"]
if "with_variations" in generation_parameters
else []
)
actual_generation_mode = generation_parameters["generation_mode"]
original_bounding_box = None
progress = Progress(generation_parameters=generation_parameters)
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
"""
TODO:
If a result image is used as an init image, and then deleted, we will want to be
able to use it as an init image in the future. Need to handle this case.
"""
"""
Prepare for generation based on generation_mode
"""
if generation_parameters["generation_mode"] == "unifiedCanvas":
"""
generation_parameters["init_img"] is a base64 image
generation_parameters["init_mask"] is a base64 image
So we need to convert each into a PIL Image.
"""
truncated_outpaint_image_b64 = generation_parameters["init_img"][:64]
truncated_outpaint_mask_b64 = generation_parameters["init_mask"][:64]
init_img_url = generation_parameters["init_img"]
original_bounding_box = generation_parameters["bounding_box"].copy()
initial_image = dataURL_to_image(
generation_parameters["init_img"]
).convert("RGBA")
"""
The outpaint image and mask are pre-cropped by the UI, so the bounding box we pass
to the generator should be:
{
"x": 0,
"y": 0,
"width": original_bounding_box["width"],
"height": original_bounding_box["height"]
}
"""
generation_parameters["bounding_box"]["x"] = 0
generation_parameters["bounding_box"]["y"] = 0
# Convert mask dataURL to an image and convert to greyscale
mask_image = dataURL_to_image(
generation_parameters["init_mask"]
).convert("L")
actual_generation_mode = get_canvas_generation_mode(
initial_image, mask_image
)
"""
Apply the mask to the init image, creating a "mask" image with
transparency where inpainting should occur. This is the kind of
mask that prompt2image() needs.
"""
alpha_mask = initial_image.copy()
alpha_mask.putalpha(mask_image)
generation_parameters["init_img"] = initial_image
generation_parameters["init_mask"] = alpha_mask
# Remove the unneeded parameters for whichever mode we are doing
if actual_generation_mode == "inpainting":
generation_parameters.pop("seam_size", None)
generation_parameters.pop("seam_blur", None)
generation_parameters.pop("seam_strength", None)
generation_parameters.pop("seam_steps", None)
generation_parameters.pop("tile_size", None)
generation_parameters.pop("force_outpaint", None)
elif actual_generation_mode == "img2img":
generation_parameters["height"] = original_bounding_box["height"]
generation_parameters["width"] = original_bounding_box["width"]
generation_parameters.pop("init_mask", None)
generation_parameters.pop("seam_size", None)
generation_parameters.pop("seam_blur", None)
generation_parameters.pop("seam_strength", None)
generation_parameters.pop("seam_steps", None)
generation_parameters.pop("tile_size", None)
generation_parameters.pop("force_outpaint", None)
generation_parameters.pop("infill_method", None)
elif actual_generation_mode == "txt2img":
generation_parameters["height"] = original_bounding_box["height"]
generation_parameters["width"] = original_bounding_box["width"]
generation_parameters.pop("strength", None)
generation_parameters.pop("fit", None)
generation_parameters.pop("init_img", None)
generation_parameters.pop("init_mask", None)
generation_parameters.pop("seam_size", None)
generation_parameters.pop("seam_blur", None)
generation_parameters.pop("seam_strength", None)
generation_parameters.pop("seam_steps", None)
generation_parameters.pop("tile_size", None)
generation_parameters.pop("force_outpaint", None)
generation_parameters.pop("infill_method", None)
elif generation_parameters["generation_mode"] == "img2img":
init_img_url = generation_parameters["init_img"]
init_img_path = self.get_image_path_from_url(init_img_url)
generation_parameters["init_img"] = Image.open(init_img_path).convert('RGB')
def image_progress(sample, step):
if self.canceled.is_set():
raise CanceledException
nonlocal step_index
nonlocal generation_parameters
nonlocal progress
generation_messages = {
"txt2img": "common:statusGeneratingTextToImage",
"img2img": "common:statusGeneratingImageToImage",
"inpainting": "common:statusGeneratingInpainting",
"outpainting": "common:statusGeneratingOutpainting",
}
progress.set_current_step(step + 1)
progress.set_current_status(
f"{generation_messages[actual_generation_mode]}"
)
progress.set_current_status_has_steps(True)
if (
generation_parameters["progress_images"]
and step % generation_parameters["save_intermediates"] == 0
and step < generation_parameters["steps"] - 1
):
image = self.generate.sample_to_image(sample)
metadata = self.parameters_to_generated_image_metadata(
generation_parameters
)
command = parameters_to_command(generation_parameters)
(width, height) = image.size
path = self.save_result_image(
image,
command,
metadata,
self.intermediate_path,
step_index=step_index,
postprocessing=False,
)
step_index += 1
self.socketio.emit(
"intermediateResult",
{
"url": self.get_url_from_image_path(path),
"mtime": os.path.getmtime(path),
"metadata": metadata,
"width": width,
"height": height,
"generationMode": generation_parameters["generation_mode"],
"boundingBox": original_bounding_box,
},
)
if generation_parameters["progress_latents"]:
image = self.generate.sample_to_lowres_estimated_image(sample)
(width, height) = image.size
width *= 8
height *= 8
img_base64 = image_to_dataURL(image)
self.socketio.emit(
"intermediateResult",
{
"url": img_base64,
"isBase64": True,
"mtime": 0,
"metadata": {},
"width": width,
"height": height,
"generationMode": generation_parameters["generation_mode"],
"boundingBox": original_bounding_box,
},
)
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
def image_done(image, seed, first_seed, attention_maps_image=None):
if self.canceled.is_set():
raise CanceledException
nonlocal generation_parameters
nonlocal esrgan_parameters
nonlocal facetool_parameters
nonlocal progress
nonlocal prior_variations
nonlocal step_index
step_index = 1  # reset the intermediate-preview counter for the next result
"""
Tidy up after generation based on generation_mode
"""
# paste the inpainting image back onto the original
if generation_parameters["generation_mode"] == "inpainting":
image = paste_image_into_bounding_box(
Image.open(init_img_path),
image,
**generation_parameters["bounding_box"],
)
progress.set_current_status("common:statusGenerationComplete")
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
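# record seed and variation bookkeeping in the metadata so the image can be reproduced exactly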
all_parameters = generation_parameters
postprocessing = False
if (
"variation_amount" in all_parameters
and all_parameters["variation_amount"] > 0
):
first_seed = first_seed or seed
this_variation = [[seed, all_parameters["variation_amount"]]]
all_parameters["with_variations"] = (
prior_variations + this_variation
)
all_parameters["seed"] = first_seed
elif "with_variations" in all_parameters:
all_parameters["seed"] = first_seed
else:
all_parameters["seed"] = seed
if self.canceled.is_set():
raise CanceledException
if esrgan_parameters:
progress.set_current_status("common:statusUpscaling")
progress.set_current_status_has_steps(False)
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
image = self.esrgan.process(
image=image,
upsampler_scale=esrgan_parameters["level"],
strength=esrgan_parameters["strength"],
seed=seed,
)
postprocessing = True
all_parameters["upscale"] = [
esrgan_parameters["level"],
esrgan_parameters["strength"],
]
if self.canceled.is_set():
raise CanceledException
if facetool_parameters:
if facetool_parameters["type"] == "gfpgan":
progress.set_current_status("common:statusRestoringFacesGFPGAN")
elif facetool_parameters["type"] == "codeformer":
progress.set_current_status("common:statusRestoringFacesCodeFormer")
progress.set_current_status_has_steps(False)
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
if facetool_parameters["type"] == "gfpgan":
image = self.gfpgan.process(
image=image,
strength=facetool_parameters["strength"],
seed=seed,
)
elif facetool_parameters["type"] == "codeformer":
image = self.codeformer.process(
image=image,
strength=facetool_parameters["strength"],
fidelity=facetool_parameters["codeformer_fidelity"],
seed=seed,
device="cpu"
if str(self.generate.device) == "mps"
else self.generate.device,
)
all_parameters["codeformer_fidelity"] = facetool_parameters[
"codeformer_fidelity"
]
postprocessing = True
all_parameters["facetool_strength"] = facetool_parameters[
"strength"
]
all_parameters["facetool_type"] = facetool_parameters["type"]
progress.set_current_status("common:statusSavingImage")
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
# restore the stashed URLs and discard the paths; we are about to send the result to the client
all_parameters["init_img"] = (
init_img_url
if generation_parameters["generation_mode"] == "img2img"
else ""
)
if "init_mask" in all_parameters:
all_parameters["init_mask"] = "" # TODO: store the mask in metadata
if generation_parameters["generation_mode"] == "unifiedCanvas":
all_parameters["bounding_box"] = original_bounding_box
metadata = self.parameters_to_generated_image_metadata(all_parameters)
command = parameters_to_command(all_parameters)
(width, height) = image.size
generated_image_outdir = (
self.result_path
if generation_parameters["generation_mode"]
in ["txt2img", "img2img"]
else self.temp_image_path
)
path = self.save_result_image(
image,
command,
metadata,
generated_image_outdir,
postprocessing=postprocessing,
)
thumbnail_path = save_thumbnail(
image, os.path.basename(path), self.thumbnail_image_path
)
print(f'>> Image generated: "{path}"')
self.write_log_message(f'[Generated] "{path}": {command}')
if progress.total_iterations > progress.current_iteration:
progress.set_current_step(1)
progress.set_current_status("common:statusIterationComplete")
progress.set_current_status_has_steps(False)
else:
progress.mark_complete()
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
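# tokenize the prompt for display in the client; blended prompts have no single token sequence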
parsed_prompt, _ = get_prompt_structure(generation_parameters["prompt"])
tokens = (
    None
    if isinstance(parsed_prompt, Blend)
    else get_tokens_for_prompt(self.generate.model, parsed_prompt)
)
attention_maps_image_base64_url = (
    None
    if attention_maps_image is None
    else image_to_dataURL(attention_maps_image)
)
self.socketio.emit(
"generationResult",
{
"url": self.get_url_from_image_path(path),
"thumbnail": self.get_url_from_image_path(thumbnail_path),
"mtime": os.path.getmtime(path),
"metadata": metadata,
"dreamPrompt": command,
"width": width,
"height": height,
"boundingBox": original_bounding_box,
"generationMode": generation_parameters["generation_mode"],
"attentionMaps": attention_maps_image_base64_url,
"tokens": tokens,
},
)
eventlet.sleep(0)
progress.set_current_iteration(progress.current_iteration + 1)
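# diffusers pipelines report progress as a single PipelineIntermediateState,
# while the legacy samplers call back with (sample, step); adapt both to image_progress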
def diffusers_step_callback_adapter(*cb_args, **kwargs):
if isinstance(cb_args[0], PipelineIntermediateState):
progress_state: PipelineIntermediateState = cb_args[0]
return image_progress(progress_state.latents, progress_state.step)
else:
return image_progress(*cb_args, **kwargs)
self.generate.prompt2image(
**generation_parameters,
step_callback=diffusers_step_callback_adapter,
image_callback=image_done
)
except KeyboardInterrupt:
self.socketio.emit("processingCanceled")
raise
except CanceledException:
self.socketio.emit("processingCanceled")
except Exception as e:
print(e)
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
def parameters_to_generated_image_metadata(self, parameters):
try:
# top-level metadata minus `image` or `images`
metadata = self.get_system_config()
# remove any image keys not mentioned in RFC #266
rfc266_img_fields = [
"type",
"postprocessing",
"sampler",
"prompt",
"seed",
"variations",
"steps",
"cfg_scale",
"threshold",
"perlin",
"step_number",
"width",
"height",
"extra",
"seamless",
"hires_fix",
]
rfc_dict = {}
for key, value in parameters.items():
if key in rfc266_img_fields:
rfc_dict[key] = value
postprocessing = []
rfc_dict["type"] = parameters["generation_mode"]
# 'postprocessing' is either null or an array of postprocessing metadata objects
if "facetool_strength" in parameters:
facetool_parameters = {
"type": str(parameters["facetool_type"]),
"strength": float(parameters["facetool_strength"]),
}
if parameters["facetool_type"] == "codeformer":
facetool_parameters["fidelity"] = float(
parameters["codeformer_fidelity"]
)
postprocessing.append(facetool_parameters)
if "upscale" in parameters:
postprocessing.append(
{
"type": "esrgan",
"scale": int(parameters["upscale"][0]),
"strength": float(parameters["upscale"][1]),
}
)
rfc_dict["postprocessing"] = (
postprocessing if len(postprocessing) > 0 else None
)
# semantic drift: the RFC's "sampler" field maps to our "sampler_name"
rfc_dict["sampler"] = parameters["sampler_name"]
# display weighted subprompts (liable to change)
subprompts = split_weighted_subprompts(
parameters["prompt"], skip_normalize=True
)
subprompts = [{"prompt": x[0], "weight": x[1]} for x in subprompts]
rfc_dict["prompt"] = subprompts
# 'variations' should always exist and be an array, empty or consisting of {'seed': seed, 'weight': weight} pairs
variations = []
if "with_variations" in parameters:
variations = [
{"seed": x[0], "weight": x[1]}
for x in parameters["with_variations"]
]
rfc_dict["variations"] = variations
if rfc_dict["type"] == "img2img":
rfc_dict["strength"] = parameters["strength"]
rfc_dict["fit"] = parameters["fit"] # TODO: Noncompliant
rfc_dict["orig_hash"] = calculate_init_img_hash(
self.get_image_path_from_url(parameters["init_img"])
)
rfc_dict["init_image_path"] = parameters[
"init_img"
] # TODO: Noncompliant
metadata["image"] = rfc_dict
return metadata
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
def parameters_to_post_processed_image_metadata(
self, parameters, original_image_path
):
try:
current_metadata = retrieve_metadata(original_image_path)["sd-metadata"]
postprocessing_metadata = {}
"""
if we don't have an original image metadata to reconstruct,
need to record the original image and its hash
"""
if "image" not in current_metadata:
current_metadata["image"] = {}
orig_hash = calculate_init_img_hash(
self.get_image_path_from_url(original_image_path)
)
postprocessing_metadata["orig_path"] = (original_image_path,)
postprocessing_metadata["orig_hash"] = orig_hash
if parameters["type"] == "esrgan":
postprocessing_metadata["type"] = "esrgan"
postprocessing_metadata["scale"] = parameters["upscale"][0]
postprocessing_metadata["strength"] = parameters["upscale"][1]
elif parameters["type"] == "gfpgan":
postprocessing_metadata["type"] = "gfpgan"
postprocessing_metadata["strength"] = parameters["facetool_strength"]
elif parameters["type"] == "codeformer":
postprocessing_metadata["type"] = "codeformer"
postprocessing_metadata["strength"] = parameters["facetool_strength"]
postprocessing_metadata["fidelity"] = parameters["codeformer_fidelity"]
else:
raise ValueError(f"Invalid postprocessing type: {parameters['type']}")
if "postprocessing" in current_metadata["image"] and isinstance(
current_metadata["image"]["postprocessing"], list
):
current_metadata["image"]["postprocessing"].append(
postprocessing_metadata
)
else:
current_metadata["image"]["postprocessing"] = [postprocessing_metadata]
return current_metadata
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
def save_result_image(
self,
image,
command,
metadata,
output_dir,
step_index=None,
postprocessing=False,
):
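    """Writes the image plus its dream prompt and metadata to output_dir; returns the absolute path."""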
try:
pngwriter = PngWriter(output_dir)
number_prefix = pngwriter.unique_prefix()
uuid = uuid4().hex
truncated_uuid = uuid[:8]
seed = "unknown_seed"
if "image" in metadata:
if "seed" in metadata["image"]:
seed = metadata["image"]["seed"]
filename = f"{number_prefix}.{truncated_uuid}.{seed}"
if step_index is not None:
    filename += f".{step_index}"
if postprocessing:
    filename += ".postprocessed"
filename += ".png"
path = pngwriter.save_image_and_prompt_to_png(
image=image,
dream_prompt=command,
metadata=metadata,
name=filename,
)
return os.path.abspath(path)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
def make_unique_init_image_filename(self, name):
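    """Returns the filename with a random UUID hex inserted so repeated uploads do not collide."""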
try:
uuid = uuid4().hex
split = os.path.splitext(name)
name = f"{split[0]}.{uuid}{split[1]}"
return name
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
def calculate_real_steps(self, steps, strength, has_init_image):
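    """With an init image (img2img), only floor(strength * steps) denoising steps actually run."""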
return math.floor(strength * steps) if has_init_image else steps
def write_log_message(self, message):
"""Logs the filename and parameters used to generate or process that image to log file"""
try:
message = f"{message}\n"
with open(self.log_path, "a", encoding="utf-8") as file:
file.writelines(message)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
def get_image_path_from_url(self, url):
"""Given a url to an image used by the client, returns the absolute file path to that image"""
try:
if "init-images" in url:
return os.path.abspath(
os.path.join(self.init_image_path, os.path.basename(url))
)
elif "mask-images" in url:
return os.path.abspath(
os.path.join(self.mask_image_path, os.path.basename(url))
)
elif "intermediates" in url:
return os.path.abspath(
os.path.join(self.intermediate_path, os.path.basename(url))
)
elif "temp-images" in url:
return os.path.abspath(
os.path.join(self.temp_image_path, os.path.basename(url))
)
elif "thumbnails" in url:
return os.path.abspath(
os.path.join(self.thumbnail_image_path, os.path.basename(url))
)
else:
return os.path.abspath(
os.path.join(self.result_path, os.path.basename(url))
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
def get_url_from_image_path(self, path):
"""Given an absolute file path to an image, returns the URL that the client can use to load the image"""
try:
if "init-images" in path:
return os.path.join(self.init_image_url, os.path.basename(path))
elif "mask-images" in path:
return os.path.join(self.mask_image_url, os.path.basename(path))
elif "intermediates" in path:
return os.path.join(self.intermediate_url, os.path.basename(path))
elif "temp-images" in path:
return os.path.join(self.temp_image_url, os.path.basename(path))
elif "thumbnails" in path:
return os.path.join(self.thumbnail_image_url, os.path.basename(path))
else:
return os.path.join(self.result_url, os.path.basename(path))
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
def save_file_unique_uuid_name(self, bytes, name, path):
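    """Saves raw bytes to `path` under a UUID-suffixed version of `name`; returns the full file path."""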
try:
truncated_uuid = uuid4().hex[:8]
stem, ext = os.path.splitext(name)
name = f"{stem}.{truncated_uuid}{ext}"
file_path = os.path.join(path, name)
os.makedirs(os.path.dirname(file_path), exist_ok=True)
# a context manager guarantees the handle is closed even if the write fails
# (note: `bytes` here is the method parameter, shadowing the builtin)
with open(file_path, "wb") as new_file:
    new_file.write(bytes)
return file_path
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
class Progress:
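    """Tracks step and iteration progress for one generation job and serializes it for the frontend."""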
def __init__(self, generation_parameters=None):
self.current_step = 1
self.total_steps = (
self._calculate_real_steps(
steps=generation_parameters["steps"],
strength=generation_parameters["strength"]
if "strength" in generation_parameters
else None,
has_init_image="init_img" in generation_parameters,
)
if generation_parameters
else 1
)
self.current_iteration = 1
self.total_iterations = (
generation_parameters["iterations"] if generation_parameters else 1
)
self.current_status = "common:statusPreparing"
self.is_processing = True
self.current_status_has_steps = False
self.has_error = False
def set_current_step(self, current_step):
self.current_step = current_step
def set_total_steps(self, total_steps):
self.total_steps = total_steps
def set_current_iteration(self, current_iteration):
self.current_iteration = current_iteration
def set_total_iterations(self, total_iterations):
self.total_iterations = total_iterations
def set_current_status(self, current_status):
self.current_status = current_status
def set_is_processing(self, is_processing):
self.is_processing = is_processing
def set_current_status_has_steps(self, current_status_has_steps):
self.current_status_has_steps = current_status_has_steps
def set_has_error(self, has_error):
self.has_error = has_error
def mark_complete(self):
self.current_status = "common:statusProcessingComplete"
self.current_step = 0
self.total_steps = 0
self.current_iteration = 0
self.total_iterations = 0
self.is_processing = False
def to_formatted_dict(self):
return {
"currentStep": self.current_step,
"totalSteps": self.total_steps,
"currentIteration": self.current_iteration,
"totalIterations": self.total_iterations,
"currentStatus": self.current_status,
"isProcessing": self.is_processing,
"currentStatusHasSteps": self.current_status_has_steps,
"hasError": self.has_error,
}
def _calculate_real_steps(self, steps, strength, has_init_image):
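    """With an init image, only floor(strength * steps) denoising steps actually run."""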
return math.floor(strength * steps) if has_init_image else steps
class CanceledException(Exception):
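    """Raised from generation callbacks to abort an in-flight job."""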
pass
"""
Returns a copy an image, cropped to a bounding box.
"""
def copy_image_from_bounding_box(
image: ImageType, x: int, y: int, width: int, height: int
) -> ImageType:
with image as im:
bounds = (x, y, x + width, y + height)
im_cropped = im.crop(bounds)
return im_cropped
"""
Converts a base64 image dataURL into an image.
The dataURL is split on the first commma.
"""
def dataURL_to_image(dataURL: str) -> ImageType:
image = Image.open(
io.BytesIO(
base64.decodebytes(
bytes(
dataURL.split(",", 1)[1],
"utf-8",
)
)
)
)
return image
"""
Converts an image into a base64 image dataURL.
"""
def image_to_dataURL(image: ImageType) -> str:
buffered = io.BytesIO()
image.save(buffered, format="PNG")
image_base64 = "data:image/png;base64," + base64.b64encode(
buffered.getvalue()
).decode("UTF-8")
return image_base64
"""
Converts a base64 image dataURL into bytes.
The dataURL is split on the first commma.
"""
def dataURL_to_bytes(dataURL: str) -> bytes:
return base64.decodebytes(
bytes(
dataURL.split(",", 1)[1],
"utf-8",
)
)
"""
Pastes an image onto another with a bounding box.
"""
def paste_image_into_bounding_box(
recipient_image: ImageType,
donor_image: ImageType,
x: int,
y: int,
width: int,
height: int,
) -> ImageType:
with recipient_image as im:
bounds = (x, y, x + width, y + height)
im.paste(donor_image, bounds)
return recipient_image
"""
Saves a thumbnail of an image, returning its path.
"""
def save_thumbnail(
image: ImageType,
filename: str,
path: str,
size: int = 256,
) -> str:
base_filename = os.path.splitext(filename)[0]
thumbnail_path = os.path.join(path, base_filename + ".webp")
if os.path.exists(thumbnail_path):
return thumbnail_path
thumbnail_width = size
thumbnail_height = round(size * (image.height / image.width))
image_copy = image.copy()
image_copy.thumbnail(size=(thumbnail_width, thumbnail_height))
image_copy.save(thumbnail_path, "WEBP")
return thumbnail_path