Sergey Borisov
42356ec866
Add ControlNet support to denoise
2024-07-21 20:01:30 +03:00
Sergey Borisov
1748848b7b
Ruff fixes
2024-07-21 18:37:20 +03:00
Sergey Borisov
5772965f09
Fix slightly different output with old backend
2024-07-21 18:31:30 +03:00
Sergey Borisov
e046e60e1c
Add FreeU support to denoise
2024-07-21 18:31:10 +03:00
Sergey Borisov
9a1420280e
Add rescale cfg support to denoise
2024-07-21 17:33:43 +03:00
Mary Hipp
930ff559e4
add sdxl tile to starter models
2024-07-19 16:49:33 -04:00
Ryan Dick
78d2b1b650
Merge branch 'main' into stalker-backend_base
2024-07-19 16:25:20 -04:00
Sergey Borisov
39e10d894c
Add invocation cancellation logic to patchers
2024-07-19 23:17:01 +03:00
Ryan Dick
e16faa6370
Add gradient blending to tile seams in MultiDiffusion.
2024-07-19 13:05:50 -07:00
Ryan Dick
83a86abce2
Add unit tests for ExtensionsManager and ExtensionBase.
2024-07-19 14:15:46 -04:00
Sergey Borisov
0c56d4a581
Ryan's suggested changes to extension manager/extensions
...
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-18 23:49:44 +03:00
Lincoln Stein
97a7f51721
don't use cpu state_dict for model unpatching when executing on cpu (#6631)
...
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-07-18 15:34:01 -04:00
StAlKeR7779
710dc6b487
Merge branch 'main' into stalker7779/backend_base
2024-07-18 01:08:04 +03:00
Sergey Borisov
2ef3b49a79
Add run cancelling logic to extension manager
2024-07-17 04:39:15 +03:00
Sergey Borisov
3f79467f7b
Ruff format
2024-07-17 04:24:45 +03:00
Sergey Borisov
2c2ec8f0bc
Comments, a bit of refactoring
2024-07-17 04:20:31 +03:00
Sergey Borisov
79e35bd0d3
Minor fixes
2024-07-17 03:48:37 +03:00
Sergey Borisov
137202b77c
Remove patch_unet logic for now
2024-07-17 03:40:27 +03:00
Sergey Borisov
03e22c257b
Convert conditioning_mode to enum
2024-07-17 03:37:11 +03:00
Sergey Borisov
ae6d4fbc78
Move out _concat_conditionings_for_batch submethods
2024-07-17 03:31:26 +03:00
Sergey Borisov
cd1bc1595a
Rename sequential as private variable
2024-07-17 03:24:11 +03:00
Ryan Dick
f866b49255
Add some ESRGAN and SwinIR upscale models to the starter models list.
2024-07-16 15:55:10 -04:00
Sergey Borisov
b7c6c63005
Added some comments
2024-07-16 22:52:44 +03:00
Ryan Dick
6b0ca88177
Merge branch 'main' into ryan/spandrel-upscale-tiling
2024-07-16 15:40:14 -04:00
Ryan Dick
81991e072b
Merge branch 'main' into ryan/spandrel-upscale
2024-07-16 15:14:08 -04:00
Sergey Borisov
cec345cb5c
Change attention processor apply logic
2024-07-16 20:03:29 +03:00
Sergey Borisov
608cbe3f5c
Separate inputs in denoise context
2024-07-16 19:30:29 +03:00
psychedelicious
38343917f8
fix(backend): revert non-blocking device transfer
...
In #6490 we enabled non-blocking torch device transfers throughout the model manager's memory management code. When using this torch feature, torch attempts to wait until the tensor transfer has completed before allowing any access to the tensor. Theoretically, that should make this a safe feature to use.
This provides a small performance improvement but causes race conditions in some situations. Specific platforms/systems are affected, and complicated data dependencies can make this unsafe.
- Intermittent black images on MPS devices - reported on discord and #6545, fixed with special handling in #6549.
- Intermittent OOMs and black images on a P4000 GPU on Windows - reported in #6613, fixed in this commit.
On my system, I haven't experienced any issues with generation, but targeted testing of non-blocking ops did expose a race condition when moving tensors from CUDA to CPU.
One workaround is to use torch streams with manual sync points. Our application logic is complicated enough that this would be a lot of work and feels ripe for edge cases and missed spots.
Much safer is to fully revert non-blocking - which is what this change does.
2024-07-16 08:59:42 +10:00
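A minimal sketch (assumptions only, not InvokeAI code) of the race described in the commit above: a non-blocking CUDA-to-CPU copy can return before the data has actually landed in host memory, so reading the CPU tensor without synchronizing first may observe stale or garbage values.

```python
import torch

def copy_to_cpu_non_blocking(t: torch.Tensor) -> torch.Tensor:
    # The copy is queued on the CUDA stream and .to() returns immediately;
    # the returned CPU tensor may not be fully populated yet.
    cpu_t = t.to("cpu", non_blocking=True)
    # Without this sync, reading cpu_t here is a race.
    torch.cuda.synchronize()
    return cpu_t

def copy_to_cpu_blocking(t: torch.Tensor) -> torch.Tensor:
    # What this commit reverts to: a plain blocking copy, safe to read at once.
    return t.to("cpu")

if torch.cuda.is_available():
    x = torch.randn(4096, 4096, device="cuda")
    assert torch.equal(copy_to_cpu_blocking(x), copy_to_cpu_non_blocking(x))
```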
Sergey Borisov
9f088d1bf5
Multiple small fixes
2024-07-16 00:51:25 +03:00
Sergey Borisov
fd8d1c12d4
Remove 'del' operator overload
2024-07-16 00:43:32 +03:00
Sergey Borisov
d623bd429b
Fix conditionings logic
2024-07-16 00:31:56 +03:00
psychedelicious
28e79c4c5e
chore: ruff
...
Looks like an upstream change to ruff resulted in this file being a violation.
2024-07-15 14:05:04 +10:00
Sergey Borisov
499e4d4fde
Add preview extension to check logic
2024-07-13 00:45:04 +03:00
Sergey Borisov
e961dd1dec
Remove remains of priority logic
2024-07-13 00:44:21 +03:00
Sergey Borisov
7e00526999
Remove overrides logic for now
2024-07-13 00:28:56 +03:00
Sergey Borisov
3a9dda9177
Renames
2024-07-12 22:44:00 +03:00
Sergey Borisov
bd8ae5d896
Simplify guidance modes
2024-07-12 22:01:37 +03:00
Sergey Borisov
87e96e1be2
Rename modifiers to callbacks, convert order to int, and slightly unify injection points
2024-07-12 22:01:05 +03:00
Sergey Borisov
0bc60378d3
Slightly rework conditioning conversion to UNet kwargs
2024-07-12 20:43:32 +03:00
Sergey Borisov
9cc852cf7f
Base code from draft PR
2024-07-12 20:31:26 +03:00
Ryan Dick
ab775726b7
Add tiling support to the SpandrelImageToImage node.
2024-07-10 14:25:19 -04:00
Ryan Dick
650902dc29
Fix broken unit test caused by non-existent model path.
2024-07-10 13:59:17 -04:00
Ryan Dick
7b5d4935b4
Merge branch 'main' into ryan/spandrel-upscale
2024-07-09 13:47:11 -04:00
Ryan Dick
af63c538ed
Demote error log to warning for models treated as having size 0.
2024-07-09 08:35:43 -04:00
Ryan Dick
0ce6ec634d
Do not assign the result of SpandrelImageToImageModel.load_from_file(...) during probe to ensure that the model is immediately gc'd.
2024-07-05 14:05:12 -04:00
Ryan Dick
35f8781ea2
Fix static type errors with SCHEDULER_NAME_VALUES. And, avoid bi-directional cross-directory imports, which contribute to circular import issues.
2024-07-05 07:38:35 -07:00
Ryan Dick
36202d6d25
Delete unused duplicate libc_util.py file. The active version is at invokeai/backend/model_manager/libc_util.py.
2024-07-04 10:30:40 -04:00
Ryan Dick
1d449097cc
Apply ruff rule to disallow all relative imports.
2024-07-04 09:35:37 -04:00
Ryan Dick
9da5925287
Add ruff rule to disallow relative parent imports.
2024-07-04 09:35:37 -04:00
Ryan Dick
414750a45d
Update calc_model_size_by_data(...) to handle all expected model types, and to log an error if an unexpected model type is received.
2024-07-04 09:08:25 -04:00
Ryan Dick
a405f14ea2
Fix SpandrelImageToImageModel size calculation for the model cache.
2024-07-03 16:38:16 -04:00
Ryan Dick
114320ee69
(minor) typo
2024-07-03 16:28:21 -04:00
Ryan Dick
6161aa73af
Move pil_to_tensor() and tensor_to_pil() utilities to the SpandrelImageToImage class.
2024-07-03 16:28:21 -04:00
Ryan Dick
1ab20f43c8
Tidy spandrel model probe logic, and document the reasons behind the current implementation.
2024-07-03 16:28:21 -04:00
Ryan Dick
29c8ddfb88
WIP - A bunch of boilerplate to support Spandrel Image-to-Image models throughout the model manager and the frontend.
2024-07-03 16:28:21 -04:00
Ryan Dick
95079dc7d4
Use a ModelIdentifierField to identify the spandrel model in the UpscaleSpandrelInvocation.
2024-07-03 16:28:21 -04:00
Ryan Dick
2a1514272f
Set the dtype correctly for SpandrelImageToImageModels when they are loaded.
2024-07-03 16:28:21 -04:00
Ryan Dick
59ce9cf41c
WIP - Begin to integrate SpandrelImageToImageModel type into the model manager.
2024-07-03 16:28:21 -04:00
Ryan Dick
e6abea7bc5
(minor) Remove redundant else clause on a for-loop with no break statement.
2024-07-03 16:28:21 -04:00
Ryan Dick
c335f92345
(minor) simplify startswith(...) syntax.
2024-07-03 16:28:21 -04:00
Ryan Dick
e4813f800a
Update calc_model_size_by_data(...) to handle all expected model types, and to log an error if an unexpected model type is received.
2024-07-02 21:51:45 -04:00
Ryan Dick
3752509066
Expose the VAE tile_size on the VAE encode and decode invocations.
2024-07-02 09:07:03 -04:00
Ryan Dick
79640ba14e
Add context manager for overriding VAE tiling params.
2024-07-02 09:07:03 -04:00
Kent Keirsey
5df2a79549
Update starter models
2024-06-28 17:49:45 +10:00
Kent Keirsey
10b9088312
update controlnet starter models
2024-06-28 17:49:45 +10:00
Lincoln Stein
3e0fb45dd7
Load single-file checkpoints directly without conversion (#6510)
...
* use model_class.load_singlefile() instead of converting; works, but performance is poor
* adjust the convert api - not right just yet
* working, needs sql migrator update
* rename migration_11 before conflict merge with main
* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* implement lightweight version-by-version config migration
* simplified config schema migration code
* associate sdxl config with sdxl VAEs
* remove use of original_config_file in load_single_file()
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-27 17:31:28 -04:00
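For illustration only (not the code from this PR): recent diffusers releases expose a `from_single_file(...)` API that loads such checkpoints directly, roughly in the spirit of the change described above. The paths and dtype below are placeholder assumptions.

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load a standalone SDXL VAE checkpoint without converting it to a diffusers folder.
vae = AutoencoderKL.from_single_file(
    "models/sdxl_vae.safetensors",
    torch_dtype=torch.float16,
)

# Load a single-file SDXL checkpoint and attach the VAE, again without conversion.
pipe = StableDiffusionXLPipeline.from_single_file(
    "models/sd_xl_base_1.0.safetensors",
    vae=vae,
    torch_dtype=torch.float16,
)
```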
Ryan Dick
14775cc9c4
ruff format
2024-06-27 09:45:13 -04:00
psychedelicious
c7562dd6c0
fix(backend): mps should not use non_blocking
...
We can get black outputs when moving tensors from CPU to MPS. It appears MPS to CPU is fine. See:
- https://github.com/pytorch/pytorch/issues/107455
- https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28
Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add `get_non_blocking` static method on `TorchDevice`. This utility takes a torch device and returns the flag to be used for non_blocking when moving a tensor to the device provided.
- Update model patching and caching APIs to use this new utility.
Fixes: #6545
2024-06-27 19:15:23 +10:00
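A hedged sketch of the `get_non_blocking` utility described in the commit above (the class and method names come from the commit message, but the body is an assumption, not the project's exact implementation): MPS targets always get a blocking transfer, while other devices keep the small non-blocking speedup.

```python
import torch

class TorchDevice:
    @staticmethod
    def get_non_blocking(to_device: torch.device) -> bool:
        """Return the non_blocking flag to use when moving a tensor to to_device."""
        # CPU -> MPS non-blocking copies can yield black/garbage outputs, so
        # force blocking transfers for MPS; other device types are fine.
        return False if to_device.type == "mps" else True

# Usage:
# device = torch.device("mps")
# tensor = tensor.to(device, non_blocking=TorchDevice.get_non_blocking(device))
```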
Ryan Dick
9a3b8c6fcb
Fix handling of init_timestep in StableDiffusionGeneratorPipeline and improve its documentation.
2024-06-26 12:51:51 -04:00
Ryan Dick
bd74b84cc5
Revert "Remove the redundant init_timestep parameter that was being passed around. It is simply the first element of the timesteps array."
...
This reverts commit fa40061eca.
2024-06-26 12:51:51 -04:00
Brandon Rising
dc23bebebf
Run ruff
2024-06-26 21:46:59 +10:00
Kent Keirsey
38b6f90c02
Update prevention exception message
2024-06-26 21:46:59 +10:00
Ryan Dick
cd9dfefe3c
Fix inpainting mask shape assertions.
2024-06-25 11:31:52 -07:00
Ryan Dick
e1af78c702
Make the tile_overlap input to MultiDiffusion *strictly* control the amount of overlap rather than being a lower bound.
2024-06-25 11:31:52 -07:00
Ryan Dick
c5588e1ff7
Add TODO comment explaining why some schedulers do not interact well with MultiDiffusion.
2024-06-25 11:31:52 -07:00
Ryan Dick
07ac292680
Consolidate _region_step() function - the separation wasn't really adding any value.
2024-06-25 11:31:52 -07:00
Ryan Dick
7c032ea604
(minor) Fix some documentation typos.
2024-06-25 11:31:52 -07:00
Ryan Dick
fa40061eca
Remove the redundant init_timestep parameter that was being passed around. It is simply the first element of the timesteps array.
2024-06-25 11:31:52 -07:00
Ryan Dick
25067e4f0d
Delete rough notes.
2024-06-25 11:31:52 -07:00
Ryan Dick
fb0aaa3e6d
Fix advanced scheduler behaviour in MultiDiffusionPipeline.
2024-06-25 11:31:52 -07:00
Ryan Dick
c22526b9d0
Fix handling of stateful schedulers in MultiDiffusionPipeline.
2024-06-25 11:31:52 -07:00
Ryan Dick
c881882f73
Connect TiledMultiDiffusionDenoiseLatents to the MultiDiffusionPipeline backend.
2024-06-25 11:31:52 -07:00
Ryan Dick
36473fc52a
Remove regional conditioning logic from MultiDiffusionPipeline - it is not yet supported.
2024-06-25 11:31:52 -07:00
Ryan Dick
b9964ecc4a
Initial (untested) implementation of MultiDiffusionPipeline.
2024-06-25 11:31:52 -07:00
Ryan Dick
051af802fe
Remove inpainting support from MultiDiffusionPipeline.
2024-06-25 11:31:52 -07:00
Ryan Dick
3ff2e558d9
Remove IP-Adapter and T2I-Adapter support from MultiDiffusionPipeline.
2024-06-25 11:31:52 -07:00
Ryan Dick
fc187c9253
Document plan for the rest of the MultiDiffusion implementation.
2024-06-25 11:31:52 -07:00
Ryan Dick
605f460c7d
Add detailed docstring to latents_from_embeddings().
2024-06-25 11:31:52 -07:00
Ryan Dick
60d1e686d8
Copy StableDiffusionGeneratorPipeline as a starting point for a new MultiDiffusionPipeline.
2024-06-25 11:31:52 -07:00
Ryan Dick
22704dd542
Simplify handling of inpainting models. Improve the in-code documentation around inpainting.
2024-06-25 11:31:52 -07:00
Ryan Dick
875673c9ba
Minor tidying of latents_from_embeddings(...).
2024-06-25 11:31:52 -07:00
Ryan Dick
f604575862
Consolidate latents_from_embeddings(...) and generate_latents_from_embeddings(...) into a single function.
2024-06-25 11:31:52 -07:00
Ryan Dick
60ac937698
Improve clarity of comments regarding when 'noise' and 'latents' are expected to be set.
2024-06-25 11:31:52 -07:00
Ryan Dick
1e41949a02
Fix static check errors on imports in diffusers_pipeline.py.
2024-06-25 11:31:52 -07:00
Ryan Dick
5f0e330ed2
Remove a condition for handling inpainting models that never resolves to True. The same logic is already applied earlier by AddsMaskLatents.
2024-06-25 11:31:52 -07:00
Ryan Dick
9dd779b414
Add clarifying comment to explain why noise might be None in latents_from_embedding().
2024-06-25 11:31:52 -07:00
Ryan Dick
fa183025ac
Remove unused are_like_tensors() function.
2024-06-25 11:31:52 -07:00
Ryan Dick
d3c85aa91a
Remove unused StableDiffusionGeneratorPipeline.use_ip_adapter member.
2024-06-25 11:31:52 -07:00
Ryan Dick
82619602a5
Remove unused StableDiffusionGeneratorPipeline.control_model.
2024-06-25 11:31:52 -07:00
Ryan Dick
196f3b721d
Stricter typing for the is_gradient_mask: bool.
2024-06-25 11:31:52 -07:00
Ryan Dick
244c28859d
Fix typing of control_data to reflect that it can be None.
2024-06-25 11:31:52 -07:00
Ryan Dick
40ae174c41
Fix typing of timesteps and init_timestep.
2024-06-25 11:31:52 -07:00
Ryan Dick
afaebdf151
Fix typing to reflect that the callback arg to latents_from_embeddings is never None.
2024-06-25 11:31:52 -07:00
Ryan Dick
d661517d94
Move seed above optional params.
2024-06-25 11:31:52 -07:00
Ryan Dick
82a69a54ac
Simplify handling of AddsMaskGuidance, and fix some related type errors.
2024-06-25 11:31:52 -07:00
Ryan Dick
ffc28176fe
Remove unused num_inference_steps.
2024-06-25 11:31:52 -07:00
Ryan Dick
d08e405017
Fix ControlNetModel type hint import source.
2024-06-25 11:31:52 -07:00
Lincoln Stein
b03073d888
[MM] Add support for probing and loading SDXL VAE checkpoint files (#6524)
...
* add support for probing and loading SDXL VAE checkpoint files
* broaden regexp probe for SDXL VAEs
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-06-20 02:57:27 +00:00
Ryan Dick
8e47e005a7
Tidy SilenceWarnings context manager:
...
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a decorator
- Remove duplicate implementation
- Check the initial verbosity on __enter__() rather than __init__()
2024-06-18 15:06:22 -04:00
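A minimal sketch of the pattern described in the commit above, under stated assumptions (stdlib logging stands in for the real transformers/diffusers verbosity handling): subclassing contextlib.ContextDecorator lets one class serve as both a context manager and a decorator, and the prior state is captured in __enter__() rather than __init__().

```python
import logging
import warnings
from contextlib import ContextDecorator

class SilenceWarnings(ContextDecorator):
    """Silence library warnings for the duration of a `with` block or a decorated call."""

    def __enter__(self):
        # Capture the current state on entry (not in __init__), as the commit notes.
        self._warning_ctx = warnings.catch_warnings()
        self._warning_ctx.__enter__()
        warnings.simplefilter("ignore")
        self._prev_level = logging.getLogger("diffusers").level
        logging.getLogger("diffusers").setLevel(logging.ERROR)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        logging.getLogger("diffusers").setLevel(self._prev_level)
        self._warning_ctx.__exit__(exc_type, exc_val, exc_tb)
        return False

# Works as a context manager...
with SilenceWarnings():
    warnings.warn("deprecated")  # placeholder for a noisy model load

# ...and as a decorator.
@SilenceWarnings()
def load_model() -> None:
    warnings.warn("deprecated")
```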
Brandon Rising
63a7e19dbf
Run ruff
2024-06-18 10:38:29 -04:00
Brandon Rising
fbc5a8ec65
Ignore validation on improperly formatted hashes (pytest)
2024-06-18 10:38:29 -04:00
Brandon Rising
8ce6e4540e
Run ruff
2024-06-18 10:38:29 -04:00
Brandon Rising
f14f377ede
Update validator list
2024-06-18 10:38:29 -04:00
Brandon Rising
1925f83f5e
Update validator list
2024-06-18 10:38:29 -04:00
Brandon Rising
3a5ad6d112
Update validator list
2024-06-18 10:38:29 -04:00
Brandon Rising
41a6bb45f3
Initial functionality
2024-06-18 10:38:29 -04:00
Lincoln Stein
a3cb5da130
Improve RAM<->VRAM memory copy performance in LoRA patching and elsewhere (#6490)
...
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* do not save original weights if there is a CPU copy of state dict
* Update invokeai/backend/model_manager/load/load_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* documentation fixes requested during penultimate review
* add non-blocking=True parameters to several torch.nn.Module.to() calls, for slight performance increases
* fix ruff errors
* prevent crash on non-cuda-enabled systems
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-13 17:10:03 +00:00
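A hedged sketch of the "do not save original weights if there is a CPU copy of state dict" idea from the PR body above (function and argument names are illustrative, not InvokeAI's API): when a RAM copy is available, unpatching becomes a single bulk reload instead of saving and restoring every touched weight.

```python
from typing import Dict, Optional

import torch

def patch_weights(model: torch.nn.Module,
                  patches: Dict[str, torch.Tensor],
                  cpu_state_dict: Optional[Dict[str, torch.Tensor]] = None) -> Dict[str, torch.Tensor]:
    """Apply additive weight patches; save originals only when no CPU copy exists."""
    sd = model.state_dict()
    originals: Dict[str, torch.Tensor] = {}
    if cpu_state_dict is None:
        # No RAM copy: remember every weight we modify so it can be restored later.
        originals = {k: sd[k].detach().clone() for k in patches}
    for key, delta in patches.items():
        sd[key].add_(delta.to(device=sd[key].device, dtype=sd[key].dtype))
    return originals

def unpatch_weights(model: torch.nn.Module,
                    originals: Dict[str, torch.Tensor],
                    cpu_state_dict: Optional[Dict[str, torch.Tensor]] = None) -> None:
    if cpu_state_dict is not None:
        # Cheap path: restore pristine weights from the CPU copy in one go.
        model.load_state_dict(cpu_state_dict)
    else:
        sd = model.state_dict()
        for key, original in originals.items():
            sd[key].copy_(original)
```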
psychedelicious
fde58ce0a3
Merge remote-tracking branch 'origin/main' into lstein/feat/simple-mm2-api
2024-06-07 14:23:41 +10:00
Lincoln Stein
f81b8bc9f6
add support for generic loading of diffusers directories
2024-06-07 13:54:30 +10:00
Lincoln Stein
2871676f79
LoRA patching optimization (#6439)
...
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* do not save original weights if there is a CPU copy of state dict
* Update invokeai/backend/model_manager/load/load_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* documentation fixes added during penultimate review
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-06 13:53:35 +00:00
psychedelicious
e7513f6088
docs(mm): add comment in move_model_to_device
2024-06-03 10:56:04 +10:00
Lincoln Stein
2276f327e5
Merge branch 'main' into lstein/feat/simple-mm2-api
2024-06-02 09:45:31 -04:00
Lincoln Stein
21a60af881
when unlocking models, offload_unlocked_models should prune to vram limit only (#6450)
...
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-05-29 03:01:21 +00:00
Ryan Dick
829b9ad66b
Add a callout about the hackiness of dropping tokens in the TextualInversionManager.
2024-05-28 05:11:54 -07:00
Ryan Dick
3aa1c8d3a8
Update TextualInversionManager for compatibility with the latest transformers release. See https://github.com/invoke-ai/InvokeAI/issues/6445.
2024-05-28 05:11:54 -07:00
Ryan Dick
994c61b67a
Add docs to TextualInversionManager and improve types. No changes to functionality.
2024-05-28 05:11:54 -07:00
Lincoln Stein
34e1eb19f9
merge with main and resolve conflicts
2024-05-27 22:20:34 -04:00
Lincoln Stein
532f82cb97
Optimize RAM to VRAM transfer (#6312)
...
* avoid copying model back from cuda to cpu
* handle models that don't have state dicts
* add assertions that models need a `device()` method
* do not rely on torch.nn.Module having the device() method
* apply all patches after model is on the execution device
* fix model patching in latents too
* log patched tokenizer
* closes #6375
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-05-24 17:06:09 +00:00
Lincoln Stein
987ee704a1
Merge branch 'main' into lstein/feat/simple-mm2-api
2024-05-17 22:54:03 -04:00
Lincoln Stein
f29c406fed
refactor model_install to work with refactored download queue
2024-05-13 22:49:15 -04:00
psychedelicious
93da75209c
feat(nodes): use new blur_if_nsfw
method
2024-05-14 07:23:38 +10:00
psychedelicious
9c819f0fd8
fix(nodes): fix nsfw checker model download
2024-05-14 07:23:38 +10:00
Lincoln Stein
0bf14c2830
add multifile_download() method to download service
2024-05-12 20:14:00 -06:00
blessedcoolant
da61396b1c
cleanup: remove unused older seamless code
2024-05-13 08:11:08 +10:00
blessedcoolant
6c9fb617dc
fix: fix seamless
2024-05-13 08:11:08 +10:00
Lincoln Stein
b48d4a049d
bad implementation of diffusers folder download
2024-05-08 21:21:01 -07:00
Lincoln Stein
e9a20051bd
refactor DWOpenPose and add type hints
2024-05-03 18:08:53 -04:00
Lincoln Stein
3b64e7a1fd
Merge branch 'main' into lstein/feat/simple-mm2-api
2024-05-02 21:20:35 -04:00
Lincoln Stein
49c84cd423
Merge branch 'main' into lstein/feat/simple-mm2-api
2024-04-30 18:13:42 -04:00
blessedcoolant
39ab4dd83e
Merge branch 'main' into pr/6086
2024-05-01 00:37:06 +05:30
psychedelicious
2d7b8c2a1b
fix(backend): do not round image dims to 64 in controlnet processor resize
...
Rounding the dims results in control images that are subtly different from the input. We round to the nearest 8px later; there's no need to round now.
2024-04-30 08:10:59 -04:00
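For reference, the later "nearest 8px" rounding mentioned above looks roughly like this (an illustrative helper, not the processor's actual code):

```python
def round_to_multiple(value: int, multiple: int = 8) -> int:
    # Round to the nearest multiple (e.g. 517 -> 520, 1023 -> 1024).
    return int(round(value / multiple) * multiple)
```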
psychedelicious
1fe90c357c
feat(backend): lift managed model loading out of depthanything class
2024-04-29 08:56:00 +10:00
psychedelicious
fcb071f30c
feat(backend): lift managed model loading out of lama class
2024-04-29 08:12:51 +10:00
Lincoln Stein
57c831442e
fix safe_filename() on windows
2024-04-28 14:42:40 -04:00
Lincoln Stein
7c39929758
support VRAM caching of dict models that lack to()
2024-04-28 13:41:06 -04:00
Lincoln Stein
a26667d3ca
make download and convert cache keys safe for filename length
2024-04-28 12:24:36 -04:00
Lincoln Stein
bb04f496e0
Merge branch 'main' into lstein/feat/simple-mm2-api
2024-04-28 11:33:26 -04:00
Lincoln Stein
70903ef057
refactor load_ckpt_from_url()
2024-04-28 11:33:23 -04:00
psychedelicious
241a1fdb57
feat(mm): support sdxl ckpt inpainting models
...
There are only a couple SDXL inpainting models, and my tests indicate they are not as good as SD1.5 inpainting, but at least we support them now.
- Add the config file. This matches what is used in A1111. The only difference from the non-inpainting SDXL config is the number of in-channels.
- Update the legacy config maps to use this config file.
2024-04-28 12:57:27 +10:00
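To make the in-channels difference concrete (this follows the general Stable Diffusion inpainting convention; it is not taken from the config file itself): a standard SDXL UNet takes 4 latent channels, while an inpainting UNet takes 9 - the noisy latents, the masked-image latents, and a one-channel mask concatenated together.

```python
import torch

STANDARD_IN_CHANNELS = 4
INPAINT_IN_CHANNELS = 9  # 4 noisy latents + 4 masked-image latents + 1 mask

def build_inpaint_unet_input(latents: torch.Tensor,
                             masked_image_latents: torch.Tensor,
                             mask: torch.Tensor) -> torch.Tensor:
    # latents: (B, 4, H, W), masked_image_latents: (B, 4, H, W), mask: (B, 1, H, W)
    x = torch.cat([latents, masked_image_latents, mask], dim=1)
    assert x.shape[1] == INPAINT_IN_CHANNELS
    return x
```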
Lincoln Stein
d72f272f16
Address change requests in first round of PR reviews.
...
Pending:
- Move model install calls into model manager and create passthrus in invocation_context.
- Consider splitting load_model_from_url() into a call to get the path and a call to load the path.
2024-04-24 23:53:30 -04:00