80a67572f1
Fix invocation name of tiled_multi_diffusion_denoise_latents.
2024-06-25 11:31:52 -07:00
60ac937698
Improve clarity of comments regarding when 'noise' and 'latents' are expected to be set.
2024-06-25 11:31:52 -07:00
196f3b721d
Stricter typing for is_gradient_mask (bool).
2024-06-25 11:31:52 -07:00
40ae174c41
Fix typing of timesteps and init_timestep.
2024-06-25 11:31:52 -07:00
ffc28176fe
Remove unused num_inference_steps.
2024-06-25 11:31:52 -07:00
230e205541
WIP TiledMultiDiffusionDenoiseLatents. Updated parameter list and first half of the logic.
2024-06-25 11:31:52 -07:00
7e94350351
Tidy DenoiseLatentsInvocation.prep_control_data(...) and fix some type errors.
2024-06-25 11:31:52 -07:00
c4e8549c73
Make DenoiseLatentsInvocation.prep_control_data(...) a staticmethod so that it can be called externally.
2024-06-25 11:31:52 -07:00
350a210835
Copy TiledStableDiffusionRefineInvocation as a starting point for TiledMultiDiffusionDenoiseLatents.py
2024-06-25 11:31:52 -07:00
ed781dbb0c
Change tiling strategy to make TiledStableDiffusionRefineInvocation work with more tile shapes and overlaps.
2024-06-25 11:31:52 -07:00
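To make the tiling change above concrete, here is a minimal, hypothetical sketch of computing overlapping tile start coordinates along one axis so that arbitrary extents, tile sizes, and overlaps are covered. It is illustrative only and is not the strategy actually used by TiledStableDiffusionRefineInvocation.

```python
def calc_tile_starts(extent: int, tile_size: int, overlap: int) -> list[int]:
    """Start coordinates (along one axis) of tiles of `tile_size` covering
    [0, extent), with at least `overlap` pixels shared between neighbours.
    Assumes 0 <= overlap < tile_size."""
    if tile_size >= extent:
        return [0]
    stride = tile_size - overlap
    starts = list(range(0, extent - tile_size, stride))
    starts.append(extent - tile_size)  # clamp the last tile to the image edge
    return starts


# e.g. a 1000px extent, 512px tiles, 64px minimum overlap:
# [0, 448, 488] -> tiles [0, 512), [448, 960), [488, 1000)
print(calc_tile_starts(1000, 512, 64))
```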
b41ea963e7
Expose a few more params from TiledStableDiffusionRefineInvocation.
2024-06-25 11:31:52 -07:00
da5d105049
Add support for LoRA models in TiledStableDiffusionRefineInvocation.
2024-06-25 11:31:52 -07:00
5301770525
Add naive ControlNet support to TiledStableDiffusionRefineInvocation
2024-06-25 11:31:52 -07:00
534640ccde
Rough prototype of TiledStableDiffusionRefineInvocation is working.
2024-06-25 11:31:52 -07:00
d5ab8cab5c
WIP - TiledStableDiffusionRefine
2024-06-25 11:31:52 -07:00
4767301ad3
Minor improvements to LatentsToImageInvocation type hints.
2024-06-25 11:31:52 -07:00
21d7ca45e6
Expose vae_decode(...) as a staticmethod on LatentsToImageInvocation.
2024-06-25 11:31:52 -07:00
020e8eb413
Fix return type of prepare_noise_and_latents(...).
2024-06-25 11:31:52 -07:00
3d49541c09
Make init_scheduler() a staticmethod on DenoiseLatentsInvocation so that it can be called externally.
2024-06-25 11:31:52 -07:00
1ef266845a
Only allow a single positive/negative prompt conditioning input for tiled refine.
2024-06-25 11:31:52 -07:00
a37589ca5f
WIP on TiledStableDiffusionRefine
2024-06-25 11:31:52 -07:00
171a505f5e
Convert several methods in DenoiseLatentsInvocation to staticmethods so that they can be called externally.
2024-06-25 11:31:52 -07:00
8004a0d5f5
Simplify the logic in prepare_noise_and_latents(...).
2024-06-25 11:31:52 -07:00
610a1fd611
Split out the prepare_noise_and_latents(...) logic in DenoiseLatentsInvocation so that it can be called from other invocations.
2024-06-25 11:31:52 -07:00
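A minimal sketch of the refactor pattern in the commits above: the noise/latents preparation becomes a staticmethod so that other invocations can call it without constructing the node. The class and method names mirror the commit messages, but the signature and body here are hypothetical placeholders, not the actual InvokeAI implementation.

```python
import torch


class DenoiseLatentsInvocation:
    @staticmethod
    def prepare_noise_and_latents(
        noise: torch.Tensor | None,
        latents: torch.Tensor | None,
    ) -> tuple[torch.Tensor, torch.Tensor]:
        # At least one of 'noise' or 'latents' is expected to be set.
        if noise is None and latents is None:
            raise ValueError("Expected 'noise' or 'latents' to be provided.")
        if latents is None:
            latents = torch.zeros_like(noise)
        if noise is None:
            noise = torch.zeros_like(latents)
        return noise, latents


# Another invocation (e.g. a tiled denoise node) can now reuse the logic directly:
noise, latents = DenoiseLatentsInvocation.prepare_noise_and_latents(
    noise=torch.randn(1, 4, 64, 64), latents=None
)
```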
43108eec13
(minor) Add a TODO note to get_scheduler(...).
2024-06-25 11:31:52 -07:00
a43d602f16
fix(queue): add clear_queue_on_startup config to clear problematic queues
2024-06-19 11:39:25 +10:00
79ceac2f82
(minor) Use SilenceWarnings as a decorator rather than a context manager to save an indentation level.
2024-06-18 15:06:22 -04:00
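For illustration, a small sketch of the decorator change: deriving a warning-silencing helper from contextlib.ContextDecorator lets the same class be used either as a context manager or as a decorator; the real SilenceWarnings class may differ in detail.

```python
import warnings
from contextlib import ContextDecorator


class SilenceWarnings(ContextDecorator):
    """Suppress warnings inside a `with` block or a decorated function."""

    def __enter__(self):
        self._catcher = warnings.catch_warnings()
        self._catcher.__enter__()
        warnings.simplefilter("ignore")
        return self

    def __exit__(self, *exc):
        self._catcher.__exit__(*exc)
        return False


# As a context manager (costs one indentation level):
with SilenceWarnings():
    warnings.warn("hidden")


# As a decorator (no extra indentation inside the function body):
@SilenceWarnings()
def noisy():
    warnings.warn("also hidden")


noisy()
```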
d13aafb514
Tidy denoise_latents.py imports to all use absolute import paths.
2024-06-18 15:06:22 -04:00
cd70937b7f
feat(api): improved model install confirmation page styling & messaging
2024-06-17 10:51:08 +10:00
fb694b3e17
feat(app): add model_install_download_started event
...
Previously, we used `model_install_download_progress` for both download starting and progressing, so a handler for this event could not tell which of the two it represented.
Add `model_install_download_started` event to explicitly represent a model download started event.
2024-06-17 09:50:25 +10:00
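A hypothetical consumer showing why the dedicated started event helps: the event names come from the commit above, but the payload fields are illustrative, not the actual InvokeAI event schema.

```python
def on_model_install_event(event_name: str, payload: dict) -> None:
    # With a dedicated "started" event, the handler no longer has to guess
    # whether a progress event means a fresh download or an update.
    if event_name == "model_install_download_started":
        print(f"download started: {payload.get('source')}")
    elif event_name == "model_install_download_progress":
        print(f"progress: {payload.get('bytes', 0)} / {payload.get('total_bytes', 0)}")
```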
7f03b04b2f
Merge branch 'main' into chainchompa/model-install-deeplink
2024-06-14 17:16:25 -04:00
4029972530
formatting
2024-06-14 17:15:55 -04:00
aae318425d
Added route for installing a HuggingFace model from the model marketplace.
2024-06-14 17:08:39 -04:00
785bb1d9e4
Fix all comparisons against the DEFAULT_PRECISION constant.
...
DEFAULT_PRECISION is a torch.dtype. Previously, it was compared to a str in a number of places where it would always resolve to False. This is a bugfix that changes the default behavior. In practice, it will not affect most users, because it only matters if a user has configured float32 as their default precision.
2024-06-14 11:26:10 -07:00
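A small illustration of the bug class described above, assuming DEFAULT_PRECISION is torch.float32:

```python
import torch

DEFAULT_PRECISION = torch.float32  # a torch.dtype, not a string

# Buggy pattern: a torch.dtype never compares equal to a str, so this condition
# was always False and the float32 path never ran.
print(DEFAULT_PRECISION == "float32")      # False, even though float32 is the default
# Fixed pattern: compare dtype to dtype.
print(DEFAULT_PRECISION == torch.float32)  # True
```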
568a4844f7
fix: other recursive imports
2024-06-10 04:12:20 -07:00
7d19af2caa
Merge branch 'main' into lstein/feat/simple-mm2-api
2024-06-08 18:55:06 -04:00
52c0c4a32f
Rename latent.py -> denoise_latents.py.
2024-06-07 09:28:42 -04:00
8f1afc032a
Move SchedulerInvocation to a new file. No functional changes.
2024-06-07 09:28:42 -04:00
854bca668a
Move CreateDenoiseMaskInvocation to its own file. No functional changes.
2024-06-07 09:28:42 -04:00
fea9013cad
Move CreateGradientMaskInvocation to its own file. No functional changes.
2024-06-07 09:28:42 -04:00
045caddee1
Move LatentsToImageInvocation to its own file. No functional changes.
2024-06-07 09:28:42 -04:00
58697141bf
Move ImageToLatentsInvocation to its own file. No functional changes.
2024-06-07 09:28:42 -04:00
5e419dbb56
Move ScaleLatentsInvocation and ResizeLatentsInvocation to their own file. No functional changes.
2024-06-07 09:28:42 -04:00
595096bdcf
Move BlendLatentsInvocation to its own file. No functional changes.
2024-06-07 09:28:42 -04:00
ed03d281e6
Move CropLatentsCoreInvocation to its own file. No functional changes.
2024-06-07 09:28:42 -04:00
0b37496c57
Move IdealSizeInvocation to its own file. No functional changes.
2024-06-07 09:28:42 -04:00
fde58ce0a3
Merge remote-tracking branch 'origin/main' into lstein/feat/simple-mm2-api
2024-06-07 14:23:41 +10:00
dc134935c8
Replace load_and_cache_model() with load_remote_model() and load_local_model().
2024-06-07 14:12:16 +10:00
f81b8bc9f6
add support for generic loading of diffusers directories
2024-06-07 13:54:30 +10:00
2871676f79
LoRA patching optimization (#6439)
...
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* do not save original weights if there is a CPU copy of state dict
* Update invokeai/backend/model_manager/load/load_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* documentation fixes added during penultimate review
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-06 13:53:35 +00:00
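A minimal, hypothetical sketch of the optimization described in the bullets above: original weights are stashed only when no CPU copy of the state dict exists, and unpatching restores from that copy when it does. The names and structure are illustrative, not the actual ModelPatcher code.

```python
import torch


class ModelPatcher:
    def __init__(self, model: torch.nn.Module, cpu_state_dict: dict[str, torch.Tensor] | None = None):
        self.model = model
        self.cpu_state_dict = cpu_state_dict
        self.original_weights: dict[str, torch.Tensor] = {}

    def patch(self, deltas: dict[str, torch.Tensor]) -> None:
        params = dict(self.model.named_parameters())
        for name, delta in deltas.items():
            if self.cpu_state_dict is None:
                # No CPU copy available: keep the original so it can be restored later.
                self.original_weights[name] = params[name].detach().clone()
            params[name].data += delta.to(device=params[name].device, dtype=params[name].dtype)

    def unpatch(self) -> None:
        if self.cpu_state_dict is not None:
            # Cheap path: reload the unpatched weights from the CPU copy,
            # skipping the per-parameter save/restore entirely.
            self.model.load_state_dict(self.cpu_state_dict, strict=False)
        else:
            params = dict(self.model.named_parameters())
            for name, original in self.original_weights.items():
                params[name].data.copy_(original)
            self.original_weights.clear()
```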