Commit Graph

11802 Commits

Author SHA1 Message Date
Lincoln Stein
39881d3d7d fix installer logic for tokenizer_3 and text_encoder_3 2024-06-21 23:34:18 -04:00
Lincoln Stein
28f1d25973 unpin dependencies; fix typo in sd3.py 2024-06-21 15:59:47 -04:00
Lincoln Stein
95377ea159 add non-commercial use message to sd3 starter; rebuild frontend 2024-06-20 21:59:28 -04:00
Lincoln Stein
445561e3a4 add sd3 to starter models 2024-06-20 18:13:46 -04:00
blessedcoolant
66260fd345 fix: Update Clip 3 slot title & lint issues 2024-06-20 08:53:35 +05:30
blessedcoolant
c403efa83f fix: Make TE5 Optional 2024-06-20 08:45:36 +05:30
blessedcoolant
cd99ef2f46 Merge branch 'main' into lstein/feat/sd3-model-loading 2024-06-20 08:43:34 +05:30
Lincoln Stein
b03073d888
[MM] Add support for probing and loading SDXL VAE checkpoint files (#6524)
* add support for probing and loading SDXL VAE checkpoint files

* broaden regexp probe for SDXL VAEs

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-06-20 02:57:27 +00:00
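
As a rough sketch of the probing approach this PR describes (the key pattern and function below are assumptions for illustration, not the actual InvokeAI probe), a checkpoint can be classified by scanning its state-dict keys against a regular expression:

```python
import re

# Assumed key pattern, for illustration only; per the PR, the real probe
# uses a broader regexp to match SDXL VAE checkpoints.
SDXL_VAE_KEY_RE = re.compile(r"^decoder\.up_blocks\.\d+\.")

def looks_like_sdxl_vae(state_dict: dict) -> bool:
    """Guess whether a checkpoint's state dict belongs to an SDXL VAE."""
    return any(SDXL_VAE_KEY_RE.match(key) for key in state_dict)
```
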
steffylo
a43d602f16 fix(queue): add clear_queue_on_startup config to clear problematic queues 2024-06-19 11:39:25 +10:00
Ryan Dick
7e9a89f8c6
Tidy SilenceWarnings context manager (#6493)
## Summary

No functional changes, just cleaning some things up as I touch the code.
This PR cleans up the `SilenceWarnings` context manager:
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a
decorator
- Remove duplicate implementation
- Check the initial verbosity on `__enter__()` rather than `__init__()`
- Save an indentation level in DenoiseLatents
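
A minimal sketch of the dual context-manager/decorator pattern described above, using the standard library's `ContextDecorator` (illustrative only; the real `SilenceWarnings` class differs):

```python
import warnings
from contextlib import ContextDecorator

class SilenceWarningsSketch(ContextDecorator):
    """Usable as `with SilenceWarningsSketch(): ...` or as `@SilenceWarningsSketch()`."""

    def __enter__(self):
        # Capture the current warning filters on __enter__() rather than
        # __init__(), so the saved state reflects verbosity at use time.
        self._catcher = warnings.catch_warnings()
        self._catcher.__enter__()
        warnings.simplefilter("ignore")
        return self

    def __exit__(self, *exc):
        # Restore the previous warning filters.
        return self._catcher.__exit__(*exc)
```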

## QA Instructions

I generated an image to confirm that warnings are still muted.

## Merge Plan

- [x] ⚠️ Merge https://github.com/invoke-ai/InvokeAI/pull/6492 first,
then change the target branch to `main`.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-06-18 15:23:32 -04:00
Ryan Dick
79ceac2f82 (minor) Use SilenceWarnings as a decorator rather than a context manager to save an indentation level. 2024-06-18 15:06:22 -04:00
Ryan Dick
8e47e005a7 Tidy SilenceWarnings context manager:
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a decorator
- Remove duplicate implementation
- Check the initial verbosity on __enter__() rather than __init__()
2024-06-18 15:06:22 -04:00
Ryan Dick
d13aafb514 Tidy denoise_latents.py imports to all use absolute import paths. 2024-06-18 15:06:22 -04:00
Lincoln Stein
9dce4f09ae scale default RAM cache by size of system virtual memory 2024-06-18 13:49:12 -04:00
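
A sketch of the heuristic this commit describes (the scaling fraction and clamping bounds here are assumptions, not InvokeAI's actual values):

```python
import psutil

def default_ram_cache_gb(fraction: float = 0.25,
                         floor_gb: float = 2.0,
                         ceiling_gb: float = 16.0) -> float:
    """Scale the default RAM cache size with total system memory, clamped to a sane range."""
    total_gb = psutil.virtual_memory().total / 2**30
    return max(floor_gb, min(ceiling_gb, fraction * total_gb))
```
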
Brandon Rising
63a7e19dbf Run ruff 2024-06-18 10:38:29 -04:00
Brandon Rising
fbc5a8ec65 Ignore validation on improperly formatted hashes (pytest) 2024-06-18 10:38:29 -04:00
Brandon Rising
8ce6e4540e Run ruff 2024-06-18 10:38:29 -04:00
Brandon Rising
f14f377ede Update validator list 2024-06-18 10:38:29 -04:00
Brandon Rising
1925f83f5e Update validator list 2024-06-18 10:38:29 -04:00
Brandon Rising
3a5ad6d112 Update validator list 2024-06-18 10:38:29 -04:00
Brandon Rising
41a6bb45f3 Initial functionality 2024-06-18 10:38:29 -04:00
chainchompa
70e40fa6c1
added route to install huggingface models from model marketplace (#6515)
## Summary
added route to install huggingface models from model marketplace

## QA Instructions
Test by going to
http://localhost:5173/api/v2/models/install/huggingface?source=${hfRepo}

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-06-16 21:13:58 -04:00
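
The QA URL above implies the route's shape; a hypothetical FastAPI sketch (handler name and response body are assumptions, and the real handler does more, e.g. rendering a confirmation page):

```python
from fastapi import APIRouter

router = APIRouter(prefix="/api/v2/models/install")

@router.get("/huggingface")
async def install_huggingface(source: str) -> dict:
    # In the real route, this would enqueue an install job for the given
    # Hugging Face repo id rather than just echoing it back.
    return {"status": "queued", "source": source}
```
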
blessedcoolant
22b5c036aa Revert "fix: height and width not working on sd3 node"
This reverts commit be14fd59c9.
2024-06-17 06:41:49 +05:30
blessedcoolant
be14fd59c9 fix: height and width not working on sd3 node 2024-06-17 06:34:01 +05:30
psychedelicious
e26125b734 tests: fix test_model_install.py 2024-06-17 10:57:11 +10:00
psychedelicious
cd70937b7f feat(api): improved model install confirmation page styling & messaging 2024-06-17 10:51:08 +10:00
psychedelicious
f002bca2fa feat(ui): handle new model_install_download_started event
When a model install is initiated from outside the client, we now trigger the model manager tab's model install list to update.

- Handle new `model_install_download_started` event
- Handle `model_install_download_complete` event (this event is not new but was never handled)
- Update optimistic updates/cache invalidation logic to efficiently update the model install list
2024-06-17 10:07:10 +10:00
psychedelicious
56771de856 feat(ui): add redux actions for model_install_download_started event 2024-06-17 09:52:46 +10:00
psychedelicious
c11478a94a chore(ui): typegen 2024-06-17 09:51:18 +10:00
psychedelicious
fb694b3e17 feat(app): add model_install_download_started event
Previously, we used `model_install_download_progress` for both download starting and progressing, so a handler couldn't tell which of the two a given event represented.

Add `model_install_download_started` event to explicitly represent a model download started event.
2024-06-17 09:50:25 +10:00
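
A sketch of the started/progress split described above (class name and payload fields are illustrative, not InvokeAI's actual event schema):

```python
class InstallEventBusSketch:
    """Emit an explicit 'started' event once, then 'progress' events,
    so handlers never have to guess which state a payload represents."""

    def __init__(self, emit):
        self._emit = emit  # callable(event_name: str, payload: dict)

    def download_started(self, source: str, total_bytes: int) -> None:
        self._emit("model_install_download_started",
                   {"source": source, "total_bytes": total_bytes})

    def download_progress(self, source: str, current_bytes: int, total_bytes: int) -> None:
        self._emit("model_install_download_progress",
                   {"source": source, "current_bytes": current_bytes,
                    "total_bytes": total_bytes})
```
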
psychedelicious
1bc98abc76 docs(ui): explain model install events 2024-06-17 09:33:46 +10:00
Lincoln Stein
423057a2e8 add config variable to suppress loading of sd3 text_encoder_3 T5 model 2024-06-16 16:28:39 -04:00
blessedcoolant
f65d50a4dd wip: basic wrapper for generating sd3 images 2024-06-16 04:18:20 +05:30
Lincoln Stein
554809c647 return correct base type for sd3 VAEs 2024-06-15 18:17:03 -04:00
chainchompa
7f03b04b2f
Merge branch 'main' into chainchompa/model-install-deeplink 2024-06-14 17:16:25 -04:00
chainchompa
4029972530 formatting 2024-06-14 17:15:55 -04:00
chainchompa
328f160e88 refetch model installs when a new model install starts 2024-06-14 17:09:07 -04:00
chainchompa
aae318425d added route for installing huggingface model from model marketplace 2024-06-14 17:08:39 -04:00
Lincoln Stein
ac0396e6f7 Merge branch 'lstein/feat/sd3-model-loading' of github.com:invoke-ai/InvokeAI into lstein/feat/sd3-model-loading 2024-06-14 16:48:20 -04:00
Lincoln Stein
78f704e7d5 tweak installer to select correct components of HF SD3 diffusers models 2024-06-14 16:46:24 -04:00
blessedcoolant
41236031b2 chore: remove unrequired changes to v1 workflow field types 2024-06-15 00:00:44 +05:30
Ryan Dick
785bb1d9e4 Fix all comparisons against the DEFAULT_PRECISION constant.
DEFAULT_PRECISION is a torch.dtype, but it was previously compared to a str in a number of places, where the comparison always resolved to False. This bugfix changes the default behavior, though in practice only for users who have configured float32 as their default precision. 2024-06-14 11:26:10 -07:00
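
The bug class is easy to demonstrate: a `torch.dtype` never compares equal to a string, so such branches silently never ran:

```python
import torch

DEFAULT_PRECISION = torch.float32

print(DEFAULT_PRECISION == "float32")      # False -- the old, silently broken comparison
print(DEFAULT_PRECISION == torch.float32)  # True  -- the corrected comparison
```
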
blessedcoolant
ddbd2ebd9d wip: add Transformer Field to Node UI 2024-06-14 22:25:26 +05:30
blessedcoolant
0c970bc880 wip: add SD3 Model Loader Invocation 2024-06-14 22:21:09 +05:30
blessedcoolant
c79d9b9ecf wip: add initial support for selecting SD3 models in UI 2024-06-14 16:04:16 +05:30
Lincoln Stein
a3cb5da130
Improve RAM<->VRAM memory copy performance in LoRA patching and elsewhere (#6490)
* allow model patcher to optimize away the unpatching step when feasible

* remove lazy_offloading functionality

* do not save original weights if there is a CPU copy of state dict

* Update invokeai/backend/model_manager/load/load_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* documentation fixes requested during penultimate review

* add non_blocking=True parameters to several torch.nn.Module.to() calls, for slight performance increases

* fix ruff errors

* prevent crash on non-cuda-enabled systems

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-13 17:10:03 +00:00
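
As background on the `non_blocking=True` change in this PR: with pinned host memory, `Tensor.to()` can overlap the host-to-device copy with other work. A minimal illustration (not InvokeAI code):

```python
import torch

if torch.cuda.is_available():
    cpu_tensor = torch.randn(1024, 1024).pin_memory()   # pinned memory enables async H2D copies
    gpu_tensor = cpu_tensor.to("cuda", non_blocking=True)
    torch.cuda.synchronize()  # ensure the copy has completed before relying on gpu_tensor
```
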
Lincoln Stein
03b9d17d0b draft sd3 loading; probable VRAM leak when using quantized submodels 2024-06-13 00:51:00 -04:00
Lincoln Stein
002f8242a1 add draft SD3 probing; there is an issue with FromOriginalControlNetMixin in backend.util.hotfixes due to new diffusers 2024-06-12 22:44:34 -04:00
blessedcoolant
568a4844f7 fix: other recursive imports 2024-06-10 04:12:20 -07:00
blessedcoolant
b1e56e2485 fix: SchedulerOutput not being imported correctly 2024-06-10 04:12:20 -07:00