ce22c0fbaa
sync pydantic and sql field names; merge routes
2023-11-06 18:08:57 -05:00
935e4632c2
feat(nodes): add freeu support (#4846)
...
### What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No
## Description
**Note: FreeU is not in the current release of diffusers; it is expected in release 0.22. This PR needs to wait until that is released.**
[feat(nodes): add freeu support](15b33ad501)
Add support for FreeU. See:
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/freeu
- https://github.com/ChenyangSi/FreeU
Implementation:
- `ModelPatcher.apply_freeu()` handles enabling FreeU (which is very
simple with diffusers); see the sketch below.
- `FreeUConfig` model added to hold the hyperparameters.
- `freeu_config` added as an optional sub-field on `UNetField`.
- `FreeUInvocation` added; it works like LoRA - chain it to add the FreeU
config to the UNet.
- No support for model-dependent presets; this will be a future workflow
editor enhancement.
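A minimal sketch of the idea, assuming diffusers >= 0.22 (which exposes `enable_freeu()`/`disable_freeu()` on the UNet). The `FreeUConfig` field names and defaults follow the SD 1.x values suggested by the FreeU authors; everything else here is illustrative rather than the exact InvokeAI code:

```python
from contextlib import contextmanager
from typing import Optional

from pydantic import BaseModel


class FreeUConfig(BaseModel):
    """FreeU hyperparameters (defaults are the SD 1.x values from the FreeU repo)."""

    s1: float = 0.9  # stage-1 skip-feature scaling
    s2: float = 0.2  # stage-2 skip-feature scaling
    b1: float = 1.2  # stage-1 backbone-feature scaling
    b2: float = 1.4  # stage-2 backbone-feature scaling


@contextmanager
def apply_freeu(unet, config: Optional[FreeUConfig] = None):
    """Enable FreeU on a diffusers UNet for the duration of the context."""
    try:
        if config is not None:
            # diffusers >= 0.22 exposes FreeU directly on the UNet.
            unet.enable_freeu(s1=config.s1, s2=config.s2, b1=config.b1, b2=config.b2)
        yield
    finally:
        if config is not None:
            unet.disable_freeu()
```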
Closes https://github.com/invoke-ai/InvokeAI/issues/4845
## Related Tickets & Documents
- Closes #4845
## QA Instructions, Screenshots, Recordings
You'll need to install diffusers from their github repo before testing
this:
`pip install git+https://github.com/huggingface/diffusers`
1. Create a graph like this: _(workflow graph screenshot)_
2. Get a free lunch!
No FreeU vs. With FreeU: _(three pairs of comparison images)_
2023-11-06 13:58:32 -08:00
a83d8810c4
Merge branch 'main' into feat/nodes/freeu
2023-11-06 13:47:56 -08:00
76b3f8956b
Fix ROCm support in Docker container
2023-11-06 13:47:08 -08:00
ff8a8a1963
Merge branch 'main' into feat/nodes/freeu
2023-11-06 09:04:54 -08:00
cb6d0c8851
Re-add feat/mix cnet t2iadapter (#4929)
...
Reverts invoke-ai/InvokeAI#4923, which was a revert of the premature
merge.
slide to the left. revert, revert.
2023-11-06 22:29:13 +05:30
67f2616d5a
Merge branch 'main' into revert-4923-revert-4914-feat/mix-cnet-t2iadapter
2023-11-06 07:34:51 -08:00
f8f1740668
Set Defaults to 1
2023-11-06 07:11:16 -08:00
e66d0f7372
Merge branch 'main' into feat/nodes/freeu
2023-11-06 05:39:58 -08:00
546aaedbe4
Update pyproject.toml
2023-11-06 05:29:17 -08:00
55f8865524
Merge branch 'main' into refactor/model-manager-2
2023-11-05 21:45:26 -05:00
2d051559d1
fix flake8 complaints
2023-11-05 21:45:08 -05:00
7f650d00de
translationBot(ui): update translation (Italian)
...
Currently translated at 97.7% (1191 of 1219 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-11-05 11:12:33 -08:00
db9cef0092
re-run isort
2023-11-04 23:50:07 -04:00
72c34aea75
added add_model_record and get_model_record to router api
2023-11-04 23:42:44 -04:00
edeea5237b
add sql-based model config store and api
2023-11-04 23:03:26 -04:00
4e6b579526
translationBot(ui): update translation (Italian)
...
Currently translated at 97.6% (1190 of 1219 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-11-05 12:09:20 +11:00
6334c4adf5
translationBot(ui): update translation (German)
...
Currently translated at 53.8% (657 of 1219 strings)
Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-11-05 12:09:20 +11:00
66b2366efc
Remove LowRA from Initial Models (#5016)
...
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
Removing LowRA from the initial models as it's been deleted from
CivitAI.
## Related Tickets & Documents
https://discord.com/channels/1020123559063990373/1168415065205112872
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
2023-11-04 17:14:24 -04:00
e147379aa7
Merge branch 'main' into main
2023-11-04 17:05:01 -04:00
5a821384d3
fix model-not-found error
2023-11-04 08:24:01 -07:00
2bbba323c6
Fix model cache gc.collect() condition. (#5036)
...
See
https://github.com/invoke-ai/InvokeAI/pull/5034#discussion_r1382381733
2023-11-04 09:09:46 -04:00
aa02ebf8f5
Fix model cache gc.collect() condition.
2023-11-04 08:52:10 -04:00
fb3d0c4b12
Fix bug in model cache reference count checking.
2023-11-03 13:50:40 -07:00
8488ab0134
Reduce frequency that we call gc.collect() in the model cache.
2023-11-03 13:50:40 -07:00
875231ed3d
Add reminder to clean up our model cache clearing logic.
2023-11-03 13:50:40 -07:00
43b300498f
Remove explicit gc.collect() after transferring models from device to CPU. I'm not sure why this was there in the first place, but it was taking a significant amount of time (up to ~1sec in my tests).
2023-11-03 13:50:40 -07:00
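A hedged sketch of the resulting pattern (the helper names are hypothetical, not InvokeAI's actual code): skip the per-offload collection, and reserve `gc.collect()` for the point where a cache entry is actually dropped.

```python
import gc

import torch


def offload_to_cpu(model: torch.nn.Module) -> None:
    # Moving weights back to the CPU no longer triggers a collection; a full
    # gc.collect() pass was costing up to ~1 sec per transfer.
    model.to("cpu")


def evict(cache: dict, key: str) -> None:
    # Only here, where the cache drops what may be the last reference to a
    # model, is a collection worth the cost.
    cache.pop(key, None)
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return freed blocks to the driver
```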
5b420653f9
feat(ui): show placeholder in refiner collapse instead of hiding it, if no refiner models installed
2023-11-03 14:15:24 +11:00
3d32ce2b58
fix(ui): hide refiner collapse if refiner not installed
2023-11-03 14:15:24 +11:00
e391f3c9a8
Skip torch.nn.Embedding.reset_parameters(...) when loading a text encoder model.
2023-11-02 19:41:33 -07:00
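A minimal sketch of the trick, assuming the usual monkey-patch approach (the helper name is hypothetical): the weights are about to be overwritten by the checkpoint's state dict, so the random initialization done by `reset_parameters()` is wasted work.

```python
import contextlib

import torch


@contextlib.contextmanager
def skip_embedding_reset_parameters():
    # Temporarily no-op the initializer while the text encoder is
    # constructed; loading the state dict overwrites the randomly
    # initialized weights anyway.
    original = torch.nn.Embedding.reset_parameters
    torch.nn.Embedding.reset_parameters = lambda self: None
    try:
        yield
    finally:
        torch.nn.Embedding.reset_parameters = original
```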
6e7a3f0546
(minor) Fix static checks and typo.
2023-11-02 19:20:37 -07:00
4a683cc669
Add a app config parameter to control the ModelCache logging behavior.
2023-11-02 19:20:37 -07:00
3781e56e57
Add log_memory_usage param to ModelCache.
2023-11-02 19:20:37 -07:00
267e709ba2
(minor) Fix int literal typing error.
2023-11-02 19:20:37 -07:00
8ff49109a8
Update get_pretty_snapshot_diff(...) to handle None-snapshots.
2023-11-02 19:20:37 -07:00
bac2a757e8
Replace deepcopy with a pickle roundtrip in apply_ti(...) to improve speed.
2023-11-02 19:05:24 -07:00
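The idea, sketched under the assumption that the copied object is fully picklable (the helper name is hypothetical): a `pickle` dump/load round trip produces an equivalent deep copy while avoiding `copy.deepcopy`'s per-object recursive dispatch overhead.

```python
import pickle


def fast_deepcopy(obj):
    # Equivalent to copy.deepcopy(obj) for fully picklable objects,
    # but typically faster for large nested state.
    return pickle.loads(pickle.dumps(obj))
```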
a4a7b601a1
Improve LoRA patching speed (#5017)
...
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
Improve LoRA patching speed with the following changes:
- Calculate LoRA layer weights on the same device as the target model.
Prior to this change, weights were always calculated on the CPU. If the
target model is on the GPU, this significantly improves performance.
- Move models to their target devices _before_ applying LoRA patches.
- Improve the ordering of Tensor copy / cast operations.
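A hedged sketch of the device-placement idea above (illustrative names, not the InvokeAI code): move the small LoRA factor matrices to the target module's device and dtype first, then compute the weight delta there.

```python
import torch


@torch.no_grad()
def patch_linear_with_lora(
    module: torch.nn.Linear,
    up: torch.Tensor,    # (out_features, rank)
    down: torch.Tensor,  # (rank, in_features)
    scale: float,
) -> torch.Tensor:
    """Patch `module` in place; return the original weight so it can be restored."""
    device, dtype = module.weight.device, module.weight.dtype
    original = module.weight.detach().clone()
    # Copy/cast the small factors first, then matmul on the target device;
    # on a GPU this beats computing the full-size delta on the CPU and
    # transferring it afterwards.
    delta = (up.to(device=device, dtype=dtype) @ down.to(device=device, dtype=dtype)) * scale
    module.weight += delta
    return original
```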
## QA Instructions, Screenshots, Recordings
Tests:
- [x] Tested with a CUDA GPU, saw savings of ~10secs with 1 LoRA applied
to an SDXL model.
- [x] No regression in CPU-only environment
- [ ] No regression (and possible improvement?) on Mac with MPS.
- [x] Weights get restored correctly after using a LoRA
- [x] Stacking multiple LoRAs
Please hammer away with a variety of LoRAs in case there is some edge
case that I've missed.
## Added/updated tests?
- [x] Yes (Added some minimal unit tests. Definitely would benefit from
more, but it's a step in the right direction.)
- [ ] No
2023-11-02 13:34:10 -04:00
fa7f6a6a10
Further tidying of LoRA patching. Revert some changes that didn't end up being important under the constraint that calculations are done on the same device as the model.
2023-11-02 10:03:17 -07:00
e92b84955c
Add minimal unit tests for ModelPatcher.apply_lora(...)
2023-11-02 10:03:17 -07:00
61b17c475a
Add TODO note about improving _resolve_lora_key(...).
2023-11-02 10:03:17 -07:00
379d68f595
Patch LoRA on device when model is already on device.
2023-11-02 10:03:17 -07:00
545c811bf1
Remove device and dtype members from LoRAModelRaw; they can too easily get out of sync with the underlying layer states.
2023-11-02 10:03:17 -07:00
2ba5b44ec4
Remove unused _lora_forward_hook(...).
2023-11-02 10:03:17 -07:00
7f4ce518b7
auto-format lora.py
2023-11-02 10:03:17 -07:00
6c66adcd90
fix(ui): show collapse labels only if not default value
2023-11-01 14:41:13 +11:00
584b513038
Remove LowRA from Initial Models
2023-11-01 08:55:06 +11:00
94055ae54a
translationBot(ui): update translation (Chinese (Simplified))
...
Currently translated at 99.8% (1215 of 1217 strings)
Co-authored-by: nemuruibai <nemuruibai@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-11-01 05:31:16 +11:00
a79c86b901
translationBot(ui): update translation (German)
...
Currently translated at 51.7% (630 of 1217 strings)
Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-11-01 05:31:16 +11:00
ed81d6d533
Update contributingNodes.md
2023-10-31 07:17:14 -07:00
63548c5ea7
Update community node installation instructions
2023-10-31 07:17:14 -07:00