Commit Graph

8353 Commits

Author SHA1 Message Date
Millun Atluri
9733cd4199 Update xformers to 0.0.22post7 2023-11-06 17:17:03 -08:00
Millun Atluri
9976bc6908 Update installer.py to cu121 2023-11-06 17:17:03 -08:00
Millun Atluri
c68db6e40f Update xformers to ~0.0.22 2023-11-06 17:17:03 -08:00
Kent Keirsey
840cbc1d39
xformers==0.0.20 (#4881)
I'm not sure if this is the correct way of handling things, but changing this
string to '==0.0.20' fixes the xformers install for me — and maybe it will for
others too. Sorry in advance if this PR is done incorrectly.

Please see [this
thread](https://github.com/facebookresearch/xformers/issues/740); this
is the issue I had (when trying to install InvokeAI via the
Automatic, Manual, and StableMatrix methods).
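
For context, `~=0.0.19` is a compatible-release specifier, so pip may pick any newer 0.0.x release, including ones that only ship as source. A minimal sketch using the `packaging` library (the same version-matching logic pip relies on) shows why the spec pulls in 0.0.22.post3:

```
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# ~=0.0.19 is shorthand for >=0.0.19, ==0.0.* (any later 0.0.x qualifies)
spec = SpecifierSet("~=0.0.19")
for candidate in ["0.0.19", "0.0.20", "0.0.22.post3", "0.1.0"]:
    print(candidate, Version(candidate) in spec)
# -> 0.0.19 True, 0.0.20 True, 0.0.22.post3 True, 0.1.0 False
```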

With `xformers~=0.0.19` (resolved by pip to 0.0.22.post3):
```
(InvokeAI) pip install torch torchvision xformers~=0.0.19
Collecting torch
  Obtaining dependency information for torch from edce54779f/torch-2.1.0-cp311-cp311-win_amd64.whl.metadata
  Using cached torch-2.1.0-cp311-cp311-win_amd64.whl.metadata (25 kB)
Collecting torchvision
  Obtaining dependency information for torchvision from ab6f42af83/torchvision-0.16.0-cp311-cp311-win_amd64.whl.metadata
  Using cached torchvision-0.16.0-cp311-cp311-win_amd64.whl.metadata (6.6 kB)
Collecting xformers
  Using cached xformers-0.0.22.post3.tar.gz (3.9 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
      Traceback (most recent call last):
        File "C:\Users\Drun\invokeai\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
          main()
        File "C:\Users\Drun\invokeai\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\Drun\invokeai\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\Drun\AppData\Local\Temp\pip-build-env-rmhvraqj\overlay\Lib\site-packages\setuptools\build_meta.py", line 355, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\Drun\AppData\Local\Temp\pip-build-env-rmhvraqj\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in _get_build_requires
          self.run_setup()
        File "C:\Users\Drun\AppData\Local\Temp\pip-build-env-rmhvraqj\overlay\Lib\site-packages\setuptools\build_meta.py", line 507, in run_setup
          super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
        File "C:\Users\Drun\AppData\Local\Temp\pip-build-env-rmhvraqj\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in run_setup
          exec(code, locals())
        File "<string>", line 23, in <module>
      ModuleNotFoundError: No module named 'torch'
```
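
The `ModuleNotFoundError` at the end is the root cause: pip fell back to the 0.0.22.post3 source distribution, whose `setup.py` imports torch at build time, but pip's isolated build environment contains only the declared build dependencies, which don't include torch. A hypothetical, minimal `setup.py` reproducing the failure mode (illustrative only, not xformers' actual file):

```
from setuptools import setup

# Under pip's build isolation, this import fails: torch is installed in
# the target environment, not in the temporary build environment.
import torch  # -> ModuleNotFoundError: No module named 'torch'

setup(name="needs-torch-to-build", version="0.0.1")
```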

With `xformers==0.0.20`:

```
(InvokeAI) pip install torch torchvision xformers==0.0.20
Collecting torch
  Obtaining dependency information for torch from edce54779f/torch-2.1.0-cp311-cp311-win_amd64.whl.metadata
  Using cached torch-2.1.0-cp311-cp311-win_amd64.whl.metadata (25 kB)
Collecting torchvision
  Obtaining dependency information for torchvision from ab6f42af83/torchvision-0.16.0-cp311-cp311-win_amd64.whl.metadata
  Using cached torchvision-0.16.0-cp311-cp311-win_amd64.whl.metadata (6.6 kB)
Collecting xformers==0.0.20
  Obtaining dependency information for xformers==0.0.20 from d4a42f582a/xformers-0.0.20-cp311-cp311-win_amd64.whl.metadata
  Using cached xformers-0.0.20-cp311-cp311-win_amd64.whl.metadata (1.1 kB)
Collecting numpy (from xformers==0.0.20)
  Obtaining dependency information for numpy from 3f826c6d15/numpy-1.26.0-cp311-cp311-win_amd64.whl.metadata
  Using cached numpy-1.26.0-cp311-cp311-win_amd64.whl.metadata (61 kB)
Collecting pyre-extensions==0.0.29 (from xformers==0.0.20)
  Using cached pyre_extensions-0.0.29-py3-none-any.whl (12 kB)
Collecting torch
  Using cached torch-2.0.1-cp311-cp311-win_amd64.whl (172.3 MB)
Collecting filelock (from torch)
  Obtaining dependency information for filelock from 97afbafd9d/filelock-3.12.4-py3-none-any.whl.metadata
  Using cached filelock-3.12.4-py3-none-any.whl.metadata (2.8 kB)
Requirement already satisfied: typing-extensions in c:\users\drun\invokeai\.venv\lib\site-packages (from torch) (4.8.0)
Requirement already satisfied: sympy in c:\users\drun\invokeai\.venv\lib\site-packages (from torch) (1.12)
Collecting networkx (from torch)
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting jinja2 (from torch)
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting typing-inspect (from pyre-extensions==0.0.29->xformers==0.0.20)
  Obtaining dependency information for typing-inspect from 107a22063b/typing_inspect-0.9.0-py3-none-any.whl.metadata
  Using cached typing_inspect-0.9.0-py3-none-any.whl.metadata (1.5 kB)
Collecting requests (from torchvision)
  Obtaining dependency information for requests from 0e2d847013/requests-2.31.0-py3-none-any.whl.metadata
  Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
Collecting torchvision
  Using cached torchvision-0.15.2-cp311-cp311-win_amd64.whl (1.2 MB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Obtaining dependency information for pillow!=8.3.*,>=5.3.0 from debe992677/Pillow-10.0.1-cp311-cp311-win_amd64.whl.metadata
  Using cached Pillow-10.0.1-cp311-cp311-win_amd64.whl.metadata (9.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch)
  Obtaining dependency information for MarkupSafe>=2.0 from 08b85bc194/MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl.metadata
  Using cached MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl.metadata (3.1 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision)
  Obtaining dependency information for charset-normalizer<4,>=2 from 50028bbb26/charset_normalizer-3.3.0-cp311-cp311-win_amd64.whl.metadata
  Using cached charset_normalizer-3.3.0-cp311-cp311-win_amd64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->torchvision)
  Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision)
  Obtaining dependency information for urllib3<3,>=1.21.1 from 9957270221/urllib3-2.0.6-py3-none-any.whl.metadata
  Using cached urllib3-2.0.6-py3-none-any.whl.metadata (6.6 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision)
  Obtaining dependency information for certifi>=2017.4.17 from 2234eab223/certifi-2023.7.22-py3-none-any.whl.metadata
  Using cached certifi-2023.7.22-py3-none-any.whl.metadata (2.2 kB)
Requirement already satisfied: mpmath>=0.19 in c:\users\drun\invokeai\.venv\lib\site-packages (from sympy->torch) (1.3.0)
Collecting mypy-extensions>=0.3.0 (from typing-inspect->pyre-extensions==0.0.29->xformers==0.0.20)
  Using cached mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Using cached xformers-0.0.20-cp311-cp311-win_amd64.whl (97.6 MB)
Using cached Pillow-10.0.1-cp311-cp311-win_amd64.whl (2.5 MB)
Using cached filelock-3.12.4-py3-none-any.whl (11 kB)
Using cached numpy-1.26.0-cp311-cp311-win_amd64.whl (15.8 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached certifi-2023.7.22-py3-none-any.whl (158 kB)
Using cached charset_normalizer-3.3.0-cp311-cp311-win_amd64.whl (97 kB)
Using cached MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl (17 kB)
Using cached urllib3-2.0.6-py3-none-any.whl (123 kB)
Using cached typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)
Installing collected packages: urllib3, pillow, numpy, networkx, mypy-extensions, MarkupSafe, idna, filelock, charset-normalizer, certifi, typing-inspect, requests, jinja2, torch, pyre-extensions, xformers, torchvision
Successfully installed MarkupSafe-2.1.3 certifi-2023.7.22 charset-normalizer-3.3.0 filelock-3.12.4 idna-3.4 jinja2-3.1.2 mypy-extensions-1.0.0 networkx-3.1 numpy-1.26.0 pillow-10.0.1 pyre-extensions-0.0.29 requests-2.31.0 torch-2.0.1 torchvision-0.15.2 typing-inspect-0.9.0 urllib3-2.0.6 xformers-0.0.20
```

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it seemed like a no-brainer. It fixed the issue for me, so
I opened a PR. Who knows?

## Technical details:
Windows 11; standalone, clean, freshly installed Python 3.11
2023-11-06 16:07:03 -08:00
Millun Atluri
014d6187ab
Update pyproject.toml 2023-11-07 10:22:20 +11:00
Millun Atluri
9fb15fae87
Update pyproject.toml 2023-11-07 10:20:16 +11:00
Millun Atluri
a07336a020
Merge branch 'main' into patch-1 2023-11-07 10:17:46 +11:00
Millun Atluri
0718cc2392
Update xformers to 0.0.21 2023-11-07 10:16:44 +11:00
Kent Keirsey
935e4632c2
feat(nodes): add freeu support (#4846)
### What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

**Note: FreeU is not in the current release of diffusers. Looks like it
will be in release 0.22. This PR needs to wait until that is released.**

[feat(nodes): add freeu
support](15b33ad501)

Add support for FreeU. See:
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/freeu
- https://github.com/ChenyangSi/FreeU

Implementation:
- `ModelPatcher.apply_freeu()` handles enabling FreeU (which is very
simple with diffusers; see the sketch below).
- `FreeUConfig` model added to hold the hyperparameters.
- `freeu_config` added as optional sub-field on `UNetField`.
- `FreeUInvocation` added; it works like LoRA: chain it to add the FreeU
config to the UNet.
- No support for model-dependent presets; this will be a future workflow
editor enhancement.
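
For reference, a minimal sketch of the underlying diffusers call that `apply_freeu()` wraps (requires diffusers >= 0.22; the hyperparameter values are the SD1.x defaults suggested in the diffusers docs, not necessarily InvokeAI's defaults):

```
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Patch the UNet's up-blocks with the FreeU scaling factors, generate,
# then restore the original behavior.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
image = pipe("an astronaut riding a horse").images[0]
pipe.disable_freeu()
```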

Closes https://github.com/invoke-ai/InvokeAI/issues/4845

## Related Tickets & Documents

- Closes #4845 

## QA Instructions, Screenshots, Recordings

You'll need to install diffusers from the GitHub repo before testing this:
`pip install git+https://github.com/huggingface/diffusers`

1. Create a graph like this:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/af17719b-b001-4534-8c4e-883484fd7465)
2. Get a free lunch!

No FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/279d1a69-1577-4c31-ab82-ebf67f65920d)
With FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/c332c778-0b87-4215-8a36-d4822e06f4de)

No FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/ebec097b-ad54-4295-b734-33656738a2cf)
With FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/3423140d-c9ce-4697-9993-d2bb0d0f5634)

No FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/7cb0e39d-aa87-4a48-a3af-b9f47a866814)
With FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/9113d2fe-5bd3-474f-8f33-82cdeb7abf82)
2023-11-06 13:58:32 -08:00
Kent Keirsey
a83d8810c4
Merge branch 'main' into feat/nodes/freeu 2023-11-06 13:47:56 -08:00
Wilson E. Alvarez
76b3f8956b Fix ROCm support in Docker container 2023-11-06 13:47:08 -08:00
Kent Keirsey
ff8a8a1963
Merge branch 'main' into feat/nodes/freeu 2023-11-06 09:04:54 -08:00
blessedcoolant
cb6d0c8851
Re-add feat/mix cnet t2iadapter (#4929)
Reverts invoke-ai/InvokeAI#4923, which was itself a revert of the premature
merge.

Slide to the left. Revert, revert.
2023-11-06 22:29:13 +05:30
Kent Keirsey
67f2616d5a
Merge branch 'main' into revert-4923-revert-4914-feat/mix-cnet-t2iadapter 2023-11-06 07:34:51 -08:00
Kent Keirsey
f8f1740668 Set Defaults to 1 2023-11-06 07:11:16 -08:00
Kent Keirsey
e66d0f7372
Merge branch 'main' into feat/nodes/freeu 2023-11-06 05:39:58 -08:00
Kent Keirsey
546aaedbe4 Update pyproject.toml 2023-11-06 05:29:17 -08:00
Riccardo Giovanetti
7f650d00de translationBot(ui): update translation (Italian)
Currently translated at 97.7% (1191 of 1219 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-11-05 11:12:33 -08:00
Riccardo Giovanetti
4e6b579526 translationBot(ui): update translation (Italian)
Currently translated at 97.6% (1190 of 1219 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-11-05 12:09:20 +11:00
Alexander Eichhorn
6334c4adf5 translationBot(ui): update translation (German)
Currently translated at 53.8% (657 of 1219 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-11-05 12:09:20 +11:00
Lincoln Stein
66b2366efc
Remove LowRA from Initial Models (#5016)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Removing LowRA from the initial models as it's been deleted from
CivitAI.

## Related Tickets & Documents

https://discord.com/channels/1020123559063990373/1168415065205112872


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-11-04 17:14:24 -04:00
Lincoln Stein
e147379aa7
Merge branch 'main' into main 2023-11-04 17:05:01 -04:00
Lincoln Stein
5a821384d3 fix model-not-found error 2023-11-04 08:24:01 -07:00
Ryan Dick
2bbba323c6
Fix model cache gc.collect() condition. (#5036)
See
https://github.com/invoke-ai/InvokeAI/pull/5034#discussion_r1382381733
2023-11-04 09:09:46 -04:00
Ryan Dick
aa02ebf8f5 Fix model cache gc.collect() condition. 2023-11-04 08:52:10 -04:00
Ryan Dick
fb3d0c4b12 Fix bug in model cache reference count checking. 2023-11-03 13:50:40 -07:00
Ryan Dick
8488ab0134 Reduce frequency that we call gc.collect() in the model cache. 2023-11-03 13:50:40 -07:00
Ryan Dick
875231ed3d Add reminder to clean up our model cache clearing logic. 2023-11-03 13:50:40 -07:00
Ryan Dick
43b300498f Remove explicit gc.collect() after transferring models from device to CPU. I'm not sure why this was there in the first place, but it was taking a significant amount of time (up to ~1sec in my tests). 2023-11-03 13:50:40 -07:00
psychedelicious
5b420653f9 feat(ui): show placeholder in refiner collapse instead of hiding it, if no refiner models installed 2023-11-03 14:15:24 +11:00
psychedelicious
3d32ce2b58 fix(ui): hide refiner collapse if refiner not installed 2023-11-03 14:15:24 +11:00
Ryan Dick
e391f3c9a8 Skip torch.nn.Embedding.reset_parameters(...) when loading a text encoder model. 2023-11-02 19:41:33 -07:00
Ryan Dick
6e7a3f0546 (minor) Fix static checks and typo. 2023-11-02 19:20:37 -07:00
Ryan Dick
4a683cc669 Add a app config parameter to control the ModelCache logging behavior. 2023-11-02 19:20:37 -07:00
Ryan Dick
3781e56e57 Add log_memory_usage param to ModelCache. 2023-11-02 19:20:37 -07:00
Ryan Dick
267e709ba2 (minor) Fix int literal typing error. 2023-11-02 19:20:37 -07:00
Ryan Dick
8ff49109a8 Update get_pretty_snapshot_diff(...) to handle None-snapshots. 2023-11-02 19:20:37 -07:00
Ryan Dick
bac2a757e8 Replace deepcopy with a pickle roundtrip in apply_ti(...) to improve speed. 2023-11-02 19:05:24 -07:00
Ryan Dick
a4a7b601a1
Improve LoRA patching speed (#5017)
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Have you updated all relevant documentation?
- [x] Yes
- [ ] No

## Description
Improve LoRA patching speed with the following changes:
- Calculate LoRA layer weights on the same device as the target model.
Prior to this change, weights were always calculated on the CPU. If the
target model is on the GPU, this significantly improves performance.
- Move models to their target devices _before_ applying LoRA patches.
- Improve the ordering of Tensor copy / cast operations (see the sketch below).
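
A rough sketch of the core idea behind the first change (names like `lora_up`/`lora_down` are illustrative, not InvokeAI's actual attributes):

```
import torch

def apply_lora_patch(module: torch.nn.Module, lora_up: torch.Tensor,
                     lora_down: torch.Tensor, scale: float) -> None:
    # Compute the weight delta on the target module's device and dtype.
    # Previously the matmul always ran on the CPU and the result was then
    # copied to the GPU; computing on-device avoids the slow round trip.
    device, dtype = module.weight.device, module.weight.dtype
    delta = (lora_up.to(device=device, dtype=dtype)
             @ lora_down.to(device=device, dtype=dtype)) * scale
    with torch.no_grad():
        module.weight += delta
```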

## QA Instructions, Screenshots, Recordings

Tests:

- [x] Tested with a CUDA GPU, saw savings of ~10secs with 1 LoRA applied
to an SDXL model.
- [x] No regression in CPU-only environment
- [ ] No regression (and possible improvement?) on Mac with MPS.
- [x] Weights get restored correctly after using a LoRA
- [x] Stacking multiple LoRAs

Please hammer away with a variety of LoRAs in case there is some edge
case that I've missed.

## Added/updated tests?

- [x] Yes (Added some minimal unit tests. Definitely would benefit from
more, but it's a step in the right direction.)
- [ ] No
2023-11-02 13:34:10 -04:00
Ryan Dick
fa7f6a6a10 Further tidying of LoRA patching. Revert some changes that didn't end up being important under the constraint that calculations are done on the same device as the model. 2023-11-02 10:03:17 -07:00
Ryan Dick
e92b84955c Add minimal unit tests for ModelPatcher.apply_lora(...) 2023-11-02 10:03:17 -07:00
Ryan Dick
61b17c475a Add TODO note about improving _resolve_lora_key(...). 2023-11-02 10:03:17 -07:00
Ryan Dick
379d68f595 Patch LoRA on device when model is already on device. 2023-11-02 10:03:17 -07:00
Ryan Dick
545c811bf1 Remove device and dtype members from LoRAModelRaw, they can too easily get out-of-sync with the underlying layer states. 2023-11-02 10:03:17 -07:00
Ryan Dick
2ba5b44ec4 Remove unused _lora_forward_hook(...). 2023-11-02 10:03:17 -07:00
Ryan Dick
7f4ce518b7 auto-format lora.py 2023-11-02 10:03:17 -07:00
psychedelicious
6c66adcd90 fix(ui): show collapse labels only if not default value 2023-11-01 14:41:13 +11:00
Millun Atluri
584b513038
Remove LowRA from Initial Models 2023-11-01 08:55:06 +11:00
nemuruibai
94055ae54a translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1215 of 1217 strings)

Co-authored-by: nemuruibai <nemuruibai@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-11-01 05:31:16 +11:00
Alexander Eichhorn
a79c86b901 translationBot(ui): update translation (German)
Currently translated at 51.7% (630 of 1217 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-11-01 05:31:16 +11:00