Commit Graph

784 Commits

Kevin Turner
31fd285c82 generate: fix import error 2022-12-12 16:17:46 -08:00
Kevin Turner
0b48f2e801 fix --safety_checker arg parsing
and add note to diffusers loader about where safety checker gets called
2022-12-12 16:17:10 -08:00
Kevin Turner
e32b82a480 Merge remote-tracking branch 'origin/main' into dev/diffusers
# Conflicts:
#	ldm/invoke/generator/inpaint.py
#	ldm/invoke/generator/txt2img2img.py
2022-12-12 11:17:25 -08:00
Lincoln Stein
7314f1a862
add --karras_max option to invoke.py command line (#1762)
This addresses the image regression reported in #1754
2022-12-12 13:16:15 -05:00
rmagur1203
f4e7383490
Load model in inpaint when using free_gpu_mem option (#1938)
* Load model in inpaint when using free_gpu_mem option

* Passing free_gpu_mem option to inpaint generator
2022-12-12 09:14:30 -05:00
rmagur1203
96a12099ed
Fix the mistake of not importing the gc (#1939) 2022-12-12 09:14:09 -05:00
Kevin Turner
1cae089889 Merge remote-tracking branch 'origin/main' into dev/diffusers
# Conflicts:
#	environments-and-requirements/requirements-base.txt
#	ldm/invoke/generator/txt2img.py
#	ldm/invoke/generator/txt2img2img.py
2022-12-11 11:23:52 -08:00
rmagur1203
bd0c0d77d2
Reduce more memories on free_gpu_mem option (#1915)
* Enhance free_gpu_mem option
Unload cond_stage_model when the free_gpu_mem option is set

* Enhance free_gpu_mem option
Unload cond_stage_model when the free_gpu_mem option is set
2022-12-11 13:49:55 -05:00
Lincoln Stein
f745f78cb3
correct bug when trying to enhance JPG images (#1928)
This fix was authored by @mebelz and is reissued here to base it on
`main`.
2022-12-11 13:48:47 -05:00
Damian Stewart
9f855a358a
fix for crash with inpainting model introduced by #1866 (#1922)
* fix for crash using inpainting model

* prevent crash due to invalid attention_maps_saver
2022-12-11 13:48:12 -05:00
Lincoln Stein
e7144055d1 make webGUI model changing work again
- Using relative root addresses was causing problems when the
  current working directory was changed after start time.
- This commit makes the root address absolute at start time, such
  that changing the working directory later on doesn't break anything.
2022-12-11 11:19:22 -05:00
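The fix described above can be sketched in a few lines; the helper name is hypothetical:

```python
import os

def resolve_root(root_arg: str) -> str:
    # Normalize the root to an absolute path once, at start time.
    # A relative root breaks as soon as the working directory is
    # changed later in the process, since every lookup would then
    # resolve against the new directory.
    return os.path.abspath(os.path.expanduser(root_arg))
```

A root resolved this way stays valid no matter what `os.chdir()` calls happen afterwards.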
Lincoln Stein
0439b51a26
Simple Installer for Unified Directory Structure, Initial Implementation (#1819)
* partially working simple installer

* works on linux

* fix linux requirements files

* read root environment variable in right place

* fix cat invokeai.init in test workflows

* fix classical cp error in test-invoke-pip.yml

* respect --root argument now

* untested bat installers added

* windows install.bat now working

fix logic to find frontend files

* rename simple_install to "installer"

1. simple_install => 'installer'
2. source and binary install directories are removed

* enable update scripts to update requirements

- Also pin requirements to known working commits.
- This may be a breaking change; exercise with caution
- No functional testing performed yet!

* update docs and installation requirements

NOTE: This may be a breaking commit! Due to the way the installer
works, I have to push to a public branch in order to do full end-to-end
testing.

- Updated installation docs, removing binary and source installers and
  substituting the "simple" unified installer.
- Pin requirements for the "http:" downloads to known working commits.
- Removed as much as possible the invoke-ai forks of others' repos.

* fix directory path for installer

* correct requirement/environment errors

* exclude zip files in .gitignore

* possible fix for dockerbuild

* ready for torture testing

- final Windows bat file tweaks
- copy environments-and-requirements to the runtime directory so that
  the `update.sh` script can run.

  This is not ideal, since we lose control over the
  requirements. It would be better for the update script to pull the
  proper updated requirements file from the repository.

* allow update.sh/update.bat to install arbitrary InvokeAI versions

- Can pass the zip file path to any InvokeAI release, branch, commit or tag,
  and the installer will try to install it.
- Updated documentation
- Added Linux Python install hints.

* use binary installer's :err_exit function

* use diffusers 0.10.0

* added logic for CPPFLAGS on mac

* improve windows install documentation

- added information on a couple of gotchas I experienced during
  windows installation, including DLL loading errors experienced
  when Visual Studio C++ Redistributable was not present.

* tagged to pull from 2.2.4-rc1

- also fix an error where the shell window closed immediately if a
  suitable Python was not found

Co-authored-by: mauwii <Mauwii@outlook.de>
2022-12-11 00:37:08 -05:00
Kevin Turner
520c17ab86 diffusers(AddsMaskedGuidance): partial fix for k-schedulers
Prevents them from crashing, but results are still hot garbage.
2022-12-10 21:19:32 -08:00
Kevin Turner
cd358c40ad diffusers: stopgap fix for attention_maps_callback crash after recent merge 2022-12-10 19:48:15 -08:00
Kevin Turner
061c5369a2 Merge remote-tracking branch 'origin/main' into dev/diffusers 2022-12-10 19:18:00 -08:00
Damian Stewart
8cbb50c204 avoid further crash under low-memory conditions 2022-12-10 15:32:11 -05:00
blessedcoolant
12a8d7fc14 Fix crash introduced in #1866 2022-12-10 15:32:11 -05:00
Kevin Turner
63532226a5 Merge remote-tracking branch 'origin/main' into dev/diffusers
# Conflicts:
#	backend/invoke_ai_web_server.py
#	ldm/generate.py
#	ldm/invoke/CLI.py
#	ldm/invoke/generator/base.py
#	ldm/invoke/generator/txt2img.py
#	ldm/models/diffusion/cross_attention_control.py
#	ldm/modules/attention.py
2022-12-10 08:43:37 -08:00
Kevin Turner
66d32b79b7 diffusers: work better with more models.
fixed relative path problem with local models.

fixed models on hub not always having a `fp16` branch.
2022-12-10 08:29:12 -08:00
Kevin Turner
50c48cffc7 diffusers: make masked img2img behave better with multi-step schedulers
re-randomizing the noise each step was confusing them.
2022-12-10 08:27:46 -08:00
Kevin Turner
1a67836012 diffusers: use xformers when available
diffusers no longer auto-enables this as of 0.10.2.
2022-12-10 08:26:24 -08:00
Damian Stewart
786b8878d6
Save and display per-token attention maps (#1866)
* attention maps saving to /tmp

* tidy up diffusers branch backporting of cross attention refactoring

* base64-encoding the attention maps image for generationResult

* cleanup/refactor conditioning.py

* attention maps and tokens being sent to web UI

* attention maps: restrict count to actual token count and improve robustness

* add argument type hint to image_to_dataURL function

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

Co-authored-by: damian <git@damianstewart.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-12-10 15:57:41 +01:00
Kevin Turner
a3e41d63ec Merge remote-tracking branch 'origin/main' into dev/diffusers
# Conflicts:
#	.github/workflows/test-invoke-conda.yml
#	.github/workflows/test-invoke-pip.yml
#	environments-and-requirements/requirements-base.txt
2022-12-09 10:34:29 -08:00
Kevin Turner
c28f56daec diffusers(ModelCache): stopgap to make from_cpu compatible with diffusers 2022-12-09 10:23:39 -08:00
wfng92
d2026d0509 Fix error when init_mask=None and invert_mask=True
In the event that no `init_mask` is given and `invert_mask` is set to True, the script raises the following error:

```bash
AttributeError: 'NoneType' object has no attribute 'mode'
```

The new implementation will only run inversion when both variables are valid.
2022-12-08 22:37:11 -05:00
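The guard described above can be sketched as follows (the helper name is hypothetical; `PIL.ImageOps.invert` is the call that raises on a `None` mask):

```python
from PIL import ImageOps

def prepare_mask(init_mask, invert_mask):
    # Only invert when a mask image actually exists; passing None to
    # ImageOps.invert is what raised the AttributeError above.
    if init_mask is not None and invert_mask:
        return ImageOps.invert(init_mask)
    return init_mask
```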
Jonathan
06369d07c0 Update CLI.py 2022-12-08 22:34:49 -05:00
Jonathan
4e61069821 Update embiggen.py 2022-12-08 22:34:49 -05:00
Kevin Turner
9199d698f8 diffusers: upgrade to diffusers 0.10, add Heun scheduler 2022-12-08 13:02:47 -08:00
Kevin Turner
30a8d4c2b3 diffusers: enable DPMSolver++ scheduler 2022-12-07 19:00:23 -08:00
Kevin Turner
9bcb3b1bf7 fixup! refactor(diffusers): reduce some code duplication amongst the different tasks 2022-12-07 18:26:08 -08:00
Kevin Turner
821c7df240 refactor(diffusers): reduce some code duplication amongst the different tasks 2022-12-07 18:20:56 -08:00
Daya Adianto
d7ba041007 Enable force free GPU memory in img2img 2022-12-07 19:25:21 -05:00
Kevin Turner
0390e6740d
Merge branch 'main' into dev/diffusers 2022-12-07 11:18:44 -08:00
spezialspezial
42f5bd4e12 Account for flat models
Merged models from the auto11 merge board are flat for some reason. The current behavior of invoke is not changed by this modification.
2022-12-07 12:11:37 -05:00
Kevin Turner
04a5bc938e diffusers: txt2img2img (hires_fix)
with so much slicing and dicing of pipeline methods to stitch them together
2022-12-06 22:28:09 -08:00
Kevin Turner
bf6376417a diffusers: reset num_vectors_per_token
sync with 44a0055571
2022-12-06 15:52:49 -08:00
Kevin Turner
a0eb30a9b9 Merge remote-tracking branch 'origin/main' into dev/diffusers 2022-12-06 15:50:40 -08:00
blessedcoolant
1d1988683b Fix Embedding Dir not working 2022-12-05 22:24:31 -05:00
Kevin Turner
5c7e6751e0 inpainting for the normal model. I think it works this time. 2022-12-05 12:37:20 -08:00
Damian Stewart
69d42762de refactor common CrossAttention stuff into a mixin so that the old ldm code can still work if necessary 2022-12-05 20:00:18 +01:00
Damian Stewart
c6f31e5f36 fix off-by-one bug in cross-attention-control (#1774)
Prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos>, making a default prompt length of <bos> + 75 prompt tokens + <eos>. The .swap() code was failing to take the column for <bos> at index 0 into account. The changes here do that, and also add extra handling for a single <eos> (which may be redundant but is included for completeness).

Based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is that over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. A change to token n-1 has flow-on effects on how the [1x4x64x64] latent tensor is modified by all the tokens after it, and, as the next step is processed, all the tokens before it as well. Intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. So even though the token at n-1 was being edited when the user actually wanted to edit the token at n, it still had some non-negligible effect, in roughly the right direction, often enough that it seemed to be working properly.
2022-12-05 19:44:05 +01:00
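The off-by-one described above can be illustrated with a minimal index-mapping sketch (names hypothetical):

```python
def swap_columns(token_start: int, token_end: int) -> range:
    # Attention columns include <bos> at column 0, so prompt-token
    # index n lives at column n + 1. The original .swap() code used
    # the token indices directly, landing one column too early.
    return range(token_start + 1, token_end + 1)
```

With this mapping, editing prompt tokens [0, 2) touches attention columns 1 and 2, never the <bos> column.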
Kevin Turner
f3570d8344 inpainting for the normal model [WIP]
This seems to be performing well until the LAST STEP, at which point it dissolves to confetti.
2022-12-04 23:36:12 -08:00
Kevin Turner
b2664e807e fix debug_image to not crash with non-RGB images. 2022-12-04 20:12:47 -08:00
Kevin Turner
875312080d diffusers support for the inpainting model 2022-12-04 20:12:04 -08:00
spezialspezial
1c0247d58a Eventually update APP_VERSION to 2.2.3
Not sure what the procedure is for the version number. Is this supposed to match every git tag or just major versions? Same question for setup.py
2022-12-04 14:33:16 -05:00
Damian Stewart
ff42027a00 add cross-attention control support to diffusers (fails on MPS)
For unknown reasons MPS produces garbage output with .swap(). Use
--always_use_cpu arg to invoke.py for now to test this code on MPS.
2022-12-04 16:07:54 +01:00
Damian Stewart
f48706efee add always_use_cpu arg to bypass MPS 2022-12-04 15:15:39 +01:00
Damian Stewart
d335f51e5f
fix off-by-one bug in cross-attention-control (#1774)
Prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos>, making a default prompt length of <bos> + 75 prompt tokens + <eos>. The .swap() code was failing to take the column for <bos> at index 0 into account. The changes here do that, and also add extra handling for a single <eos> (which may be redundant but is included for completeness).

Based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is that over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. A change to token n-1 has flow-on effects on how the [1x4x64x64] latent tensor is modified by all the tokens after it, and, as the next step is processed, all the tokens before it as well. Intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. So even though the token at n-1 was being edited when the user actually wanted to edit the token at n, it still had some non-negligible effect, in roughly the right direction, often enough that it seemed to be working properly.
2022-12-04 11:41:03 +01:00
Kevin Turner
e0495a7440 Merge remote-tracking branch 'origin/main' into dev/diffusers
# Conflicts:
#	scripts/configure_invokeai.py
2022-12-03 20:00:39 -08:00
Eugene Brodsky
6d6076d3c7 (config) fix permissions on configure_invokeai.py, improve documentation in globals.py comment 2022-12-03 14:33:21 -05:00