Commit Graph

3308 Commits

Lincoln Stein
884768c39d
Make sure --free_gpu_mem still works when using CKPT-based diffuser model (#2367)
This PR attempts to fix the `--free_gpu_mem` option, which stopped working
for CKPT-based diffusers models after #1583.

I noticed that after #1583, memory usage did not decrease after
generating an image even when the `--free_gpu_mem` option was enabled.
It turns out the option was not propagated into the `Generator`
instance, so generation always ran without the memory-saving procedure.

This PR is also related to #2326. Initially, I was trying to make
`--free_gpu_mem` work with the 🤗 diffusers model as well.
In the process, I noticed that InvokeAI would raise an exception when
`--free_gpu_mem` was enabled.
I quickly fixed that by ignoring the exception and printing a warning
message to the user's console.
2023-01-23 21:48:23 -05:00
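The fix amounts to threading the flag through to the generator; a minimal sketch under assumed names (the real `Generator` attributes and construction in InvokeAI differ in detail):

```python
import torch

class Generator:
    def __init__(self, model, free_gpu_mem: bool = False):
        self.model = model
        # Hypothetical attribute; the actual InvokeAI Generator field may be named differently.
        self.free_gpu_mem = free_gpu_mem

    def generate(self, prompt: str):
        image = self.model(prompt)
        if self.free_gpu_mem and torch.cuda.is_available():
            # Release cached CUDA blocks after each generation so the VRAM can be reclaimed.
            torch.cuda.empty_cache()
        return image

# The bug was that the CLI flag never reached the constructor, i.e. the equivalent of
#   generator = Generator(model, free_gpu_mem=opt.free_gpu_mem)
# was effectively called with the default False.
```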
Lincoln Stein
bc2194228e stability improvements
- provide full traceback when a model fails to load
- fix VAE record for VoxelArt; otherwise load fails
2023-01-23 21:40:27 -05:00
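A sketch of the first item, using a caller-supplied loader so it stays self-contained; the point is simply to surface `traceback.format_exc()` instead of a bare error string:

```python
import traceback

def try_load(model_name: str, loader):
    """Load a model, printing the full traceback on failure (illustrative only)."""
    try:
        return loader(model_name)
    except Exception:
        print(f"** An error occurred while loading model {model_name}:")
        print(traceback.format_exc())
        return None
```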
Lincoln Stein
10c3afef17
Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-23 21:15:12 -05:00
Lincoln Stein
98e9721101 correct fail-to-resume error
- applied https://github.com/huggingface/diffusers/pull/2072 to fix an
  error in the epoch calculation that prevented the script from resuming
  from the latest checkpoint when asked to.
2023-01-23 21:04:07 -05:00
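Roughly, resuming hinges on recovering the epoch and in-epoch offset from the saved global step; a hedged sketch of that arithmetic with illustrative values (not the exact upstream patch):

```python
import math

# Illustrative values; the real training script reads these from the checkpoint and dataloader.
global_step = 1500                 # optimizer updates completed before the interruption
batches_per_epoch = 400            # len(train_dataloader)
gradient_accumulation_steps = 2

# One optimizer update consumes `gradient_accumulation_steps` batches.
updates_per_epoch = math.ceil(batches_per_epoch / gradient_accumulation_steps)

first_epoch = global_step // updates_per_epoch
# Batches already consumed inside the partially finished epoch, to be skipped on resume.
resume_step = (global_step % updates_per_epoch) * gradient_accumulation_steps

print(first_epoch, resume_step)
```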
blessedcoolant
66babb2e81 Japanese Localization Build 2023-01-24 09:07:29 +13:00
blessedcoolant
31a967965b Add Japanese Localization 2023-01-24 09:07:29 +13:00
Katsuyuki-Karasawa
b9c9b947cd update Japanese translation 2023-01-24 09:07:29 +13:00
唐澤 克幸
1eee08a070 add Japanese Translation 2023-01-24 09:07:29 +13:00
Lincoln Stein
aca1b61413
[Feature] Add interactive diffusers model merger (#2388)
This PR adds `scripts/merge_fe.py`, which will merge any 2-3 diffusers
models registered in InvokeAI's `models.yaml`, producing a new merged
model that will be registered as well.

Currently this script will only work if all models to be merged are
known by their repo_ids. Local models, including those converted from
ckpt files, will cause a crash due to a bug in the diffusers
`checkpoint_merger.py` code. I have made a PR against
huggingface/diffusers which fixes this:
https://github.com/huggingface/diffusers/pull/2060
2023-01-23 09:27:05 -05:00
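For orientation, the diffusers checkpoint-merger community pipeline that the script builds on is used roughly as follows; the repo ids, interpolation mode, and alpha are placeholders, and the exact API may vary with the diffusers version:

```python
from diffusers import DiffusionPipeline

# Load the community "checkpoint_merger" pipeline around a base model (placeholder repo id).
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="checkpoint_merger",
)

# Merge two models known by repo_id; weights are interpolated according to the chosen mode.
merged = pipe.merge(
    ["CompVis/stable-diffusion-v1-4", "runwayml/stable-diffusion-v1-5"],
    interp="sigmoid",
    alpha=0.4,
)
merged.save_pretrained("merged-model")
```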
Lincoln Stein
e18beaff9c
Merge branch 'main' into feat/merge-script 2023-01-23 09:05:38 -05:00
Kevin Turner
d7554b01fd
fix typo in prompt 2023-01-23 00:24:06 -08:00
Kevin Turner
70f8793700
Merge branch 'main' into feat/import-with-vae 2023-01-23 00:17:46 -08:00
Kevin Turner
0d4e6cbff5
Merge branch 'main' into bugfix/embed-loading-messages 2023-01-23 00:12:33 -08:00
Kevin Turner
ea61bf2c94
[bugfix] ckpt conversion script respects cache in ~/invokeai/models (#2395) 2023-01-23 00:07:23 -08:00
Lincoln Stein
7dead7696c fixed setup.py to install the new scripts 2023-01-23 00:43:15 -05:00
Lincoln Stein
ffcc5ad795 conversion script uses invokeai models cache by default 2023-01-23 00:35:16 -05:00
Lincoln Stein
48deb3e49d add model merging documentation and launcher script menu entries 2023-01-23 00:20:28 -05:00
Lincoln Stein
6c31225d19 create small module for merge importation logic 2023-01-22 18:07:53 -05:00
Damian Stewart
c0610f7cb9 pass missing value 2023-01-22 18:19:06 +01:00
Damian Stewart
313b206ff8 squash float16/float32 mismatch on linux 2023-01-22 18:13:12 +01:00
Lincoln Stein
f0fe483915
Merge branch 'main' into feat/merge-script 2023-01-21 18:42:40 -05:00
Lincoln Stein
4ee8d104f0 working, but needs diffusers PR to be accepted 2023-01-21 18:39:13 -05:00
Kevin Turner
89791d91e8
fix: use pad_token for padding (#2381)
Stable Diffusion 2 does not use eos_token for padding.

Fixes #2378
2023-01-21 13:30:03 -08:00
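A hedged illustration of the distinction: when padding prompts for Stable Diffusion 2, the padded positions should carry `pad_token_id`, which is not the same token as `eos_token_id` (model id below is a placeholder):

```python
from transformers import CLIPTokenizer

# Placeholder repo id; any Stable Diffusion 2 checkpoint with a tokenizer subfolder behaves the same way.
tokenizer = CLIPTokenizer.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="tokenizer"
)

tokens = tokenizer("a cat", padding="max_length", max_length=tokenizer.model_max_length)

# For SD 2 these ids differ; padding with eos_token_id therefore produces wrong conditioning.
print("pad:", tokenizer.pad_token_id, "eos:", tokenizer.eos_token_id)
print("last padded id:", tokens["input_ids"][-1])
```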
Kevin Turner
87f3da92e9
Merge branch 'main' into fix/sd2-padding-token 2023-01-21 13:11:02 -08:00
Lincoln Stein
f169bb0020
fix long prompt weighting bug in ckpt codepath (#2382) 2023-01-21 15:14:14 -05:00
Damian Stewart
155efadec2
Merge branch 'main' into fix/sd2-padding-token 2023-01-21 21:05:40 +01:00
Damian Stewart
bffe199ad7 SwapCrossAttnProcessor working - tested on Mac CPU (MPS doesn't work) 2023-01-21 20:54:18 +01:00
Damian Stewart
0c2a511671 wip SwapCrossAttnProcessor 2023-01-21 18:07:36 +01:00
Damian Stewart
e94c8fa285 fix long prompt weighting bug in ckpt codepath 2023-01-21 12:08:21 +01:00
Lincoln Stein
b3363a934d
Update index.md (#2377) 2023-01-21 00:17:23 -05:00
Lincoln Stein
599c558c87
Merge branch 'main' into patch-1 2023-01-20 23:54:40 -05:00
Kevin Turner
d35ec3398d fix: use pad_token for padding
Stable Diffusion 2 does not use the eos_token for padding.
2023-01-20 19:25:20 -08:00
Lincoln Stein
96a900d1fe correctly import diffusers models by their local path
- Corrects a bug in which the local path was treated as a repo_id
2023-01-20 20:13:43 -05:00
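A minimal sketch of the distinction the fix draws, using a hypothetical helper rather than InvokeAI's actual model manager:

```python
from pathlib import Path

from diffusers import StableDiffusionPipeline

def load_diffusers_model(name_or_path: str) -> StableDiffusionPipeline:
    """Treat an existing directory as a local model; otherwise assume a Hugging Face repo_id."""
    candidate = Path(name_or_path).expanduser()
    if candidate.is_dir():
        return StableDiffusionPipeline.from_pretrained(candidate)
    return StableDiffusionPipeline.from_pretrained(name_or_path)
```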
Lincoln Stein
f00f7095f9
Add instructions for installing xFormers on Linux (#2360)
I've written up the install procedure for xFormers on Linux systems.

I need help with the Windows install; I don't know what the build
dependencies (compiler, etc.) are. This section of the docs is currently
empty.

Please see `docs/installation/070_INSTALL_XFORMERS.md`
2023-01-20 17:57:12 -05:00
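Once xFormers is installed, a quick way to confirm a diffusers pipeline picks it up looks roughly like this (model id is a placeholder; InvokeAI wires this up internally):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder model id
    torch_dtype=torch.float16,
).to("cuda")

# Raises if the xformers package is missing or was built against the wrong torch/CUDA.
pipe.enable_xformers_memory_efficient_attention()
```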
mauwii
d7217e3801
disable unstable CI tests for Windows runners
thereby enabling all PyTorch versions to verify installation
2023-01-20 23:30:25 +01:00
mauwii
fc5fdae562
update installation instructions 2023-01-20 23:30:25 +01:00
mauwii
a491644e56
fix dependencies/requirements 2023-01-20 23:30:24 +01:00
mauwii
ec2a509e01
make images in README.md compatible with PyPI
also add missing newlines before/after headings
2023-01-20 23:30:24 +01:00
mauwii
6a3a0af676
update test-invoke-pip.yml
- remove stable-diffusion-model from matrix
- add windows-cuda-11_6 and linux-cuda-11_6
- enable linux-cpu
- disable windows-cpu
- change step order
- remove job env
- set runner.os specific env
- install editable
- cache models folder
- remove `--model` and `--root` arguments from invoke command
2023-01-20 23:30:24 +01:00
mauwii
ef4b03289a
enable image generation step for Windows as well
- also remove leftover debug lines and a development branch leftover
2023-01-20 23:30:24 +01:00
mauwii
963b666844
fix memory issue on Windows runner
- use the CPU version, which is only 162.6 MB
- set `INVOKEAI_ROOT=C:\InvokeAI` on Windows runners
2023-01-20 23:30:24 +01:00
mauwii
5a788f8f73
fix test-invoke-pip.yml matrix 2023-01-20 23:30:24 +01:00
mauwii
5afb63e41b
replace legacy setup.py with pyproject.toml
other changes which were required:
- move configure_invokeai.py into ldm.invoke
- update files which imported configure_invokeai to use new location:
    - ldm/invoke/CLI.py
    - scripts/load_models.py
    - scripts/preload_models.py
- update test-invoke-pip.yml:
    - remove pr type "converted_to_draft"
    - remove reference to dev/diffusers
    - remove no more needed requirements from matrix
    - add pytorch to matrix
    - install via `pip3 install --use-pep517 .`
    - use the created executables
        - this should also fix configure_invoke not being executed on Windows
To install, use `pip install --use-pep517 -e .`, where `-e` is optional
2023-01-20 23:30:24 +01:00
Lincoln Stein
279ffcfe15
Merge branch 'main' into lstein/xformers-instructions 2023-01-20 17:29:39 -05:00
Lincoln Stein
9b73292fcb add pip install documentation for xformers 2023-01-20 17:28:14 -05:00
Lincoln Stein
67d91dc550 Merge branch 'bugfix/embed-loading-messages' of github.com:invoke-ai/InvokeAI into bugfix/embed-loading-messages 2023-01-20 17:16:50 -05:00
Lincoln Stein
a1c0818a08 ignore .DS_Store files when scanning Mac embeddings 2023-01-20 17:16:39 -05:00
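The scan in question amounts to skipping macOS metadata files while walking the embeddings folder; a minimal sketch with an illustrative directory:

```python
from pathlib import Path

embeddings_dir = Path("~/invokeai/embeddings").expanduser()  # illustrative location

for path in sorted(embeddings_dir.glob("*")):
    if path.name == ".DS_Store" or path.name.startswith("."):
        continue  # ignore Finder metadata and other hidden files
    print(f"loading embedding from {path}")
```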
Lincoln Stein
2cf825b169
Merge branch 'main' into bugfix/embed-loading-messages 2023-01-20 17:14:46 -05:00
Lincoln Stein
292b0d70d8 Merge branch 'lstein-improve-ti-frontend' of github.com:invoke-ai/InvokeAI into lstein-improve-ti-frontend 2023-01-20 17:14:08 -05:00
Lincoln Stein
c3aa3d48a0 ignore .DS_Store files when scanning Mac embeddings 2023-01-20 17:13:32 -05:00