Commit Graph

283 Commits

Author SHA1 Message Date
Damian Stewart
5299324321 workaround for pytorch bug, fixes #2418 2023-01-30 18:45:53 +01:00
Jonathan
07e03b31b7
Update --hires_fix (#2414)
* Update --hires_fix

Change `--hires_fix` to calculate initial width and height based on the model's resolution (if available) and with a minimum size.
2023-01-29 12:27:01 -06:00
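The first-pass dimension calculation described above might be sketched like this (a hypothetical helper: the function name, the multiple-of-64 rounding, and the exact minimum size are assumptions, not the commit's actual code):

```python
def initial_dimensions(target_w: int, target_h: int,
                       model_res: int = 512, min_size: int = 64) -> tuple[int, int]:
    """Scale the requested size down to the model's native resolution for the
    first pass of --hires_fix, preserving aspect ratio and enforcing a floor."""
    scale = model_res / max(target_w, target_h)
    if scale >= 1.0:
        # already at or below the model's resolution; no first-pass shrink needed
        return target_w, target_h
    # round down to a multiple of 64, but never below the minimum size
    w = max(min_size, int(target_w * scale) // 64 * 64)
    h = max(min_size, int(target_h * scale) // 64 * 64)
    return w, h
```

The second (hires) pass would then upscale from these initial dimensions to the requested target size.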
Lincoln Stein
12a29bfbc0
Merge branch 'main' into install/change-script-locations 2023-01-26 17:10:33 -05:00
Lincoln Stein
8e4c044ca2 clean up tab/cursor behavior in textual inversion txt gui 2023-01-26 15:18:28 -05:00
Lincoln Stein
9dc3832b9b clean up merge_models 2023-01-26 15:10:16 -05:00
Lincoln Stein
d3a469d136 fix location of textual_inversion script 2023-01-26 11:56:23 -05:00
Lincoln Stein
cbd967cbc4 add documentation caveat about location of HF cached models 2023-01-26 11:48:03 -05:00
Lincoln Stein
a49d546125 simplified code a bit 2023-01-26 09:46:34 -05:00
Lincoln Stein
288e31fc60 remove dependency on original clipseg library
- This replaces the original clipseg library with the transformers
  version from HuggingFace.
- This should make it possible to register InvokeAI at PyPi and do
  a fully automated pip-based install.
- Minor regression: it is no longer possible to specify which device
  the clipseg model will be loaded onto; it now resides on the CPU.
  However, performance is more than acceptable.
2023-01-26 09:35:16 -05:00
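Loading CLIPSeg through the transformers port might look like the sketch below (the checkpoint name `CIDAS/clipseg-rd64-refined` and the deferred import are illustrative choices, not necessarily what this commit does):

```python
def load_clipseg(device: str = "cpu"):
    """Load the HuggingFace transformers port of CLIPSeg instead of the
    original clipseg library. Defaults to the CPU, matching the behavior
    described in the commit message."""
    # deferred import so the dependency is only needed when txt2mask is used
    from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

    processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
    model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
    return processor, model.to(device)
```

Because transformers is an ordinary PyPI dependency, declaring it in project metadata is enough for a fully pip-based install.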
Lincoln Stein
61403fe306 fix second conflict in CLI.py 2023-01-24 14:21:21 -05:00
Lincoln Stein
b2f288d6ec fix conflict in CLI.py 2023-01-24 14:20:40 -05:00
Lincoln Stein
079ec4cb5c
Merge branch 'main' into feat/import-with-vae 2023-01-24 13:16:00 -05:00
Lincoln Stein
00839d02ab
Merge branch 'main' into lstein-improve-ti-frontend 2023-01-24 11:53:03 -05:00
Lincoln Stein
ce52d0c42b
Merge branch 'main' into feat/import-with-vae 2023-01-24 11:52:40 -05:00
Lincoln Stein
7473d814f5 remove original setup.py 2023-01-24 09:11:05 -05:00
Lincoln Stein
b2c30c2093
Merge branch 'main' into bugfix/embed-loading-messages 2023-01-24 09:08:13 -05:00
Lincoln Stein
a7048eea5f
Merge branch 'main' into feat/import-with-vae 2023-01-24 09:07:41 -05:00
Lincoln Stein
9f32daab2d
Merge branch 'main' into lstein-import-safetensors 2023-01-23 21:58:07 -05:00
Lincoln Stein
10c3afef17
Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-23 21:15:12 -05:00
Lincoln Stein
98e9721101 correct fail-to-resume error
- applied https://github.com/huggingface/diffusers/pull/2072 to fix
  an error in the epoch calculation that caused the script not to
  resume from the latest checkpoint when asked to.
2023-01-23 21:04:07 -05:00
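The gist of that kind of fix is deriving the resume position from the optimizer step count recorded in the checkpoint; a simplified sketch (function and variable names are mine, not taken from the diffusers PR):

```python
def resume_position(global_step: int, updates_per_epoch: int) -> tuple[int, int]:
    """Given the optimizer step recorded in the latest checkpoint, compute
    which epoch to resume in and how many steps of that epoch to skip."""
    first_epoch = global_step // updates_per_epoch
    steps_to_skip = global_step % updates_per_epoch
    return first_epoch, steps_to_skip
```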
Lincoln Stein
e18beaff9c
Merge branch 'main' into feat/merge-script 2023-01-23 09:05:38 -05:00
Kevin Turner
d7554b01fd
fix typo in prompt 2023-01-23 00:24:06 -08:00
Kevin Turner
70f8793700
Merge branch 'main' into feat/import-with-vae 2023-01-23 00:17:46 -08:00
Kevin Turner
0d4e6cbff5
Merge branch 'main' into bugfix/embed-loading-messages 2023-01-23 00:12:33 -08:00
Lincoln Stein
ffcc5ad795 conversion script uses invokeai models cache by default 2023-01-23 00:35:16 -05:00
Lincoln Stein
48deb3e49d add model merging documentation and launcher script menu entries 2023-01-23 00:20:28 -05:00
Lincoln Stein
6c31225d19 create small module for merge importation logic 2023-01-22 18:07:53 -05:00
Lincoln Stein
96a900d1fe correctly import diffusers models by their local path
- Corrects a bug in which the local path was treated as a repo_id
2023-01-20 20:13:43 -05:00
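The distinction such a fix relies on, treating an existing directory as a local diffusers model rather than a repo_id, can be sketched as follows (a hypothetical helper, not the actual model_manager code):

```python
import os

def classify_model_source(name_or_path: str) -> dict:
    """Return a local path entry if the argument is an existing directory;
    otherwise treat it as a HuggingFace repo_id."""
    if os.path.isdir(name_or_path):
        return {"path": os.path.abspath(name_or_path)}
    return {"repo_id": name_or_path}
```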
mauwii
5afb63e41b
replace legacy setup.py with pyproject.toml
Other changes which were required:
- move configure_invokeai.py into ldm.invoke
- update files which imported configure_invokeai to use new location:
    - ldm/invoke/CLI.py
    - scripts/load_models.py
    - scripts/preload_models.py
- update test-invoke-pip.yml:
    - remove pr type "converted_to_draft"
    - remove reference to dev/diffusers
    - remove no more needed requirements from matrix
    - add pytorch to matrix
    - install via `pip3 install --use-pep517 .`
    - use the created executables
        - this should also fix configure_invoke not being executed on Windows
To install, use `pip install --use-pep517 -e .` (the `-e` is optional)
2023-01-20 23:30:24 +01:00
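A minimal `pyproject.toml` of the kind described might look like the skeleton below (the backend choice, version number, and script entry are illustrative assumptions, not the actual file):

```toml
[build-system]
# PEP 517 build backend, replacing the legacy setup.py
requires = ["setuptools>=64", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "InvokeAI"
version = "2.3.0"  # hypothetical

[project.scripts]
# hypothetical entry point; creates the executable used by the CI workflow
configure_invokeai = "ldm.invoke.configure_invokeai:main"
```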
Lincoln Stein
2cf825b169
Merge branch 'main' into bugfix/embed-loading-messages 2023-01-20 17:14:46 -05:00
Lincoln Stein
9e3c947cd3
Merge branch 'main' into lstein-improve-ti-frontend 2023-01-20 17:01:09 -05:00
Lincoln Stein
02ce602a38
Merge branch 'main' into feat/disable-xformers 2023-01-19 18:45:59 -05:00
Lincoln Stein
f0010919f2
Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-19 18:03:36 -05:00
Lincoln Stein
895505976e [bugfix] suppress extraneous warning messages generated by diffusers
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:

1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about fp16 models being stored on the CPU (we are not running
   them on the CPU, just caching them in CPU RAM)
2023-01-19 16:49:40 -05:00
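One common way to achieve this kind of suppression (a sketch, not necessarily the mechanism this commit uses) is to raise the diffusers logger threshold and filter specific warning texts by pattern:

```python
import logging
import warnings

# Quiet the diffusers logger so informational nags are dropped
logging.getLogger("diffusers").setLevel(logging.ERROR)

# Filter warnings matching a message regex; the pattern here is illustrative
warnings.filterwarnings("ignore", message=".*safety_checker.*")
```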
Lincoln Stein
171f4aa71b [feat] Provide option to disable xformers from command line
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.

--xformers will enable support, but this is already the
default.
2023-01-19 16:16:35 -05:00
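Paired enable/disable flags with an on-by-default value can be expressed with argparse roughly like this (a sketch; InvokeAI's actual argument wiring differs):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    # --xformers is the default; --no-xformers flips the same destination off
    parser.add_argument("--xformers", action="store_true", dest="xformers",
                        default=True,
                        help="enable memory-efficient attention (default)")
    parser.add_argument("--no-xformers", action="store_false", dest="xformers",
                        help="disable memory-efficient attention")
    return parser
```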
Lincoln Stein
775e1a21c7 improve embed trigger token not found error
- Now indicates that the trigger is *neither* a huggingface concept
  nor the trigger of a locally loaded embed.
2023-01-19 15:46:58 -05:00
Lincoln Stein
3c3d893b9d improve status reporting when loading local and remote embeddings
- During trigger token processing, emit better status messages indicating
  which triggers were found.
- Suppress the message "<token> is not known to HuggingFace library"
  when the token is in fact a local embed.
2023-01-19 15:43:52 -05:00
Lincoln Stein
33a5c83c74 during ckpt->diffusers tell user when custom autoencoder can't be loaded
- When a ckpt or safetensors file uses an external autoencoder and we
  don't know which diffusers model corresponds to it (if any!), we
  fall back to using stabilityai/sd-vae-ft-mse
- This commit improves error reporting so that the user knows what is happening.
2023-01-19 12:05:49 -05:00
Lincoln Stein
7ee0edcb9e when converting a ckpt/safetensors model, preserve vae in diffusers config
- After successfully converting a ckpt file to diffusers, model_manager
  will attempt to add an equivalent 'vae' entry to the resulting
  diffusers stanza.

- This is a bit of a hack, as it relies on a hard-coded dictionary
  to map ckpt VAEs to diffusers VAEs. The correct way to do this
  would be to convert the VAE to a diffusers model and then point
  to that. But since (almost) all models are using vae-ft-mse-840000-ema-pruned,
  I did it the easy way first and will work on the better solution later.
2023-01-19 11:02:49 -05:00
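The hard-coded mapping plus fallback described in the last two commits might be sketched as follows (the dictionary contents and function name are illustrative; only the vae-ft-mse default comes from the commit messages):

```python
# Illustrative mapping of known ckpt-era VAE filenames to diffusers repo ids
CKPT_VAE_TO_DIFFUSERS = {
    "vae-ft-mse-840000-ema-pruned.ckpt": "stabilityai/sd-vae-ft-mse",
}

DEFAULT_VAE = "stabilityai/sd-vae-ft-mse"

def diffusers_vae_for(ckpt_vae_name: str) -> str:
    """Look up the diffusers equivalent of a ckpt VAE; warn and fall back
    to the default when the VAE is unrecognized."""
    repo_id = CKPT_VAE_TO_DIFFUSERS.get(ckpt_vae_name)
    if repo_id is None:
        print(f"** no diffusers equivalent known for {ckpt_vae_name}; "
              f"falling back to {DEFAULT_VAE}")
        repo_id = DEFAULT_VAE
    return repo_id
```

Converting the VAE itself to a diffusers model would remove the need for the lookup table entirely, which is the better solution the commit message defers.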
Lincoln Stein
7bd2220a24 fix two bugs in model import
1. !import_model did not allow user to specify VAE file. This is now fixed.
2. !del_model did not offer the user the opportunity to delete the underlying
   weights file or diffusers directory. This is now fixed.
2023-01-19 01:30:58 -05:00
Lincoln Stein
ab675af264
Merge branch 'main' into lstein-improve-ti-frontend 2023-01-18 22:22:30 -05:00
Daya Adianto
be58a6bfbc
Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-19 10:21:06 +07:00
Daya Adianto
5a40aadbee
Ensure free_gpu_mem option is passed into the generator (#2326) 2023-01-19 09:57:03 +07:00
Lincoln Stein
e11f15cf78 Merge branch 'main' into lstein-import-safetensors 2023-01-18 17:09:48 -05:00
Lincoln Stein
a2bdc8b579 Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors 2023-01-18 12:16:06 -05:00
Lincoln Stein
1c62ae461e fix vae safetensor loading 2023-01-18 12:15:57 -05:00
Lincoln Stein
b9ab9ffb4a
Merge branch 'main' into lstein-import-safetensors 2023-01-18 10:58:38 -05:00
Daya Adianto
f3e952ecf0
Use global_cache_dir calls properly 2023-01-18 21:06:01 +07:00
Daya Adianto
aa4e8d8cf3
Migrate legacy models (pre-2.3.0) to 🤗 cache directory if exists 2023-01-18 21:02:31 +07:00
Daya Adianto
2282e681f7
Store & load 🤗 models at XDG_CACHE_HOME if HF_HOME is not set
This commit allows InvokeAI to store & load 🤗 models at a location
set by `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set.

Reference: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
2023-01-18 19:32:09 +07:00
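The documented precedence (`HF_HOME` first, then `XDG_CACHE_HOME/huggingface`, then `~/.cache/huggingface`) can be resolved like this; a sketch of the rule, not InvokeAI's exact code:

```python
import os
from pathlib import Path

def hf_cache_home() -> Path:
    """Resolve the 🤗 cache root: HF_HOME wins outright; otherwise fall back
    to $XDG_CACHE_HOME/huggingface, defaulting XDG_CACHE_HOME to ~/.cache."""
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return Path(hf_home)
    xdg = os.environ.get("XDG_CACHE_HOME") or str(Path.home() / ".cache")
    return Path(xdg) / "huggingface"
```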