079ec4cb5c
Merge branch 'main' into feat/import-with-vae
2023-01-24 13:16:00 -05:00
00839d02ab
Merge branch 'main' into lstein-improve-ti-frontend
2023-01-24 11:53:03 -05:00
ce52d0c42b
Merge branch 'main' into feat/import-with-vae
2023-01-24 11:52:40 -05:00
b2c30c2093
Merge branch 'main' into bugfix/embed-loading-messages
2023-01-24 09:08:13 -05:00
a7048eea5f
Merge branch 'main' into feat/import-with-vae
2023-01-24 09:07:41 -05:00
9f32daab2d
Merge branch 'main' into lstein-import-safetensors
2023-01-23 21:58:07 -05:00
bc2194228e
stability improvements
...
- provide a full traceback when a model fails to load
- fix the VAE record for VoxelArt; otherwise the load fails
2023-01-23 21:40:27 -05:00
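
The traceback behavior above amounts to catching the load failure and re-printing it in full; a minimal sketch, assuming a generic load_model callable rather than InvokeAI's actual model_manager code:

    import traceback

    def safe_load_model(load_model, model_name):
        # On failure, report the full traceback instead of a bare
        # message, so the user can see exactly where the load failed.
        try:
            return load_model(model_name)
        except Exception:
            print(f"** model {model_name} could not be loaded **")
            print(traceback.format_exc())
            return None
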
10c3afef17
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-23 21:15:12 -05:00
98e9721101
correct fail-to-resume error
...
- Applied https://github.com/huggingface/diffusers/pull/2072 to fix an
  error in the epoch calculation that caused the script not to resume
  from the latest checkpoint when asked to.
2023-01-23 21:04:07 -05:00
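
For context, a sketch of the resume arithmetic the referenced PR corrects, with made-up example values; the variable names follow the usual diffusers training-script conventions and this is not the exact diff:

    # Illustrative values: a run saved "checkpoint-1500" with these settings.
    gradient_accumulation_steps = 4
    num_update_steps_per_epoch = 100
    global_step = 1500  # optimizer updates completed at the checkpoint

    resume_global_step = global_step * gradient_accumulation_steps

    # Epochs must be counted in optimizer updates; dividing the raw batch
    # count (resume_global_step) here is the kind of error that made the
    # script restart from the wrong epoch.
    first_epoch = global_step // num_update_steps_per_epoch
    resume_step = resume_global_step % (
        num_update_steps_per_epoch * gradient_accumulation_steps)

    print(first_epoch, resume_step)  # -> 15 0
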
e18beaff9c
Merge branch 'main' into feat/merge-script
2023-01-23 09:05:38 -05:00
d7554b01fd
fix typo in prompt
2023-01-23 00:24:06 -08:00
70f8793700
Merge branch 'main' into feat/import-with-vae
2023-01-23 00:17:46 -08:00
0d4e6cbff5
Merge branch 'main' into bugfix/embed-loading-messages
2023-01-23 00:12:33 -08:00
ffcc5ad795
conversion script uses invokeai models cache by default
2023-01-23 00:35:16 -05:00
48deb3e49d
add model merging documentation and launcher script menu entries
2023-01-23 00:20:28 -05:00
6c31225d19
create small module for merge importation logic
2023-01-22 18:07:53 -05:00
87f3da92e9
Merge branch 'main' into fix/sd2-padding-token
2023-01-21 13:11:02 -08:00
e94c8fa285
fix long prompt weighting bug in ckpt codepath
2023-01-21 12:08:21 +01:00
d35ec3398d
fix: use pad_token for padding
...
Stable Diffusion does not use the eos_token for padding.
2023-01-20 19:25:20 -08:00
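
A sketch of the distinction, assuming the SD2 tokenizer shipped with stabilityai/stable-diffusion-2 (where pad_token and eos_token actually differ):

    from transformers import CLIPTokenizer

    tok = CLIPTokenizer.from_pretrained(
        "stabilityai/stable-diffusion-2", subfolder="tokenizer")

    # Manual padding must append tok.pad_token_id, not tok.eos_token_id.
    ids = tok("a photo of an astronaut", padding="max_length",
              max_length=tok.model_max_length).input_ids
    print(repr(tok.pad_token), repr(tok.eos_token))  # '!' vs '<|endoftext|>'
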
96a900d1fe
correctly import diffusers models by their local path
...
- Corrects a bug in which the local path was treated as a repo_id
2023-01-20 20:13:43 -05:00
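
A minimal sketch of the corrected dispatch, using diffusers' StableDiffusionPipeline; the function name and structure are illustrative, not InvokeAI's model_manager API:

    from pathlib import Path
    from diffusers import StableDiffusionPipeline

    def load_pipeline(name_or_path: str) -> StableDiffusionPipeline:
        p = Path(name_or_path)
        if p.is_dir():
            # A local diffusers directory: load straight from disk,
            # never treat the path as a HuggingFace repo_id.
            return StableDiffusionPipeline.from_pretrained(p)
        return StableDiffusionPipeline.from_pretrained(name_or_path)
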
67d91dc550
Merge branch 'bugfix/embed-loading-messages' of github.com:invoke-ai/InvokeAI into bugfix/embed-loading-messages
2023-01-20 17:16:50 -05:00
a1c0818a08
ignore .DS_Store files when scanning Mac embeddings
2023-01-20 17:16:39 -05:00
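
The filter itself is simple; a sketch, assuming a flat embeddings-directory scan:

    from pathlib import Path

    def embedding_files(embedding_dir: Path) -> list[Path]:
        # macOS drops .DS_Store files everywhere; skip them when scanning.
        return [p for p in embedding_dir.iterdir()
                if p.is_file() and p.name != ".DS_Store"]
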
2cf825b169
Merge branch 'main' into bugfix/embed-loading-messages
2023-01-20 17:14:46 -05:00
292b0d70d8
Merge branch 'lstein-improve-ti-frontend' of github.com:invoke-ai/InvokeAI into lstein-improve-ti-frontend
2023-01-20 17:14:08 -05:00
c3aa3d48a0
ignore .DS_Store files when scanning Mac embeddings
2023-01-20 17:13:32 -05:00
9e3c947cd3
Merge branch 'main' into lstein-improve-ti-frontend
2023-01-20 17:01:09 -05:00
195294e74f
sort models alphabetically
2023-01-20 15:17:54 -05:00
02ce602a38
Merge branch 'main' into feat/disable-xformers
2023-01-19 18:45:59 -05:00
f0010919f2
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-19 18:03:36 -05:00
895505976e
[bugfix] suppress extraneous warning messages generated by diffusers
...
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:
1. The warning that turning off the NSFW detector makes you an
   irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not
   running them on the CPU, just caching them in CPU RAM).
2023-01-19 16:49:40 -05:00
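
One way to silence such messages, using diffusers' logging helpers; the commit's exact mechanism may differ:

    from diffusers.utils import logging as diffusers_logging

    # Only errors get through; the NSFW-checker and fp16-on-CPU notices
    # are emitted at warning level and are therefore suppressed.
    diffusers_logging.set_verbosity_error()
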
171f4aa71b
[feat] Provide option to disable xformers from command line
...
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.
--xformers will enable support, but this is already the
default.
2023-01-19 16:16:35 -05:00
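
The paired --xformers/--no-xformers flags can be expressed with a single argparse option; a sketch (Python 3.9+), not InvokeAI's actual argument parser:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--xformers", action=argparse.BooleanOptionalAction, default=True,
        help="enable/disable memory-efficient attention via xformers")

    args = parser.parse_args(["--no-xformers"])
    print(args.xformers)  # -> False
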
775e1a21c7
improve embed trigger token not found error
...
- Now indicates that the trigger is *neither* a HuggingFace concept
  *nor* the trigger of a locally loaded embed.
2023-01-19 15:46:58 -05:00
3c3d893b9d
improve status reporting when loading local and remote embeddings
...
- During trigger token processing, emit better status messages indicating
  which triggers were found.
- Suppress the message "<token> is not known to the HuggingFace library"
  when the token is in fact a local embed.
2023-01-19 15:43:52 -05:00
33a5c83c74
during ckpt->diffusers conversion, tell user when a custom autoencoder can't be loaded
...
- When a ckpt or safetensors file uses an external autoencoder and we
  don't know which diffusers model corresponds to it (if any!), we
  fall back to stabilityai/sd-vae-ft-mse.
- This commit improves error reporting so that the user knows what is
  happening.
2023-01-19 12:05:49 -05:00
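
A sketch of the fallback path, using diffusers' AutoencoderKL; the real model_manager logic is more involved:

    from diffusers import AutoencoderKL

    def load_custom_vae(diffusers_vae_id: str | None) -> AutoencoderKL:
        if diffusers_vae_id is None:
            # No known diffusers equivalent for the ckpt's external VAE:
            # fall back, and say so rather than failing silently.
            print("** unknown custom VAE; falling back to "
                  "stabilityai/sd-vae-ft-mse")
            diffusers_vae_id = "stabilityai/sd-vae-ft-mse"
        return AutoencoderKL.from_pretrained(diffusers_vae_id)
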
7ee0edcb9e
when converting a ckpt/safetensors model, preserve vae in diffusers config
...
- After successfully converting a ckpt file to diffusers, model_manager
  will attempt to add an equivalent 'vae' entry to the resulting
  diffusers stanza.
- This is a bit of a hack, as it relies on a hard-coded dictionary
  to map ckpt VAEs to diffusers VAEs. The correct way to do this
  would be to convert the VAE to a diffusers model and then point
  to that. But since (almost) all models use vae-ft-mse-840000-ema-pruned,
  I did it the easy way first and will work on the better solution later.
2023-01-19 11:02:49 -05:00
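
An illustrative sketch of the hard-coded mapping described above; the key and repo id are examples, not the commit's exact table:

    # Maps the stem of a known ckpt VAE file to its diffusers equivalent.
    CKPT_VAE_TO_DIFFUSERS = {
        "vae-ft-mse-840000-ema-pruned": "stabilityai/sd-vae-ft-mse",
    }

    def diffusers_vae_for(ckpt_vae_filename: str) -> str | None:
        stem = ckpt_vae_filename.rsplit(".", 1)[0]
        return CKPT_VAE_TO_DIFFUSERS.get(stem)  # None -> no known equivalent
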
7bd2220a24
fix two bugs in model import
...
1. !import_model did not allow the user to specify a VAE file. This is now
   fixed.
2. !del_model did not offer the user the opportunity to delete the underlying
   weights file or diffusers directory. This is now fixed.
2023-01-19 01:30:58 -05:00
ab675af264
Merge branch 'main' into lstein-improve-ti-frontend
2023-01-18 22:22:30 -05:00
be58a6bfbc
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-19 10:21:06 +07:00
5a40aadbee
Ensure free_gpu_mem option is passed into the generator (#2326)
2023-01-19 09:57:03 +07:00
e11f15cf78
Merge branch 'main' into lstein-import-safetensors
2023-01-18 17:09:48 -05:00
a2bdc8b579
Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors
2023-01-18 12:16:06 -05:00
1c62ae461e
fix vae safetensor loading
2023-01-18 12:15:57 -05:00
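
A sketch of the format branch involved, assuming the safetensors and torch packages; not the commit's exact code:

    import torch
    from safetensors.torch import load_file

    def load_vae_state_dict(path: str) -> dict:
        if path.endswith(".safetensors"):
            return load_file(path)  # safetensors VAEs need their own loader
        sd = torch.load(path, map_location="cpu")
        return sd.get("state_dict", sd)  # .ckpt/.pt VAEs usually nest here
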
b9ab9ffb4a
Merge branch 'main' into lstein-import-safetensors
2023-01-18 10:58:38 -05:00
f3e952ecf0
Use global_cache_dir calls properly
2023-01-18 21:06:01 +07:00
aa4e8d8cf3
Migrate legacy models (pre-2.3.0) to the 🤗 cache directory if it exists
2023-01-18 21:02:31 +07:00
a7b2074106
Ignore free_gpu_mem when using a 🤗 diffusers model (#2326)
2023-01-18 19:42:11 +07:00
2282e681f7
Store & load 🤗 models at XDG_CACHE_HOME if HF_HOME is not set
...
This commit allows InvokeAI to store & load 🤗 models at the location
set by the `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set.
Reference: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
2023-01-18 19:32:09 +07:00
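
A sketch of the resolution order, following the HF convention referenced above:

    import os
    from pathlib import Path

    def hf_cache_home() -> Path:
        # HF_HOME wins if set; otherwise fall back to XDG_CACHE_HOME,
        # and finally to ~/.cache, per the HuggingFace docs linked above.
        if "HF_HOME" in os.environ:
            return Path(os.environ["HF_HOME"])
        xdg = os.environ.get("XDG_CACHE_HOME", "~/.cache")
        return Path(xdg).expanduser() / "huggingface"
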
2fd5fe6c89
Merge branch 'main' into lstein-improve-migration
2023-01-17 22:55:58 -05:00
4a9e93463d
Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors
2023-01-17 22:52:50 -05:00
0b5c0c374e
load safetensors vaes
2023-01-17 22:51:57 -05:00