Lincoln Stein
00839d02ab
Merge branch 'main' into lstein-improve-ti-frontend
2023-01-24 11:53:03 -05:00
Lincoln Stein
b2c30c2093
Merge branch 'main' into bugfix/embed-loading-messages
2023-01-24 09:08:13 -05:00
Lincoln Stein
9f32daab2d
Merge branch 'main' into lstein-import-safetensors
2023-01-23 21:58:07 -05:00
Lincoln Stein
10c3afef17
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-23 21:15:12 -05:00
Lincoln Stein
98e9721101
correct fail-to-resume error
- applied https://github.com/huggingface/diffusers/pull/2072 to fix an
  error in the epoch calculation that caused the script not to resume
  from the latest checkpoint when asked to (see the sketch after this entry).
2023-01-23 21:04:07 -05:00
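A hedged illustration of the epoch bookkeeping involved when a training run resumes from a checkpoint: the starting epoch and the offset inside it are derived from the saved global step. The variable names below follow the conventions of the diffusers training scripts but are assumptions, not the exact code touched by the referenced PR.

```python
# Sketch only: derive the resume point from a checkpoint's global step.
# One "update step" consumes `gradient_accumulation_steps` batches.
def resume_position(global_step: int,
                    num_update_steps_per_epoch: int,
                    gradient_accumulation_steps: int):
    """Return (first_epoch, resume_step) for a resumed training run."""
    first_epoch = global_step // num_update_steps_per_epoch
    # Batches already consumed inside the partially finished epoch.
    resume_step = (global_step % num_update_steps_per_epoch) * gradient_accumulation_steps
    return first_epoch, resume_step

# Example: 2500 update steps done, 1000 update steps per epoch, accumulation of 2.
print(resume_position(2500, 1000, 2))  # -> (2, 1000)
```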
Lincoln Stein
e18beaff9c
Merge branch 'main' into feat/merge-script
2023-01-23 09:05:38 -05:00
Kevin Turner
0d4e6cbff5
Merge branch 'main' into bugfix/embed-loading-messages
2023-01-23 00:12:33 -08:00
Lincoln Stein
ffcc5ad795
conversion script uses invokeai models cache by default
2023-01-23 00:35:16 -05:00
Lincoln Stein
48deb3e49d
add model merging documentation and launcher script menu entries
2023-01-23 00:20:28 -05:00
Lincoln Stein
6c31225d19
create small module for merge importation logic
2023-01-22 18:07:53 -05:00
Kevin Turner
87f3da92e9
Merge branch 'main' into fix/sd2-padding-token
2023-01-21 13:11:02 -08:00
Damian Stewart
e94c8fa285
fix long prompt weighting bug in ckpt codepath
2023-01-21 12:08:21 +01:00
Kevin Turner
d35ec3398d
fix: use pad_token for padding
Stable Diffusion does not use the eos_token for padding.
2023-01-20 19:25:20 -08:00
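For context on the distinction above, a minimal sketch using the `transformers` CLIP tokenizer; the checkpoint id is an assumption and this is illustrative, not the code path the commit actually touches.

```python
from transformers import CLIPTokenizer

# Assumed checkpoint: the Stable Diffusion 2 tokenizer, whose pad token is
# distinct from its end-of-text token.
tokenizer = CLIPTokenizer.from_pretrained(
    "stabilityai/stable-diffusion-2", subfolder="tokenizer"
)

# Pad the prompt out to the model's maximum length with the *pad* token.
encoded = tokenizer(
    "a photograph of an astronaut riding a horse",
    padding="max_length",
    max_length=tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)

# Padding positions in input_ids should carry pad_token_id, not eos_token_id.
print(tokenizer.pad_token_id, tokenizer.eos_token_id)
```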
Lincoln Stein
67d91dc550
Merge branch 'bugfix/embed-loading-messages' of github.com:invoke-ai/InvokeAI into bugfix/embed-loading-messages
2023-01-20 17:16:50 -05:00
Lincoln Stein
a1c0818a08
ignore .DS_Store files when scanning Mac embeddings
2023-01-20 17:16:39 -05:00
Lincoln Stein
2cf825b169
Merge branch 'main' into bugfix/embed-loading-messages
2023-01-20 17:14:46 -05:00
Lincoln Stein
292b0d70d8
Merge branch 'lstein-improve-ti-frontend' of github.com:invoke-ai/InvokeAI into lstein-improve-ti-frontend
2023-01-20 17:14:08 -05:00
Lincoln Stein
c3aa3d48a0
ignore .DS_Store files when scanning Mac embeddings
2023-01-20 17:13:32 -05:00
Lincoln Stein
9e3c947cd3
Merge branch 'main' into lstein-improve-ti-frontend
2023-01-20 17:01:09 -05:00
Lincoln Stein
195294e74f
sort models alphabetically
2023-01-20 15:17:54 -05:00
Lincoln Stein
02ce602a38
Merge branch 'main' into feat/disable-xformers
2023-01-19 18:45:59 -05:00
Lincoln Stein
f0010919f2
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-19 18:03:36 -05:00
Lincoln Stein
895505976e
[bugfix] suppress extraneous warning messages generated by diffusers
This commit suppresses a few irrelevant warning messages that the
diffusers module produces (see the sketch after this entry):
1. The warning that turning off the NSFW detector makes you an
   irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not
   running them on the CPU, just caching them in CPU RAM).
2023-01-19 16:49:40 -05:00
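A generic sketch of how such library chatter can be silenced, assuming the diffusers logging utilities and Python's warnings filter; this is illustrative rather than the exact mechanism the commit uses.

```python
import warnings

from diffusers.utils import logging as diffusers_logging

# Raise the diffusers log level so informational nags (e.g. about disabling
# the safety checker) are not printed.
diffusers_logging.set_verbosity_error()

# Silence a specific warning by matching part of its message text.
# The pattern below is hypothetical, standing in for the fp16-on-CPU warning.
warnings.filterwarnings("ignore", message=".*half precision.*")
```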
Lincoln Stein
171f4aa71b
[feat] Provide option to disable xformers from command line
Starting `invoke.py` with --no-xformers will disable memory-efficient
attention support if xformers is installed. --xformers will enable
support, but this is already the default (see the sketch after this entry).
2023-01-19 16:16:35 -05:00
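A sketch of how a paired on/off flag like this can be wired up and applied to a diffusers pipeline. The flag name mirrors the commit message; the argument parser, model id, and surrounding code are assumptions, not InvokeAI's actual CLI.

```python
import argparse

from diffusers import StableDiffusionPipeline

parser = argparse.ArgumentParser()
# --xformers / --no-xformers as one boolean option, enabled by default.
parser.add_argument(
    "--xformers",
    action=argparse.BooleanOptionalAction,
    default=True,
    help="enable/disable memory-efficient attention via xformers",
)
args = parser.parse_args()

# Placeholder model id; any diffusers pipeline exposes the same toggles.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

if args.xformers:
    try:
        pipe.enable_xformers_memory_efficient_attention()
    except Exception:
        # xformers not installed or unsupported here; run without it.
        pass
else:
    pipe.disable_xformers_memory_efficient_attention()
```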
Lincoln Stein
775e1a21c7
improve embed trigger token not found error
- Now indicates that the trigger is *neither* a HuggingFace concept *nor*
  the trigger of a locally loaded embed.
2023-01-19 15:46:58 -05:00
Lincoln Stein
3c3d893b9d
improve status reporting when loading local and remote embeddings
- During trigger token processing, emit better status messages indicating
  which triggers were found.
- Suppress the message "<token> is not known to HuggingFace library" when
  the token is in fact a local embed.
2023-01-19 15:43:52 -05:00
Lincoln Stein
ab675af264
Merge branch 'main' into lstein-improve-ti-frontend
2023-01-18 22:22:30 -05:00
Daya Adianto
be58a6bfbc
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-19 10:21:06 +07:00
Daya Adianto
5a40aadbee
Ensure free_gpu_mem option is passed into the generator (#2326)
2023-01-19 09:57:03 +07:00
Lincoln Stein
e11f15cf78
Merge branch 'main' into lstein-import-safetensors
2023-01-18 17:09:48 -05:00
Lincoln Stein
a2bdc8b579
Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors
2023-01-18 12:16:06 -05:00
Lincoln Stein
1c62ae461e
fix vae safetensor loading
2023-01-18 12:15:57 -05:00
Lincoln Stein
b9ab9ffb4a
Merge branch 'main' into lstein-import-safetensors
2023-01-18 10:58:38 -05:00
Daya Adianto
f3e952ecf0
Use global_cache_dir calls properly
2023-01-18 21:06:01 +07:00
Daya Adianto
aa4e8d8cf3
Migrate legacy models (pre-2.3.0) to 🤗 cache directory if exists
2023-01-18 21:02:31 +07:00
Daya Adianto
a7b2074106
Ignore free_gpu_mem when using 🤗 diffuser model (#2326)
2023-01-18 19:42:11 +07:00
Daya Adianto
2282e681f7
Store & load 🤗 models at XDG_CACHE_HOME if HF_HOME is not set
This commit allows InvokeAI to store & load 🤗 models at a location set by
the `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set (see the
sketch after this entry).
Reference: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
2023-01-18 19:32:09 +07:00
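A small sketch of the fallback order described above, following the documented Hugging Face convention of `$XDG_CACHE_HOME/huggingface` with a final default of `~/.cache/huggingface`; the helper name is hypothetical.

```python
import os
from pathlib import Path


def resolve_hf_cache_root() -> Path:
    """Hypothetical helper: choose the 🤗 cache root by documented precedence."""
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return Path(hf_home)
    xdg_cache = os.environ.get("XDG_CACHE_HOME")
    if xdg_cache:
        return Path(xdg_cache) / "huggingface"
    return Path.home() / ".cache" / "huggingface"


print(resolve_hf_cache_root())
```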
Lincoln Stein
2fd5fe6c89
Merge branch 'main' into lstein-improve-migration
2023-01-17 22:55:58 -05:00
Lincoln Stein
4a9e93463d
Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors
2023-01-17 22:52:50 -05:00
Lincoln Stein
0b5c0c374e
load safetensors vaes
2023-01-17 22:51:57 -05:00
Lincoln Stein
5750f5dac2
Merge branch 'main' into lstein-import-safetensors
2023-01-17 21:31:56 -05:00
Kevin Turner
5aec48735e
lint(generator): 🚮 remove unused imports
2023-01-17 11:44:45 -08:00
Kevin Turner
3c919f0337
Restore ldm/invoke/conditioning.py
2023-01-17 11:37:14 -08:00
Lincoln Stein
fc2098834d
support direct loading of .safetensors models
- Small fix to allow ckpt files with the .safetensors suffix to be loaded
  directly, rather than undergoing a conversion step first (see the sketch
  after this entry).
2023-01-17 08:11:19 -05:00
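A hedged sketch of suffix-based checkpoint loading with the `safetensors` package; the dispatch helper is illustrative, not the loader the commit adds.

```python
from pathlib import Path

import torch
from safetensors.torch import load_file


def load_state_dict(path: str) -> dict:
    """Illustrative: load a checkpoint's weights based on its file suffix."""
    p = Path(path)
    if p.suffix == ".safetensors":
        # safetensors files load without unpickling arbitrary code.
        return load_file(str(p), device="cpu")
    # Legacy .ckpt / .pt checkpoints go through torch.load.
    checkpoint = torch.load(str(p), map_location="cpu")
    # Many ckpt files nest the weights under a "state_dict" key.
    return checkpoint.get("state_dict", checkpoint)
```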
Lincoln Stein
8a31e5c5e3
allow safetensors models to be imported
2023-01-17 00:18:09 -05:00
Lincoln Stein
bcc0110c59
Merge branch 'lstein-fix-autocast' of github.com:invoke-ai/InvokeAI into lstein-fix-autocast
2023-01-16 23:18:54 -05:00
Lincoln Stein
ce1c5e70b8
fix autocast dependency in cross_attention_control
2023-01-16 23:18:43 -05:00
Lincoln Stein
ce00c9856f
fix perlin noise and txt2img2img
2023-01-16 22:50:13 -05:00
Lincoln Stein
7e8f364d8d
do not use autocast for diffusers
- All tensors in the diffusers code path are now set explicitly to float32
  or float16, depending on the --precision flag (see the sketch after this
  entry).
- autocast is still used in the ckpt path, since that path is itself being
  deprecated.
2023-01-16 19:32:06 -05:00
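A brief sketch of the explicit-dtype approach, assuming a `--precision` style value; the model id is a placeholder and this is not InvokeAI's exact loading code.

```python
import torch
from diffusers import StableDiffusionPipeline


def dtype_for(precision: str) -> torch.dtype:
    """Map a --precision value to an explicit torch dtype (no autocast)."""
    return torch.float16 if precision == "float16" else torch.float32


dtype = dtype_for("float16")  # the value would normally come from the CLI

# Load the pipeline directly in the requested dtype instead of wrapping
# inference in torch.autocast.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model id
    torch_dtype=dtype,
).to("cuda" if torch.cuda.is_available() else "cpu")
```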
Lincoln Stein
088cd2c4dd
further tweaks to model management
- Work around a problem with OmegaConf.update() that prevented model names
  from containing periods (see the sketch after this entry).
- Fix a logic bug in !delete_model that didn't check whether the model
  exists in the config file.
2023-01-16 17:11:59 -05:00
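For context on the OmegaConf issue above, a short sketch of the pitfall: `OmegaConf.update()` treats dots in a key as nesting, so a name like `sd-1.5` gets split into nested nodes, while direct item assignment keeps the key intact. This illustrates the behavior, not necessarily the exact workaround in the commit.

```python
from omegaconf import OmegaConf

conf = OmegaConf.create({})

# update() interprets the dot as a path separator, producing a nested
# {"sd-1": {"5": {...}}} instead of a single "sd-1.5" entry.
OmegaConf.update(conf, "sd-1.5", {"description": "demo"})
print(OmegaConf.to_yaml(conf))

# Direct item assignment keeps the key, period and all.
conf2 = OmegaConf.create({})
conf2["sd-1.5"] = {"description": "demo"}
print(OmegaConf.to_yaml(conf2))
```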