Commit Graph

3139 Commits

Author · SHA1 · Message · Date
Lincoln Stein
7ee0edcb9e when converting a ckpt/safetensors model, preserve vae in diffusers config
- After successfully converting a ckpt file to diffusers, model_manager
  will attempt to create an equivalent 'vae' entry in the resulting
  diffusers stanza.

- This is a bit of a hack, as it relies on a hard-coded dictionary
  to map ckpt VAEs to diffusers VAEs. The correct way to do this
  would be to convert the VAE to a diffusers model and then point
  to that. But since (almost) all models are using vae-ft-mse-840000-ema-pruned,
  I did it the easy way first and will work on the better solution later.
2023-01-19 11:02:49 -05:00
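The hard-coded mapping mentioned in the commit could look like the sketch below. The dictionary contents and function name are illustrative assumptions, not the actual model_manager code; `stabilityai/sd-vae-ft-mse` is the diffusers release of the vae-ft-mse-840000-ema-pruned checkpoint.

```python
from pathlib import Path
from typing import Optional

# Hypothetical hard-coded ckpt-VAE -> diffusers-VAE mapping, per the
# commit message above; the real dictionary in model_manager may differ.
VAE_TO_DIFFUSERS = {
    "vae-ft-mse-840000-ema-pruned": "stabilityai/sd-vae-ft-mse",
}

def diffusers_vae_for(ckpt_vae_path: str) -> Optional[str]:
    """Return the diffusers VAE repo id for a known ckpt VAE, else None."""
    return VAE_TO_DIFFUSERS.get(Path(ckpt_vae_path).stem)
```

Unrecognized VAE files simply get no 'vae' entry, which matches the "easy way first" approach described above.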
Lincoln Stein
7bd2220a24 fix two bugs in model import
1. !import_model did not allow the user to specify a VAE file. This is now fixed.
2. !del_model did not offer the user the opportunity to delete the underlying
   weights file or diffusers directory. This is now fixed.
2023-01-19 01:30:58 -05:00
Lincoln Stein
284b432ffd add triton install instructions 2023-01-18 22:34:36 -05:00
Lincoln Stein
ab675af264
Merge branch 'main' into lstein-improve-ti-frontend 2023-01-18 22:22:30 -05:00
Daya Adianto
be58a6bfbc
Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-19 10:21:06 +07:00
Daya Adianto
5a40aadbee
Ensure free_gpu_mem option is passed into the generator (#2326) 2023-01-19 09:57:03 +07:00
Lincoln Stein
e11f15cf78 Merge branch 'main' into lstein-import-safetensors 2023-01-18 17:09:48 -05:00
Lincoln Stein
ce17051b28
Store & load 🤗 models at XDG_CACHE_HOME if HF_HOME is not set (#2359)
This commit allows InvokeAI to store & load 🤗 models at a location set
by the `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set.

By integrating this commit, a user who sets either the `HF_HOME` or
`XDG_CACHE_HOME` environment variable can let InvokeAI reuse the
existing cache directory used by the 🤗 library by default. I happened
to benefit from this commit because I have a Jupyter Notebook that uses
a 🤗 diffusers model stored under the `XDG_CACHE_HOME` directory.

Reference:
https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
2023-01-18 17:05:06 -05:00
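The precedence described in this commit can be sketched as follows. This is an illustrative resolution function, not InvokeAI's actual implementation; the `huggingface` subdirectory under `XDG_CACHE_HOME` follows the huggingface_hub convention linked above.

```python
import os
from pathlib import Path

def hf_cache_root() -> Path:
    """Resolve the 🤗 cache root: HF_HOME wins if set; otherwise fall back
    to XDG_CACHE_HOME/huggingface; otherwise ~/.cache/huggingface.
    (Sketch of the precedence described above.)"""
    if "HF_HOME" in os.environ:
        return Path(os.environ["HF_HOME"])
    if "XDG_CACHE_HOME" in os.environ:
        return Path(os.environ["XDG_CACHE_HOME"]) / "huggingface"
    return Path.home() / ".cache" / "huggingface"
```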
Lincoln Stein
a2bdc8b579 Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors 2023-01-18 12:16:06 -05:00
Lincoln Stein
1c62ae461e fix vae safetensor loading 2023-01-18 12:15:57 -05:00
Lincoln Stein
c5b802b596
Merge branch 'main' into feature/hub-in-xdg-cache-home 2023-01-18 11:53:46 -05:00
Lincoln Stein
b9ab9ffb4a
Merge branch 'main' into lstein-import-safetensors 2023-01-18 10:58:38 -05:00
Lincoln Stein
f232068ab8
Update automated install doc - link to MS C libs (#2306)
Updated the link for the MS Visual C libraries - I'm not sure if MS
changed the location of the files, but this new one leads right to the
file downloads.
2023-01-18 10:56:09 -05:00
Lincoln Stein
4556f29359
Merge branch 'main' into lstein/xformers-instructions 2023-01-18 09:33:17 -05:00
Lincoln Stein
c1521be445 add instructions for installing xFormers on linux 2023-01-18 09:31:19 -05:00
Daya Adianto
f3e952ecf0
Use global_cache_dir calls properly 2023-01-18 21:06:01 +07:00
Daya Adianto
aa4e8d8cf3
Migrate legacy models (pre-2.3.0) to 🤗 cache directory if exists 2023-01-18 21:02:31 +07:00
Daya Adianto
a7b2074106
Ignore free_gpu_mem when using 🤗 diffuser model (#2326) 2023-01-18 19:42:11 +07:00
Daya Adianto
2282e681f7
Store & load 🤗 models at XDG_CACHE_HOME if HF_HOME is not set
This commit allows InvokeAI to store & load 🤗 models at a location
set by the `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set.

Reference: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
2023-01-18 19:32:09 +07:00
Lincoln Stein
6e2365f835
Merge branch 'main' into patch-1 2023-01-17 23:52:13 -05:00
Lincoln Stein
e4ea98c277
further improvements to initial load (#2330)
- The migration process will not crash if duplicate model files are found,
one in the legacy location and the other in the new location. In this case
the model in the legacy location will be deleted.

- Added a hint to stable-diffusion-2.1 telling people it will work best
with 768 pixel images.

- Added the anything-4.0 model.
2023-01-17 23:21:14 -05:00
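The duplicate-handling behavior in the first bullet can be sketched like this. Function and path names are illustrative assumptions, not the actual migration code:

```python
from pathlib import Path

def migrate_model(legacy: Path, new: Path) -> None:
    """If the same model file exists in both the legacy and the new
    location, drop the legacy copy instead of crashing; otherwise move
    the legacy file into place. (Sketch of the behavior described above.)"""
    if new.exists():
        legacy.unlink()  # duplicate: keep the copy in the new location
    else:
        new.parent.mkdir(parents=True, exist_ok=True)
        legacy.rename(new)
```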
Lincoln Stein
2fd5fe6c89
Merge branch 'main' into lstein-improve-migration 2023-01-17 22:55:58 -05:00
Lincoln Stein
4a9e93463d Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors 2023-01-17 22:52:50 -05:00
Lincoln Stein
0b5c0c374e load safetensors vaes 2023-01-17 22:51:57 -05:00
Lincoln Stein
5750f5dac2
Merge branch 'main' into lstein-import-safetensors 2023-01-17 21:31:56 -05:00
Kevin Turner
3fb095de88
do not use autocast for diffusers (#2349)
fixes #2345
2023-01-17 14:26:35 -08:00
Lincoln Stein
c5fecfe281
Merge branch 'main' into lstein-improve-migration 2023-01-17 17:05:12 -05:00
Kevin Turner
1fa6a3558e
Merge branch 'main' into lstein-fix-autocast 2023-01-17 14:00:51 -08:00
Lincoln Stein
2ee68cecd9
tip fix (#2281)
Context: Small fix for the manual, added tab for a "!!! tip"
2023-01-17 16:25:09 -05:00
Lincoln Stein
c8d1d4d159
Merge branch 'main' into lstein-fix-autocast 2023-01-17 16:23:33 -05:00
Lincoln Stein
529b19f8f6
Merge branch 'main' into patch-1 2023-01-17 14:57:17 -05:00
Lincoln Stein
be4f44fafd
[Enhancement] add --default_only arg to configure_invokeai.py, for CI use (#2355)
Added a --default_only argument that limits model downloads to the
single default model, for use in continuous integration.

New behavior

    --yes       --default_only     Behavior
    -----       --------------     --------
    <not set>   <not set>          interactive download
    --yes       <not set>          non-interactively download all
                                     recommended models
    --yes       --default_only     non-interactively download the
                                     default model
2023-01-17 14:56:50 -05:00
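The switch matrix above can be sketched with argparse. This is a minimal illustration of the flag combinations, assuming the option names shown in the table; the real configure_invokeai.py has many more options:

```python
import argparse

def plan_downloads(argv):
    """Return the download behavior implied by --yes / --default_only,
    per the table above. (Sketch only.)"""
    p = argparse.ArgumentParser()
    p.add_argument("--yes", action="store_true")
    p.add_argument("--default_only", action="store_true")
    args = p.parse_args(argv)
    if not args.yes:
        return "interactive"
    return "default-model-only" if args.default_only else "all-recommended"
```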
Kevin Turner
5aec48735e lint(generator): 🚮 remove unused imports 2023-01-17 11:44:45 -08:00
Kevin Turner
3c919f0337
Restore ldm/invoke/conditioning.py 2023-01-17 11:37:14 -08:00
mauwii
858ddffab6
add --default_only to run-preload-models step 2023-01-17 20:10:37 +01:00
Lincoln Stein
212fec669a add --default_only arg to configure_invokeai.py for CI use
Added a --default_only argument that limits model downloads to the single
default model, for use in continuous integration.

New behavior

    --yes       --default_only     Behavior
    -----       --------------     --------
    <not set>   <not set>          interactive download
    --yes       <not set>          non-interactively download all
                                     recommended models
    --yes       --default_only     non-interactively download the
                                     default model
2023-01-17 12:45:04 -05:00
Lincoln Stein
fc2098834d support direct loading of .safetensors models
- Small fix to allow ckpt files with the .safetensors suffix
  to be loaded directly, rather than undergoing a conversion
  step first.
2023-01-17 08:11:19 -05:00
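The suffix-based dispatch this commit enables could be sketched as below. Returning the loader's dotted name as a string keeps the sketch self-contained; the real code would call `torch.load` or `safetensors.torch.load_file` directly (both are real APIs, but the function here is a hypothetical illustration):

```python
from pathlib import Path

def weights_loader(weights: str) -> str:
    """Pick a loader by suffix: .safetensors files are loaded directly
    instead of being converted first, per the commit above."""
    suffix = Path(weights).suffix
    if suffix == ".safetensors":
        return "safetensors.torch.load_file"  # no pickle, loads tensors directly
    if suffix == ".ckpt":
        return "torch.load"
    raise ValueError(f"unrecognized weights file: {weights}")
```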
Lincoln Stein
8a31e5c5e3 allow safetensors models to be imported 2023-01-17 00:18:09 -05:00
Lincoln Stein
bcc0110c59 Merge branch 'lstein-fix-autocast' of github.com:invoke-ai/InvokeAI into lstein-fix-autocast 2023-01-16 23:18:54 -05:00
Lincoln Stein
ce1c5e70b8 fix autocast dependency in cross_attention_control 2023-01-16 23:18:43 -05:00
Lincoln Stein
ce00c9856f fix perlin noise and txt2img2img 2023-01-16 22:50:13 -05:00
Lincoln Stein
7e8f364d8d do not use autocast for diffusers
- All tensors in the diffusers code path are now set explicitly to
  float32 or float16, depending on the --precision flag.
- autocast is still used in the ckpt path, since that path is
  being deprecated.
2023-01-16 19:32:06 -05:00
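The explicit dtype selection described above can be sketched as a small resolver. The function name and the handling of an 'auto' value are assumptions for illustration; the actual flag handling in InvokeAI may differ:

```python
def torch_dtype_name(precision: str, device_type: str = "cuda") -> str:
    """Map the --precision flag to an explicit tensor dtype instead of
    relying on autocast, per the commit above. (Sketch; 'auto' resolution
    is an assumed behavior.)"""
    if precision == "float32":
        return "float32"
    if precision == "float16":
        return "float16"
    # 'auto': half precision only makes sense on CUDA devices
    return "float16" if device_type == "cuda" else "float32"
```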
Lincoln Stein
088cd2c4dd further tweaks to model management
- Work around a problem with OmegaConf.update() that prevented model names
  from containing periods.
- Fix a logic bug in !delete_model that didn't check for the existence of
  the model in the config file.
2023-01-16 17:11:59 -05:00
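The OmegaConf.update() problem arises because that API interprets '.' in a key as a nesting separator, so a model name like 'sd-1.5' would be split into nested keys. A pure-Python illustration of the pitfall (this mimics the dotted-key behavior; it is not OmegaConf code, and the exact workaround used in the commit is not shown here):

```python
def dotted_update(cfg: dict, key: str, value) -> None:
    """Mimic how a dotted key is treated as a nesting path, which is why
    a model name containing a period breaks: 'sd-1.5' becomes
    'sd-1' -> '5'. (Illustration of the pitfall noted above.)"""
    *parents, leaf = key.split(".")
    node = cfg
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value
```

Plain item assignment (`cfg["sd-1.5"] = ...`) keeps the key literal, which is one way to sidestep the splitting.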
Lincoln Stein
9460763eff
Merge branch 'main' into lstein-improve-migration 2023-01-16 16:47:08 -05:00
Lincoln Stein
fe46d9d0f7
Merge branch 'main' into patch-1 2023-01-16 16:46:46 -05:00
Damian Stewart
563196bd03
pass step count and step index to diffusion step func (#2342) 2023-01-16 19:56:54 +00:00
Lincoln Stein
d2a038200c
Merge branch 'main' into lstein-improve-migration 2023-01-16 14:22:13 -05:00
Lincoln Stein
d6ac0eeffd make SD-1.5 the default again 2023-01-16 14:21:34 -05:00
Lincoln Stein
3a1724652e
upgrade requirements to CUDA 11.7, torch 1.13 (#2331)
* upgrade requirements to CUDA 11.7, torch 1.13

* fix ROCm version number

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2023-01-16 14:19:27 -05:00
Lincoln Stein
8c073a7818
Merge branch 'main' into patch-1 2023-01-16 08:38:14 -05:00