Damian Stewart
478c379534
for cac make t_start=0.1 the default
2023-01-30 15:30:01 +01:00
Damian Stewart
c5c160a788
Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation
2023-01-30 14:51:06 +01:00
Damian Stewart
27ee939e4b
with diffusers cac, always run the original prompt on the first step
2023-01-30 14:50:57 +01:00
Jonathan
5ce62e00c9
Merge branch 'main' into diffusers_cross_attention_control_reimplementation
2023-01-29 13:52:01 -06:00
Kevin Turner
5a8c28de97
Merge remote-tracking branch 'origin/main' into fix/hires_inpaint
2023-01-29 10:51:59 -08:00
Jonathan
07e03b31b7
Update --hires_fix (#2414)
...
* Update --hires_fix
Change `--hires_fix` to calculate the initial width and height from the model's resolution (if available), subject to a minimum size.
2023-01-29 12:27:01 -06:00
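As a rough illustration of the behaviour described in that commit (not the actual InvokeAI code; the function name, the multiple-of-64 snapping, and the 512 floor are assumptions), the first-pass size might be derived like this:

```python
# Hedged sketch: derive the first-pass size for --hires_fix from the model's
# native resolution when it is known, never dropping below a minimum size.
from typing import Optional, Tuple

def hires_initial_dims(target_w: int, target_h: int,
                       model_resolution: Optional[int] = None,
                       minimum: int = 512) -> Tuple[int, int]:
    base = max(model_resolution or minimum, minimum)   # fall back when resolution unknown
    scale = base / max(target_w, target_h)             # shrink the longer edge down to 'base'
    # keep the aspect ratio roughly intact and snap to multiples of 64
    init_w = max(minimum, int(target_w * scale) // 64 * 64)
    init_h = max(minimum, int(target_h * scale) // 64 * 64)
    return init_w, init_h
```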
Eugene Brodsky
5ee5c5a012
(training) correctly import TI module; fix type annotation
2023-01-28 19:09:16 -05:00
Eugene Brodsky
2c0bee2a6d
(config) ensure the correct 'invokeai' command is displayed to the user after configuration
2023-01-28 17:39:33 -05:00
Eugene Brodsky
34e0d7aaa8
(config) rename all mentions of scripts/configure_invokeai.py to the new invokeai-configure command
2023-01-28 17:39:33 -05:00
Eugene Brodsky
47dbe7bc0d
(assets) move 'caution.png' to avoid including entire 'assets' dir in the wheel
...
reduces wheel size to 3MB from 27MB
2023-01-28 17:39:33 -05:00
Eugene Brodsky
2ff47cdecf
(scripts) rename/reorganize CLI scripts
...
- add torch MPS fallback directly to CLI.py
- rename CLI scripts with `invoke-...` prefix
- delete long-deprecated scripts
- add a missing package dependency
- delete setup.py as obsolete
2023-01-28 17:39:33 -05:00
Eugene Brodsky
22c34aabfe
(package) move TI scripts into a module; update packaging of 'configs' dir
2023-01-28 17:39:33 -05:00
Eugene Brodsky
c5a9e70e7f
(parser) fix missing argument default in parse_legacy_blend
2023-01-28 17:39:33 -05:00
Eugene Brodsky
4b659982b7
(installer) install.bat wrapper for the python script
2023-01-28 17:39:33 -05:00
Eugene Brodsky
71733bcfa1
(installer) copy launch/update scripts to the root dir; improve launch experience on Linux/Mac
...
- install.sh is now a thin wrapper around the pythonized install script
- install.bat not done yet - to follow
- user messaging is tailored to the current platform (paste shortcuts, file paths, etc)
- emit invoke.sh/invoke.bat scripts to the runtime dir
- improve launch scripts (add help option, etc)
- only emit the platform-specific scripts
2023-01-28 17:39:33 -05:00
Eugene Brodsky
d047e070b8
(config) fix config file creation in edge cases
...
If the config directory is missing, initialize it using the standard
process of copying it over, instead of failing to create the config file.
This can happen when the user re-runs the config script in a directory which
already has the init file but no configs dir.
2023-01-28 17:39:33 -05:00
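A minimal sketch of that fallback, assuming illustrative paths rather than the real InvokeAI ones:

```python
# If the runtime 'configs' dir is missing (e.g. the init file exists but the
# directory was removed), seed it from the copy shipped with the package
# instead of failing when the config file is written.
import shutil
from pathlib import Path

def ensure_configs_dir(runtime_dir: Path, bundled_configs: Path) -> Path:
    configs = runtime_dir / "configs"
    if not configs.exists():
        shutil.copytree(bundled_configs, configs)
    return configs
```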
Eugene Brodsky
9997fde144
(config) moving the 'configs' dir into the 'config' module
...
This allows reliable distribution of the initial 'configs' directory
with the Python package, and enables the configuration script to be run
from anywhere, as long as the virtual environment is available on the sys.path
2023-01-28 17:39:33 -05:00
Eugene Brodsky
88526b9294
(config) move configure_invokeai script to the config module for easier importing
2023-01-28 17:39:32 -05:00
Eugene Brodsky
93129fde32
(installer) run configure_invokeai from within the installer
2023-01-28 17:39:32 -05:00
Kevin Turner
b6d37a70ca
fix(inpainting model): threshold mask to avoid gray blurry seam
2023-01-28 13:34:22 -08:00
Kevin Turner
71b6ddf5fb
fix(inpainting model): blank areas to be repainted in the masked image
...
Otherwise the model seems too reluctant to change these areas, even though the mask channel should allow it to.
2023-01-28 11:10:32 -08:00
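For context, the idea is roughly the following (tensor names and the threshold are assumptions, not the commit's exact code): the inpainting model is conditioned on a "masked image" plus the mask itself, and the region to be repainted has to be zeroed out in that masked image.

```python
import torch

def make_inpaint_conditioning(image: torch.Tensor, mask: torch.Tensor):
    """image: (B,3,H,W) in [-1,1]; mask: (B,1,H,W) with 1 = repaint, 0 = keep."""
    mask = (mask > 0.5).to(image.dtype)   # hard 0/1 mask avoids a gray, blurry seam
    masked_image = image * (1.0 - mask)   # blank the areas the model should repaint
    return masked_image, mask
```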
Kevin Turner
6556b200b5
remove experimental "blur" infill
...
It seems counterproductive for use with the inpainting model, and not especially useful otherwise.
2023-01-27 15:25:50 -08:00
Kevin Turner
d627cd1865
feat(inpaint): add simpler infill methods for use with inpainting model
2023-01-27 14:28:16 -08:00
Kevin Turner
09b6104bfd
refactor(txt2img2img): factor out tensor shape
2023-01-27 12:04:12 -08:00
Kevin Turner
1bb5b4ab32
fix dimension errors when inpainting model is used with hires-fix
2023-01-27 11:52:05 -08:00
Lincoln Stein
12a29bfbc0
Merge branch 'main' into install/change-script-locations
2023-01-26 17:10:33 -05:00
Lincoln Stein
8e4c044ca2
clean up tab/cursor behavior in textual inversion txt gui
2023-01-26 15:18:28 -05:00
Lincoln Stein
9dc3832b9b
clean up merge_models
2023-01-26 15:10:16 -05:00
Lincoln Stein
d3a469d136
fix location of textual_inversion script
2023-01-26 11:56:23 -05:00
Lincoln Stein
cbd967cbc4
add documentation caveat about location of HF cached models
2023-01-26 11:48:03 -05:00
damian
e090c0dc10
try without setting every time
2023-01-26 17:46:51 +01:00
damian
c381788ab9
don't restore None
2023-01-26 17:44:27 +01:00
damian
fb312f9ed3
use the correct value - whoops
2023-01-26 17:30:29 +01:00
damian
729752620b
trying out JPPhoto's patch on vast.ai
2023-01-26 17:27:33 +01:00
damian
8ed8bf52d0
use 'auto' slice size
2023-01-26 17:04:22 +01:00
Lincoln Stein
a49d546125
simplified code a bit
2023-01-26 09:46:34 -05:00
Lincoln Stein
288e31fc60
remove dependency on original clipseg library
...
- This replaces the original clipseg library with the transformers
version from HuggingFace.
- This should make it possible to register InvokeAI at PyPi and do
a fully automated pip-based install.
- Minor regression: it is no longer possible to specify which device
the clipseg model will be loaded onto; it now resides on the CPU.
However, performance is more than acceptable.
2023-01-26 09:35:16 -05:00
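A rough sketch of the transformers-based usage (the model id shown is the public CIDAS checkpoint; whether InvokeAI uses exactly this id is an assumption):

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")  # stays on CPU

def text_mask(image: Image.Image, prompt: str) -> torch.Tensor:
    # returns a low-resolution probability mask for the regions matching `prompt`
    inputs = processor(text=[prompt], images=[image], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.sigmoid(logits)
```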
Damian Stewart
5e7ed964d2
wip updating docs
2023-01-25 23:49:38 +01:00
Damian Stewart
95d147c5df
MPS support: negatory
2023-01-25 23:03:30 +01:00
Damian Stewart
41aed57449
wip tracking down MPS slicing support
2023-01-25 22:27:23 +01:00
Damian Stewart
34a3f4a820
cleanup
2023-01-25 21:47:17 +01:00
Damian Stewart
1f5ad1b05e
sliced swap working
2023-01-25 21:38:27 +01:00
Damian Stewart
c52dd7e3f4
Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation
2023-01-25 14:51:15 +01:00
Damian Stewart
a4aea1540b
more wip sliced attention (.swap doesn't work yet)
2023-01-25 14:51:08 +01:00
Kevin Turner
3c53b46a35
Merge branch 'main' into diffusers_cross_attention_control_reimplementation
2023-01-24 19:32:34 -08:00
Lincoln Stein
61403fe306
fix second conflict in CLI.py
2023-01-24 14:21:21 -05:00
Lincoln Stein
b2f288d6ec
fix conflict in CLI.py
2023-01-24 14:20:40 -05:00
Lincoln Stein
079ec4cb5c
Merge branch 'main' into feat/import-with-vae
2023-01-24 13:16:00 -05:00
Lincoln Stein
00839d02ab
Merge branch 'main' into lstein-improve-ti-frontend
2023-01-24 11:53:03 -05:00
Lincoln Stein
ce52d0c42b
Merge branch 'main' into feat/import-with-vae
2023-01-24 11:52:40 -05:00
Lincoln Stein
7473d814f5
remove original setup.py
2023-01-24 09:11:05 -05:00
Lincoln Stein
b2c30c2093
Merge branch 'main' into bugfix/embed-loading-messages
2023-01-24 09:08:13 -05:00
Lincoln Stein
a7048eea5f
Merge branch 'main' into feat/import-with-vae
2023-01-24 09:07:41 -05:00
Damian Stewart
63c6019f92
sliced attention processor wip (untested)
2023-01-24 14:46:32 +01:00
Lincoln Stein
9f32daab2d
Merge branch 'main' into lstein-import-safetensors
2023-01-23 21:58:07 -05:00
Lincoln Stein
bc2194228e
stability improvements
...
- provide full traceback when a model fails to load
- fix VAE record for VoxelArt; otherwise load fails
2023-01-23 21:40:27 -05:00
Lincoln Stein
10c3afef17
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-23 21:15:12 -05:00
Lincoln Stein
98e9721101
correct fail-to-resume error
...
- applied https://github.com/huggingface/diffusers/pull/2072 to fix an
error in the epoch calculation that caused the script not to resume from
the latest checkpoint when asked to.
2023-01-23 21:04:07 -05:00
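For reference, the resume bookkeeping in diffusers' training scripts of that era looks roughly like the snippet below (the concrete numbers are illustrative, and this is not the exact diff that was applied):

```python
# Recompute where training left off from the checkpoint's global step.
global_step = 1500                      # parsed from e.g. a "checkpoint-1500" directory name
gradient_accumulation_steps = 4
num_update_steps_per_epoch = 250

resume_global_step = global_step * gradient_accumulation_steps
first_epoch = global_step // num_update_steps_per_epoch            # epoch to restart in
resume_step = resume_global_step % (
    num_update_steps_per_epoch * gradient_accumulation_steps
)                                                                   # batches to skip in that epoch
```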
Lincoln Stein
e18beaff9c
Merge branch 'main' into feat/merge-script
2023-01-23 09:05:38 -05:00
Kevin Turner
d7554b01fd
fix typo in prompt
2023-01-23 00:24:06 -08:00
Kevin Turner
70f8793700
Merge branch 'main' into feat/import-with-vae
2023-01-23 00:17:46 -08:00
Kevin Turner
0d4e6cbff5
Merge branch 'main' into bugfix/embed-loading-messages
2023-01-23 00:12:33 -08:00
Lincoln Stein
ffcc5ad795
conversion script uses invokeai models cache by default
2023-01-23 00:35:16 -05:00
Lincoln Stein
48deb3e49d
add model merging documentation and launcher script menu entries
2023-01-23 00:20:28 -05:00
Lincoln Stein
6c31225d19
create small module for merge importation logic
2023-01-22 18:07:53 -05:00
Damian Stewart
c0610f7cb9
pass missing value
2023-01-22 18:19:06 +01:00
Damian Stewart
313b206ff8
squash float16/float32 mismatch on linux
2023-01-22 18:13:12 +01:00
Kevin Turner
87f3da92e9
Merge branch 'main' into fix/sd2-padding-token
2023-01-21 13:11:02 -08:00
Damian Stewart
bffe199ad7
SwapCrossAttnProcessor working - tested on mac CPU (MPS doesn't work)
2023-01-21 20:54:18 +01:00
Damian Stewart
0c2a511671
wip SwapCrossAttnProcessor
2023-01-21 18:07:36 +01:00
Damian Stewart
e94c8fa285
fix long prompt weighting bug in ckpt codepath
2023-01-21 12:08:21 +01:00
Kevin Turner
d35ec3398d
fix: use pad_token for padding
...
Stable Diffusion does not use the eos_token for padding.
2023-01-20 19:25:20 -08:00
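To illustrate the distinction (the checkpoint id is just an example): when padding a prompt out to the tokenizer's maximum length by hand, SD2's tokenizer has a pad token that differs from its eos token, so `pad_token_id` is the one to use.

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="tokenizer"
)
tokens = tokenizer("a photo of an astronaut").input_ids
pad_len = tokenizer.model_max_length - len(tokens)
tokens = tokens + [tokenizer.pad_token_id] * pad_len   # not eos_token_id
```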
Lincoln Stein
96a900d1fe
correctly import diffusers models by their local path
...
- Corrects a bug in which the local path was treated as a repo_id
2023-01-20 20:13:43 -05:00
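A hedged sketch of the distinction (the dispatch shown is illustrative, not the model manager's actual code):

```python
import os
from diffusers import StableDiffusionPipeline

def load_diffusers_model(name_or_path: str) -> StableDiffusionPipeline:
    if os.path.isdir(name_or_path):
        # a local diffusers directory: load straight from disk, no hub lookup
        return StableDiffusionPipeline.from_pretrained(
            name_or_path, local_files_only=True
        )
    # otherwise treat the string as a Hugging Face repo_id
    return StableDiffusionPipeline.from_pretrained(name_or_path)
```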
mauwii
5afb63e41b
replace legacy setup.py with pyproject.toml
...
other changes which were required:
- move configure_invokeai.py into ldm.invoke
- update files which imported configure_invokeai to use the new location:
  - ldm/invoke/CLI.py
  - scripts/load_models.py
  - scripts/preload_models.py
- update test-invoke-pip.yml:
  - remove pr type "converted_to_draft"
  - remove reference to dev/diffusers
  - remove no-longer-needed requirements from the matrix
  - add pytorch to the matrix
  - install via `pip3 install --use-pep517 .`
  - use the created executables
  - this should also fix configure_invoke not being executed on Windows
To install, use `pip install --use-pep517 -e .`, where `-e` is optional
2023-01-20 23:30:24 +01:00
Lincoln Stein
67d91dc550
Merge branch 'bugfix/embed-loading-messages' of github.com:invoke-ai/InvokeAI into bugfix/embed-loading-messages
2023-01-20 17:16:50 -05:00
Lincoln Stein
a1c0818a08
ignore .DS_Store files when scanning Mac embeddings
2023-01-20 17:16:39 -05:00
Lincoln Stein
2cf825b169
Merge branch 'main' into bugfix/embed-loading-messages
2023-01-20 17:14:46 -05:00
Lincoln Stein
292b0d70d8
Merge branch 'lstein-improve-ti-frontend' of github.com:invoke-ai/InvokeAI into lstein-improve-ti-frontend
2023-01-20 17:14:08 -05:00
Lincoln Stein
c3aa3d48a0
ignore .DS_Store files when scanning Mac embeddings
2023-01-20 17:13:32 -05:00
Lincoln Stein
9e3c947cd3
Merge branch 'main' into lstein-improve-ti-frontend
2023-01-20 17:01:09 -05:00
Lincoln Stein
195294e74f
sort models alphabetically
2023-01-20 15:17:54 -05:00
Lincoln Stein
02ce602a38
Merge branch 'main' into feat/disable-xformers
2023-01-19 18:45:59 -05:00
Lincoln Stein
f0010919f2
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-19 18:03:36 -05:00
Lincoln Stein
895505976e
[bugfix] suppress extraneous warning messages generated by diffusers
...
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:
1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not running
them on the CPU, just caching them in CPU RAM).
2023-01-19 16:49:40 -05:00
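One blunt way to silence these classes of warnings (not necessarily the mechanism used in the commit) is to lower the diffusers logger's verbosity:

```python
import diffusers

diffusers.logging.set_verbosity_error()   # only errors from diffusers reach the console
```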
Lincoln Stein
171f4aa71b
[feat] Provide option to disable xformers from command line
...
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.
--xformers will enable support, but this is already the
default.
2023-01-19 16:16:35 -05:00
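A minimal sketch of such a flag pair and how it would gate memory-efficient attention on a diffusers pipeline (the argument wiring is illustrative, not the actual CLI code):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--xformers", dest="xformers", action="store_true",
                    default=True, help="enable memory-efficient attention (default)")
parser.add_argument("--no-xformers", dest="xformers", action="store_false",
                    help="disable memory-efficient attention")
args = parser.parse_args()

# later, once the pipeline exists:
# if args.xformers:
#     pipe.enable_xformers_memory_efficient_attention()
# else:
#     pipe.disable_xformers_memory_efficient_attention()
```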
Lincoln Stein
775e1a21c7
improve embed trigger token not found error
...
- Now indicates that the trigger is *neither* a huggingface concept
nor the trigger of a locally loaded embed.
2023-01-19 15:46:58 -05:00
Lincoln Stein
3c3d893b9d
improve status reporting when loading local and remote embeddings
...
- During trigger token processing, emit better status messages indicating
which triggers were found.
- Suppress the message "<token> is not known to HuggingFace library" when the
token is in fact a local embed.
2023-01-19 15:43:52 -05:00
Lincoln Stein
33a5c83c74
during ckpt->diffusers tell user when custom autoencoder can't be loaded
...
- When a ckpt or safetensors file uses an external autoencoder and we
don't know which diffusers model corresponds to it (if any!), we
fall back to using stabilityai/sd-vae-ft-mse.
- This commit improves error reporting so that the user knows what is happening.
2023-01-19 12:05:49 -05:00
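A sketch of that fallback with the reporting made explicit (the function boundary and logging are assumptions):

```python
import logging
from typing import Optional
from diffusers import AutoencoderKL

logger = logging.getLogger(__name__)

def load_vae_for_conversion(vae_repo_id: Optional[str]) -> AutoencoderKL:
    if vae_repo_id is not None:
        try:
            return AutoencoderKL.from_pretrained(vae_repo_id)
        except Exception as err:
            logger.warning("Could not load custom VAE '%s': %s", vae_repo_id, err)
    logger.warning("Falling back to stabilityai/sd-vae-ft-mse")
    return AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
```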
Lincoln Stein
7ee0edcb9e
when converting a ckpt/safetensors model, preserve vae in diffusers config
...
- After successfully converting a ckpt file to diffusers, model_manager
will attempt to create an equivalent 'vae' entry in the resulting
diffusers stanza.
- This is a bit of a hack, as it relies on a hard-coded dictionary
to map ckpt VAEs to diffusers VAEs. The correct way to do this
would be to convert the VAE to a diffusers model and then point
to that. But since (almost) all models use vae-ft-mse-840000-ema-pruned,
I did it the easy way first and will work on the better solution later.
2023-01-19 11:02:49 -05:00
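An illustrative version of the hard-coded lookup described above (the real table lives in the model manager; the entries here are examples, not a complete list):

```python
from typing import Optional

CKPT_VAE_TO_DIFFUSERS = {
    "vae-ft-mse-840000-ema-pruned": "stabilityai/sd-vae-ft-mse",
    # other known ckpt VAEs would map to their diffusers counterparts here
}

def diffusers_vae_for(ckpt_vae_filename: str) -> Optional[str]:
    stem = ckpt_vae_filename.rsplit(".", 1)[0]       # strip .ckpt / .safetensors
    return CKPT_VAE_TO_DIFFUSERS.get(stem)
```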
Lincoln Stein
7bd2220a24
fix two bugs in model import
...
1. !import_model did not allow the user to specify a VAE file. This is now fixed.
2. !del_model did not offer the user the opportunity to delete the underlying
weights file or diffusers directory. This is now fixed.
2023-01-19 01:30:58 -05:00
Lincoln Stein
ab675af264
Merge branch 'main' into lstein-improve-ti-frontend
2023-01-18 22:22:30 -05:00
Daya Adianto
be58a6bfbc
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-19 10:21:06 +07:00
Daya Adianto
5a40aadbee
Ensure free_gpu_mem option is passed into the generator (#2326)
2023-01-19 09:57:03 +07:00
Lincoln Stein
e11f15cf78
Merge branch 'main' into lstein-import-safetensors
2023-01-18 17:09:48 -05:00
Lincoln Stein
a2bdc8b579
Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors
2023-01-18 12:16:06 -05:00
Lincoln Stein
1c62ae461e
fix vae safetensor loading
2023-01-18 12:15:57 -05:00
Lincoln Stein
b9ab9ffb4a
Merge branch 'main' into lstein-import-safetensors
2023-01-18 10:58:38 -05:00
Daya Adianto
f3e952ecf0
Use global_cache_dir calls properly
2023-01-18 21:06:01 +07:00
Daya Adianto
aa4e8d8cf3
Migrate legacy models (pre-2.3.0) to the 🤗 cache directory if it exists
2023-01-18 21:02:31 +07:00
Daya Adianto
a7b2074106
Ignore free_gpu_mem when using 🤗 diffuser model (#2326)
2023-01-18 19:42:11 +07:00