blessedcoolant
b229fe19aa
Merge branch 'main' into lstein/configure-max-cache-size
2023-07-07 01:52:12 +12:00
Sergey Borisov
04b57c408f
Add clip skip option to prompt node
2023-07-06 16:09:40 +03:00
blessedcoolant
6f1268e2b1
Merge branch 'main' into lstein/more-model-loading-fixes
2023-07-07 00:32:22 +12:00
Lincoln Stein
8f5fcb188c
Merge branch 'main' into lstein/model-manager-router-api
2023-07-05 23:16:43 -04:00
Lincoln Stein
f7daa6e71d
all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT
2023-07-05 23:13:01 -04:00
Lincoln Stein
a7cbcae176
expose max_cache_size to invokeai-configure interface
2023-07-05 20:59:57 -04:00
Lincoln Stein
0a6dccd607
expose max_cache_size to invokeai-configure interface
2023-07-05 20:59:14 -04:00
Lincoln Stein
43c51ff157
Merge branch 'main' into lstein/more-model-loading-fixes
2023-07-05 20:48:15 -04:00
Lincoln Stein
cfa3b2419c
partial implementation of merge
2023-07-05 20:25:47 -04:00
Lincoln Stein
d4550b3059
clean up lint errors in lora.py
2023-07-05 19:18:25 -04:00
Lincoln Stein
83d3a043da
merge latest changes from main
2023-07-05 19:15:53 -04:00
Lincoln Stein
71dad6d404
Merge branch 'main' into ti-ui
2023-07-05 16:57:31 -04:00
Lincoln Stein
685a47cc7d
fix crash during lora application
2023-07-05 16:40:47 -04:00
Lincoln Stein
f8bbec8572
Recognize and load diffusers-style LoRAs (.bin)
...
Prevent double-reporting of autoimported models
- closes #3636
Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:21:23 -04:00
Lincoln Stein
863336acbb
Recognize and load diffusers-style LoRAs (.bin)
...
Prevent double-reporting of autoimported models
- closes #3636
Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:19:16 -04:00
Lincoln Stein
90ae8ce26a
prevent model install crash "torch needs to be restarted with spawn"
2023-07-05 16:18:20 -04:00
Lincoln Stein
ad5d90aca8
prevent model install crash "torch needs to be restarted with spawn"
2023-07-05 15:38:07 -04:00
Lincoln Stein
5b6dd47b9f
add API for model convert
2023-07-05 15:13:21 -04:00
Lincoln Stein
5027d0a603
accept @psychedelicious's suggestions above
2023-07-05 14:50:57 -04:00
Lincoln Stein
9f9ce08e44
Merge branch 'main' into lstein/remove-hardcoded-cuda-device
2023-07-05 13:38:33 -04:00
blessedcoolant
9e2d63ef97
Merge branch 'main' into fix/ckpt_convert_scan
2023-07-06 05:01:34 +12:00
Sergey Borisov
0ac9dca926
Fix loading diffusers ti
2023-07-05 19:46:00 +03:00
Lincoln Stein
bd82c4ace0
model installer confirms deletion of models
2023-07-05 09:57:23 -04:00
Lincoln Stein
9edf78dd2e
merge with main
2023-07-05 09:12:54 -04:00
Lincoln Stein
6112197edf
convert implemented; need router
2023-07-05 09:05:05 -04:00
Sergey Borisov
ee042ab76d
Fix ckpt scanning on conversion
2023-07-05 14:18:30 +03:00
Sergey Borisov
2beb8f049e
Fix model detection
2023-07-05 09:43:46 +03:00
blessedcoolant
639d88afd6
revert: inference_mode to no_grad
2023-07-05 16:39:15 +12:00
blessedcoolant
c0501ed5c2
fix: Slow loading of Loras
...
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-07-05 12:47:34 +10:00
Lincoln Stein
5d099f4a49
update_model working
2023-07-04 17:26:57 -04:00
Lincoln Stein
752b4d50cf
model_delete method now working
2023-07-04 10:40:32 -04:00
Lincoln Stein
c1c49d9a76
import model returns 404 for invalid path, 409 for duplicate model
2023-07-04 10:08:10 -04:00
Lincoln Stein
96bf92ead4
add the import model router
2023-07-04 14:35:47 +10:00
Lincoln Stein
fc419546bc
Merge branch 'main' into lstein/remove-hardcoded-cuda-device
2023-07-03 14:10:47 -04:00
Lincoln Stein
d6de11bd56
resolve merge conflict
2023-07-03 12:19:11 -04:00
Lincoln Stein
ed86d0b708
Union[foo, None]=>Optional[foo]
2023-07-03 12:17:45 -04:00
Lincoln Stein
fb2b2a371d
Merge branch 'lstein/fix-vae-conversion-crash' into release/invokeai-3-0-alpha
2023-07-03 11:21:16 -04:00
Lincoln Stein
10d513c5f7
add runtime root path to relative vaes and other submodels
2023-07-03 11:19:33 -04:00
Lincoln Stein
877b187a1b
Merge branch 'lstein/restore-3.9-compatibility' into release/invokeai-3-0-alpha
2023-07-03 11:01:34 -04:00
Lincoln Stein
ac9ec4e75a
restore 3.9 compatibility by replacing | with Union[]
2023-07-03 10:57:40 -04:00
Lincoln Stein
2465c7987b
Revert "restore 3.9 compatibility by replacing | with Union[]"
...
This reverts commit 76bafeb99e.
2023-07-03 10:56:41 -04:00
Lincoln Stein
76bafeb99e
restore 3.9 compatibility by replacing | with Union[]
2023-07-03 10:55:04 -04:00
Lincoln Stein
6935858ef3
add debugging messages to aid in memory leak tracking
2023-07-02 13:34:53 -04:00
Lincoln Stein
3c2ce51f10
Merge branch 'lstein/remove-hardcoded-cuda-device' into release/invokeai-3-0-alpha
2023-07-01 21:15:58 -04:00
Lincoln Stein
0f02915012
remove hardcoded cuda device in model manager init
2023-07-01 21:15:42 -04:00
Lincoln Stein
f1928d2588
prevent crashes on malformed models
2023-07-01 14:32:58 -04:00
blessedcoolant
c74bb5cdbf
Merge branch 'main' into lstein/fix-vae-convert
2023-07-01 11:18:21 +12:00
Lincoln Stein
1347fc2f00
fix incorrect VAE config file path during conversion of ckpts
2023-06-30 19:14:06 -04:00
Lincoln Stein
ace4f6d586
fix duplicate model key addition when root directory is a relative path
2023-06-28 17:02:03 -04:00
StAlKeR7779
ac46b129bf
Merge branch 'main' into feat/lora_model_patch
2023-06-28 22:43:58 +03:00
Lincoln Stein
79fc708580
warn but do not crash when model scan finds random cruft in models directory
2023-06-28 15:26:42 -04:00
Lincoln Stein
e8ed0fad6c
autoimport from embedding/controlnet/lora folders designated in startup file
2023-06-27 12:30:53 -04:00
Lincoln Stein
823e098b7c
prompt user for prediction type when autoimporting a v2 model without .yaml file
...
don't ask the user for the prediction type when a config.yaml is provided
2023-06-26 16:30:34 -04:00
Lincoln Stein
011adfc958
merge with main
2023-06-26 13:53:59 -04:00
Lincoln Stein
befd95eb19
rename root_dir to root_path attributes to emphasize return of a Path
2023-06-26 13:52:25 -04:00
Lincoln Stein
a2ddb3823b
fix add_model() logic
2023-06-26 13:33:38 -04:00
Sergey Borisov
91c3a58fb6
Fix lycoris layers init
2023-06-26 04:33:37 +03:00
Sergey Borisov
5cebf67ee4
Apply lora by patching lora instead of hooks
2023-06-26 03:57:33 +03:00
Sergey Borisov
1ba94a92b3
Fixes
2023-06-26 03:54:42 +03:00
Sergey Borisov
23c22ac933
Refactor logic/small fixes
2023-06-26 03:07:54 +03:00
Lincoln Stein
160b5d7992
add support for an autoimport models directory scanned at startup time
2023-06-25 18:50:15 -04:00
Lincoln Stein
c91d1eacba
Merge branch 'lstein/installer-for-new-model-layout' of github.com:invoke-ai/InvokeAI into lstein/installer-for-new-model-layout
2023-06-25 16:04:48 -04:00
Lincoln Stein
60b37b7ff4
fix model manager documentation
2023-06-25 16:04:43 -04:00
Sergey Borisov
a3c22b5fe6
Remove upcast_attention and prediction_type from stable diffusion model logic; fix ckpt conversion accordingly
2023-06-25 21:06:22 +03:00
Lincoln Stein
c3c4a71173
implemented Stalker's suggested improvements
2023-06-24 12:37:26 -04:00
Lincoln Stein
ba1371a88f
rename ModelType.Pipeline to ModelType.Main
2023-06-24 11:45:49 -04:00
Lincoln Stein
539d1f3bde
remove redundant prediction_type and attention_upscaling flags
2023-06-23 16:54:52 -04:00
Lincoln Stein
466ec3ab5e
add router API support for model manager heuristic_import()
2023-06-23 16:35:39 -04:00
Lincoln Stein
58d1857ab6
merge with main
2023-06-23 13:57:25 -04:00
Lincoln Stein
56bd873d7a
make relative model paths work in model manager
2023-06-23 10:52:59 -04:00
Sergey Borisov
5aaaaf64a1
Fix ckpt conversion
2023-06-23 17:29:54 +03:00
StAlKeR7779
9140e2c0f2
Merge branch 'main' into fix/vae_conversion
2023-06-23 15:03:59 +03:00
Lincoln Stein
c7b7e087e4
Merge branch 'main' into lstein/installer-for-new-model-layout
2023-06-23 01:45:05 +01:00
blessedcoolant
bb85608890
Merge branch 'main' into feat/onnx
2023-06-23 05:18:41 +12:00
Sergey Borisov
6c7668aaca
Update onnx model structure, change code accordingly
2023-06-22 20:03:17 +03:00
psychedelicious
b937b7da01
feat(models): update model manager service & route to return list of models
2023-06-22 17:34:12 +10:00
Sergey Borisov
21245a0fb2
Set model type to const value in openapi schema, add model format enums to model schema (as they are not referenced in the case of a Literal definition)
2023-06-22 16:51:53 +10:00
Sergey Borisov
da566b59e8
Update model format field to use enums
2023-06-22 16:51:53 +10:00
Sergey Borisov
e4dc9c5a04
Rename format to model_format (still named format when working with config)
2023-06-22 16:51:53 +10:00
Sergey Borisov
aceadacad4
Remove default model logic
2023-06-22 16:51:53 +10:00
blessedcoolant
727293d722
fix: 2.1 models breaking generation
...
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-06-22 16:42:59 +10:00
Sergey Borisov
ef83a2fffe
Add name, base_model, type fields to model info
2023-06-22 16:42:51 +10:00
Sergey Borisov
01d17601b8
Generate config names for openapi
2023-06-22 16:41:19 +10:00
blessedcoolant
bf0d5f4cfc
fix: Update missing name types to new names
2023-06-22 16:41:02 +10:00
blessedcoolant
9838dda1b7
chore: Update model config type names
2023-06-22 16:40:40 +10:00
Sergey Borisov
7759b3f75a
Small refactor
2023-06-21 04:24:25 +03:00
Sergey Borisov
4d337f6abc
ONNX Model/runtime first implementation
2023-06-21 02:12:21 +03:00
Lincoln Stein
90df316835
Merge branch 'main' into lstein/installer-for-new-model-layout
2023-06-20 22:50:41 +01:00
Lincoln Stein
ac6403f877
address some of ebr's issues
2023-06-20 11:08:27 -04:00
Sergey Borisov
92c86fd0b8
Set model type to const value in openapi schema, add model format enums to model schema (as they are not referenced in the case of a Literal definition)
2023-06-20 03:44:58 +03:00
Sergey Borisov
46dc751139
Update model format field to use enums
2023-06-20 03:30:09 +03:00
Sergey Borisov
4cefe37723
Rename format to model_format (still named format when working with config)
2023-06-20 03:25:08 +03:00
Sergey Borisov
82b73c50a0
Remove default model logic
2023-06-20 03:13:10 +03:00
blessedcoolant
b0c4451324
Merge branch 'main' into model-manager-ui-30
2023-06-19 23:02:59 +12:00
Sergey Borisov
7b35162b9e
Remove old logic except for inpaint, add support for lora and ti to inpaint node
2023-06-19 15:57:28 +10:00
Sergey Borisov
82091b9a66
Fix vae conversion
2023-06-18 23:46:07 +03:00
Lincoln Stein
e1d53b86f3
Merge branch 'main' into lstein/installer-for-new-model-layout
2023-06-17 16:26:56 -07:00
blessedcoolant
bf0577c882
fix: 2.1 models breaking generation
...
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-06-18 08:26:25 +12:00
Sergey Borisov
dc669d1447
Add name, base_model, type fields to model info
2023-06-17 22:48:44 +03:00
Sergey Borisov
16dc78f6c6
Generate config names for openapi
2023-06-17 17:15:36 +03:00
blessedcoolant
c8dfa49d86
fix: Update missing name types to new names
2023-06-17 22:04:28 +12:00
blessedcoolant
67d05d2066
chore: Update model config type names
2023-06-17 21:28:43 +12:00
Lincoln Stein
f28d50070e
configure/install basically working; needs edge case testing
2023-06-16 22:54:36 -04:00
Lincoln Stein
ada7399753
rewrite of widget display - marshalling needs rewrite
2023-06-15 23:32:33 -04:00
Sergey Borisov
5f2d07917d
Fix lora import, fix sd2 config, fix list models api
2023-06-15 21:30:15 +03:00
Sergey Borisov
6c5954f9d1
Add controlnet to model manager, fixes
2023-06-14 04:26:21 +03:00
Sergey Borisov
740c05a0bb
Save models on rescan, uncache model on edit/delete, fixes
2023-06-14 03:12:12 +03:00
Sergey Borisov
26090011c4
Fix conflict resolve, add model configs to type annotation
2023-06-14 00:26:37 +03:00
Sergey Borisov
e7db6d8120
Fix ckpt and vae conversion, migrate script, remove sd2-base
2023-06-13 18:05:12 +03:00
Lincoln Stein
87ba17a1f5
add migration script and update convert and face restoration paths
2023-06-13 01:27:51 -04:00
Lincoln Stein
1439dc7712
Add SchedulerPredictionType and ModelVariantType enums
2023-06-12 16:07:04 -04:00
Sergey Borisov
36eb1bd893
Fixes
2023-06-12 16:14:09 +03:00
Sergey Borisov
9fa78443de
Fixes, add sd variant detection
2023-06-12 05:52:30 +03:00
Lincoln Stein
893f776f1d
model_probe working; model_install incomplete
2023-06-11 19:51:53 -04:00
Lincoln Stein
085ab54124
remove modified models.py and migrate code to models/base.py
2023-06-11 16:10:15 -04:00
Lincoln Stein
8e1a56875e
remove defunct code
2023-06-11 12:57:06 -04:00
Lincoln Stein
000626ab2e
move all installation code out of model_manager
2023-06-11 12:51:50 -04:00
Sergey Borisov
694fd0c92f
Fixes, first runnable version
2023-06-11 16:42:40 +03:00
Sergey Borisov
738ba40f51
Fixes
2023-06-11 06:12:21 +03:00
Sergey Borisov
3ce3a7ee72
Rewrite model configs, separate models
2023-06-11 04:49:09 +03:00
Lincoln Stein
74b43c9bdf
fix incorrect variable/typenames in model_cache
2023-06-10 10:41:48 -04:00
Lincoln Stein
a87d52a389
resolve conflicts between lstein & sttalker changes
2023-06-10 09:59:19 -04:00
Lincoln Stein
959e64c9b3
start removing repo_id support
2023-06-10 09:57:23 -04:00
Sergey Borisov
2c056ead42
New models structure draft
2023-06-10 03:14:10 +03:00
Lincoln Stein
887576d217
add directory scanning for loras, controlnets and textual_inversions
2023-06-08 23:11:53 -04:00
Lincoln Stein
6652f3405b
merge with main
2023-06-08 21:08:43 -04:00
Lincoln Stein
2a6d11e645
create databases directory on startup
2023-06-08 07:17:54 -04:00
Lincoln Stein
9ed86a08f1
multiple small fixes
...
1. Contents of autoscan directory field are restored after doing an installation.
2. Activate dialogue to choose V2 parameterization when importing from a directory.
3. Remove autoscan directory from init file when its checkbox is unselected.
4. Add widget cycling behavior to install models form.
2023-06-07 17:32:00 -04:00
Lincoln Stein
04f9757f8d
prevent crash when trying to calculate size of missing safety_checker
...
- Also fixed up the order in which the logger is created in invokeai-web
so that handlers are installed after command-line options are
parsed (and not before!)
2023-06-06 22:57:49 -04:00
Lincoln Stein
1f9e1eb964
merge with main
2023-06-06 22:18:41 -04:00
Lincoln Stein
d8d11f9bbb
quench fp16 rev id not found warning
2023-06-06 22:01:05 -04:00
Lincoln Stein
f5044c290d
fix crash during model conversion
2023-06-06 17:05:29 -04:00
Lincoln Stein
90333c0074
merge with main
2023-06-05 22:03:44 -04:00
Lincoln Stein
cb157ea530
fix crash when install-models launched from config script
2023-06-04 14:55:51 -04:00
Lincoln Stein
1a7fb601dc
ask user for v2 variant when model manager can't infer it
2023-06-04 11:27:44 -04:00
Lincoln Stein
31e97ead2a
move invokeai.db to ~/invokeai/databases
...
- The invokeai.db database file has now been moved into
`INVOKEAIROOT/databases`. Using the plural here for a possible
future with more than one database file.
- Removed a few dangling debug messages that appeared during
testing.
- Rebuilt frontend to test web.
2023-06-03 20:25:34 -04:00
Lincoln Stein
72d1e4e404
fix bug in model_manager that prevented import of inpainting models
2023-06-02 22:39:26 -04:00
Lincoln Stein
1390b65a9c
new TUI is fully functional; needs some polishing
2023-06-02 17:20:50 -04:00
Lincoln Stein
41f7758977
listing, downloading and deleting LoRAs working; TI support pending
2023-06-02 00:40:15 -04:00
Lincoln Stein
e9821ab711
implemented tabbed model selection; not wired to backend yet
2023-06-01 00:31:46 -04:00
Sergey Borisov
b47786e846
First working TI draft
2023-05-31 02:12:27 +03:00
Sergey Borisov
69ccd3a0b5
Fixes for checkpoint models
2023-05-30 19:12:47 +03:00
Sergey Borisov
79de9047b5
First working lora implementation
2023-05-30 01:11:00 +03:00
Lincoln Stein
2273b3a8c8
fix potential race condition in config system
2023-05-25 20:41:26 -04:00
Sergey Borisov
8e419a4f97
Revert weak references, as this can be done without them
2023-05-23 04:29:40 +03:00
Sergey Borisov
2533209326
Rewrite cache to weak references
2023-05-23 03:48:22 +03:00
Lincoln Stein
d2dc1ed26f
make InvokeAI package installable
...
This commit makes InvokeAI 3.0 installable via PyPI and the
installer script.
Main changes:
1. Move static web pages into `invokeai/frontend/web` and modify the
API to look for them there. This allows pip to copy the files into the
distribution directory so that the user no longer has to be in the repo
root to launch.
2. Update invoke.sh and invoke.bat to launch the new web application
properly. This also changes the wording for launching the CLI from
"generate images" to "explore the InvokeAI node system," since I would
not recommend using the CLI to generate images routinely.
3. Fix a bug in the checkpoint converter script that was identified
during testing.
4. Better error reporting when checkpoint converter fails.
5. Rebuild front end.
2023-05-22 17:51:47 -04:00
Lincoln Stein
27241cdde1
port more globals changes over
2023-05-18 17:17:45 -04:00
Lincoln Stein
259d6ec90d
fixup cachedir call
2023-05-18 14:52:16 -04:00
Lincoln Stein
a77c4c87b2
fixed logic error in resolution of model path
2023-05-18 14:35:34 -04:00
Lincoln Stein
d96175d127
resolve some undefined symbols in model_cache
2023-05-18 14:31:47 -04:00
Lincoln Stein
b1a99d772c
added method to convert vaes
2023-05-18 13:31:11 -04:00
Lincoln Stein
7ea995149e
fixes to env parsing, textual inversion & help text
...
- Make environment variable settings case InSenSiTive:
INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
environment variables will both set `max_loaded_models`
- Updated realesrgan to use new config system.
- Updated textual_inversion_training to use new config system.
- Discovered a race condition when InvokeAIAppConfig is created
at module load time, which makes it impossible to customize
or replace the help message produced with --help on the command
line. To fix this, moved all instances of get_invokeai_config()
from module load time to object initialization time. Makes code
cleaner, too.
- Added `--from_file` argument to `invokeai-node-cli` and changed
github action to match. CI tests will hopefully work now.
2023-05-18 10:48:23 -04:00
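The case-insensitive environment handling described in the commit above could be sketched with pydantic v1 settings; the class and field names below are illustrative assumptions, not the actual InvokeAIAppConfig definition.
```
from pydantic import BaseSettings  # pydantic v1 settings API

class AppConfigSketch(BaseSettings):
    """Illustrative stand-in for InvokeAIAppConfig (names assumed)."""
    max_loaded_models: int = 3

    class Config:
        env_prefix = "INVOKEAI_"  # INVOKEAI_MAX_LOADED_MODELS -> max_loaded_models
        case_sensitive = False    # InvokeAI_Max_Loaded_Models works as well

# Instantiated at object-initialization time rather than module load
# time, to avoid the --help race condition described above.
config = AppConfigSketch()
```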
Sergey Borisov
fd82763412
Model manager draft
2023-05-18 03:56:52 +03:00
Lincoln Stein
e971a7f35c
when migrating models.yaml, rename the original to models.yaml.orig
2023-05-16 22:37:53 -04:00
Lincoln Stein
cd16857f38
fix None in model_type
2023-05-16 00:13:44 -04:00
Lincoln Stein
1442f1cb8d
change model filter to None in second place
2023-05-16 00:03:57 -04:00
Lincoln Stein
4fe94a9315
list_models() now returns a dict of {type: {name: info}}
2023-05-15 23:44:08 -04:00
Lincoln Stein
c8f765cc06
improve debugging messages
2023-05-14 18:29:55 -04:00
Lincoln Stein
b9e9087dbe
do not manage GPU for pipelines if sequential_offloading is True
2023-05-14 18:09:38 -04:00
Lincoln Stein
63e465eb5c
tweaks to get_model() behavior
...
1. If an external VAE is specified in config file, then
get_model(submodel=vae) will return the external VAE, not the one
burnt into the parent diffusers pipeline.
2. The mechanism in (1) is generalized such that you can now have
"unet:", "text_encoder:" and similar stanzas in the config file.
Valid formats of these subsections:
```
unet:
  repo_id: foo/bar

unet:
  path: /path/to/local/folder

unet:
  repo_id: foo/bar
  subfolder: unet
```
In the near future, these will also be used to attach external
parts to the pipeline, generalizing VAE behavior.
3. Accommodate callers (i.e. the WebUI) that are passing the
model key ("diffusers/stable-diffusion-1.5") to get_model()
instead of the tuple of model_name and model_type.
4. Fixed bug in VAE model attaching code.
5. Rebuilt web front end.
2023-05-14 16:50:59 -04:00
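A rough usage sketch of the behavior described in the commit above; the import mirrors the example further down in this log, while the `submodel` keyword and the `SDModelType` member spellings are assumptions.
```
# Hypothetical sketch; the exact signature and enum members are assumptions.
from invokeai.backend import ModelManager, SDModelType

manager = ModelManager('/path/to/models.yaml')

# If the config file carries a "vae:" stanza for this model, the external
# VAE is returned instead of the one inside the diffusers pipeline.
vae = manager.get_model('stable-diffusion-1.5', submodel=SDModelType.Vae)

# Callers such as the WebUI may pass the model key instead of (name, type):
pipe = manager.get_model('diffusers/stable-diffusion-1.5')
```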
Lincoln Stein
baf5451fa0
Merge branch 'main' into lstein/new-model-manager
2023-05-13 22:01:34 -04:00
Lincoln Stein
1103ab2844
merge with main
2023-05-13 21:35:19 -04:00
Lincoln Stein
b31a6ff605
fix reversed args in _model_key() call
2023-05-13 21:11:06 -04:00
Sergey Borisov
1f602e6143
Fix - apply precision to text_encoder
2023-05-14 03:46:13 +03:00
Sergey Borisov
039fa73269
Change SDModelType enum to string; fixes (negative lock count on model unload, scheduler load error, safetensors conversion, wrong logic in del_model, wrong metadata parsing in web)
2023-05-14 03:06:26 +03:00
Lincoln Stein
2204e47596
allow submodels to be fetched independent of parent pipeline
2023-05-13 16:54:47 -04:00
Lincoln Stein
d8b1f29066
proxy SDModelInfo so that it can be used directly as context
2023-05-13 16:29:18 -04:00
Lincoln Stein
b23c9f1da5
get Tuple type hint syntax right
2023-05-13 14:59:21 -04:00
Lincoln Stein
72967bf118
convert add_model(), del_model(), list_models() etc to use bifurcated names
2023-05-13 14:44:44 -04:00
Sergey Borisov
3b2a054f7a
Add model loader node; unet, clip, vae fields; change compel node to clip field
2023-05-13 04:37:20 +03:00
Sergey Borisov
131145eab1
A big refactor of the model manager (IMHO)
2023-05-12 23:13:34 +03:00
Kevin Turner
4caa1f19b2
fix(model manager): fix string formatting error on model checksum timer
2023-05-11 19:06:02 -07:00
Lincoln Stein
df5b968954
model manager now running as a service
2023-05-11 21:24:29 -04:00
blessedcoolant
06b5800d28
Add UniPC Scheduler
2023-05-11 22:43:18 +12:00
Lincoln Stein
4627910c5d
added a wrapper model_manager_service and model events
2023-05-11 00:09:19 -04:00
Lincoln Stein
99c692f397
check that model name matches format
2023-05-09 23:46:59 -04:00
Lincoln Stein
3d85e769ce
clean up ckpt handling
...
- remove legacy ckpt loading code from model_cache
- added placeholders for lora and textual inversion model loading
2023-05-09 22:44:58 -04:00
Lincoln Stein
9cb962cad7
ckpt model conversion now done in ModelCache
2023-05-08 23:39:44 -04:00
Lincoln Stein
a108155544
added StALKeR779's great model size calculating routine
2023-05-08 21:47:03 -04:00
Lincoln Stein
c15b49c805
implement StALKeR7779 requested API for fetching submodels
2023-05-07 23:18:17 -04:00
Lincoln Stein
fd63e36822
optimize subfolder so that it returns submodel if parent is in RAM
2023-05-07 21:39:11 -04:00
Lincoln Stein
4649920074
adjust t2i to work with new model structure
2023-05-07 19:06:49 -04:00
Lincoln Stein
667171ed90
cap model cache size using bytes, not # models
2023-05-07 18:07:28 -04:00
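A minimal sketch of the byte-based cap named in the commit above (all names hypothetical; not the actual ModelCache code):
```
from collections import OrderedDict

class ByteCappedCache:
    """Toy LRU cache limited by total model size in bytes, not model count."""

    def __init__(self, max_cache_size: int):
        self.max_cache_size = max_cache_size  # bytes
        self._models = OrderedDict()          # key -> (model, size_bytes)

    def put(self, key, model, size_bytes):
        self._models[key] = (model, size_bytes)
        # Evict least-recently-used entries until back under the byte cap.
        while (sum(s for _, s in self._models.values()) > self.max_cache_size
               and len(self._models) > 1):
            self._models.popitem(last=False)

    def get(self, key):
        self._models.move_to_end(key)  # mark as most recently used
        return self._models[key][0]
```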
Lincoln Stein
647ffb2a0f
defined abstract baseclass for model manager service
2023-05-06 22:41:19 -04:00
Lincoln Stein
05a27bda5e
generalize model loading support, include loras/embeds
2023-05-06 15:58:44 -04:00
Lincoln Stein
e0214a32bc
mostly ported to new manager API; needs testing
2023-05-06 00:44:12 -04:00
Lincoln Stein
af8c7c7d29
model manager rewritten to use model_cache; API changed!
2023-05-05 19:32:28 -04:00
Lincoln Stein
a4e36bc02a
when model is forcibly moved into RAM update loaded_models set
2023-05-04 23:28:03 -04:00
Lincoln Stein
68bc0112fa
implement lazy GPU offloading and ref counting
2023-05-04 23:15:32 -04:00
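One plausible shape for the lazy offloading and reference counting named in the commit above; purely illustrative, with hypothetical names.
```
import torch
from contextlib import contextmanager

class GPUModelSlot:
    """Toy ref-counted holder: the model moves to the GPU on first use and
    becomes eligible for offload back to CPU only when nobody holds it."""

    def __init__(self, model: torch.nn.Module, device: str = "cuda"):
        self.model = model
        self.device = device
        self.refcount = 0

    @contextmanager
    def use(self):
        if self.refcount == 0:
            self.model.to(self.device)  # lazy: move only when actually needed
        self.refcount += 1
        try:
            yield self.model
        finally:
            # Offloading is deferred: a later cache sweep calls maybe_offload()
            # when VRAM is needed, rather than moving the model back eagerly.
            self.refcount -= 1

    def maybe_offload(self):
        if self.refcount == 0:
            self.model.to("cpu")
```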
Lincoln Stein
e4196bbe5b
adjust non-app modules to use new config system
2023-05-04 00:43:51 -04:00
Lincoln Stein
15ffb53e59
remove globals, args, generate and the legacy CLI
2023-05-03 23:36:51 -04:00
Lincoln Stein
a273bdbdc1
Merge branch 'main' into lstein/new-model-manager
2023-05-03 18:09:29 -04:00
Lincoln Stein
e1fed52c66
work on model cache and its regression test finished
2023-05-03 12:38:18 -04:00
Lincoln Stein
bb959448c1
implement hashing for local & remote models
2023-05-02 16:52:27 -04:00
Lincoln Stein
2e2abf6ea6
caching of subparts working
2023-05-01 22:57:30 -04:00
Lincoln Stein
974841926d
logger is an interchangeable service
2023-04-29 10:48:50 -04:00
Lincoln Stein
8db20e0d95
rename log to logger throughout
2023-04-29 09:43:40 -04:00
Lincoln Stein
6b79e2b407
Merge branch 'main' into enhance/invokeai-logs
...
- resolve conflicts
- remove unused code identified by pyflakes
2023-04-28 10:09:46 -04:00
Lincoln Stein
956ad6bcf5
add redesigned model cache for diffusers & transformers
2023-04-28 00:41:52 -04:00
Lincoln Stein
aab262d991
Merge branch 'main' into bugfix/prevent-cli-crash
2023-04-14 20:12:38 -04:00
Lincoln Stein
0b0e6fe448
convert remainder of print() to log.info()
2023-04-14 15:15:14 -04:00
Lincoln Stein
c132dbdefa
change "ialog" to "log"
2023-04-11 18:48:20 -04:00
Lincoln Stein
5a4765046e
add logging support
...
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.
Examples:
```
### A critical error (logging.CRITICAL)
*** A non-fatal error (logging.ERROR)
** A warning (logging.WARNING)
>> Informational message (logging.INFO)
| Debugging message (logging.DEBUG)
```
2023-04-11 09:33:28 -04:00
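A usage sketch of the module this commit introduces; the import path comes from the commit text, while the module-level call pattern is an assumption.
```
import invokeai.backend.util.logging as logger  # path from the commit text

logger.error("model file is corrupt")   # *** A non-fatal error
logger.warning("VRAM is running low")   # ** A warning
logger.info("model loaded")             # >> Informational message
logger.debug("cache hit")               # | Debugging message
```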
AbdBarho
de189f2db6
Increase chunk size when computing SHAs
2023-04-09 21:53:59 +02:00
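Chunked hashing of the kind this commit tunes might look like the following; the function name and the one-megabyte chunk size are illustrative, not the actual values.
```
import hashlib

def sha256_of_file(path: str, chunk_size: int = 2**20) -> str:
    """Hash a potentially multi-gigabyte model file in fixed-size chunks;
    a larger chunk_size means fewer read calls and a faster checksum."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            sha.update(chunk)
    return sha.hexdigest()
```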
Lincoln Stein
8334757af9
Merge branch 'main' into bugfix/prevent-cli-crash
2023-04-07 18:55:54 -04:00
Lincoln Stein
4c339dd4b0
refactor get_submodels() into individual methods
2023-04-06 17:08:23 -04:00
Lincoln Stein
d44151d6ff
add a new method to model_manager that retrieves individual pipeline parts
...
- New method is ModelManager.get_sub_model(model_name: str, model_part: SDModelComponent)
To use:
```
from invokeai.backend import ModelManager, SDModelComponent as sdmc
manager = ModelManager('/path/to/models.yaml')
vae = manager.get_sub_model('stable-diffusion-1.5', sdmc.vae)
```
2023-04-05 17:25:42 -04:00
Lincoln Stein
3c4b6d5735
Merge branch 'main' into enhance/heuristic-import-improvements
2023-03-29 16:54:43 -04:00
Lincoln Stein
9a7580dedd
fix bugs in online ckpt conversion of 2.0 models
...
This commit fixes bugs related to the on-the-fly conversion and loading of
legacy checkpoint models built on SD-2.0 base.
- When legacy checkpoints built on SD-2.0 models were converted
on-the-fly using --ckpt_convert, generation would crash with a
precision incompatibility error.
2023-03-28 00:17:20 -04:00
Lincoln Stein
fe5d9ad171
improve importation and conversion of legacy checkpoint files
...
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.
To improve the user experience, the model manager's
`heuristic_import()` method has been enhanced as follows:
1. When initially called, the caller can pass a config file path, in
which case it will be used.
2. If no config file provided, the method looks for a .yaml file in the
same directory as the model which bears the same basename. e.g.
```
my-new-model.safetensors
my-new-model.yaml
```
The yaml file is then used as the configuration file for
importation and conversion.
3. If no such file is found, then the method opens up the checkpoint
and probes it to determine whether it is V1, V1-inpaint or V2.
If it is a V1 format, then the appropriate v1-inference.yaml config
file is used. Unfortunately there are two V2 variants that cannot be
distinguished by introspection.
4. If the probe algorithm is unable to determine the model type, then its
last-ditch effort is to execute an optional callback function that can
be provided by the caller. This callback, named `config_file_callback`
receives the path to the legacy checkpoint and returns the path to the
config file to use. The CLI uses this to put up a multiple-choice prompt to
the user. The WebUI **could** use this to prompt the user to choose
from a radio-button selection.
5. If the config file cannot be determined, then the import is abandoned.
The user can attach a custom VAE to the imported and converted model
by copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:
```
my-new-model.safetensors
my-new-model.vae.pt
```
For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file
can be deleted after conversion.
No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.
2023-03-27 11:27:45 -04:00
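The `config_file_callback` flow described above might be wired up as follows; the callback signature is taken from the commit text, but the prompt, the config paths, and the surrounding ModelManager call are assumptions.
```
from pathlib import Path

from invokeai.backend import ModelManager  # import as used elsewhere in this log

def choose_config(checkpoint_path: Path) -> Path:
    """Last-ditch callback: ask the user which legacy config applies."""
    print(f"Cannot determine the model type of {checkpoint_path}")
    answer = input("1) v1-inference  2) v1-inpainting  3) v2 > ").strip()
    return {
        "1": Path("configs/stable-diffusion/v1-inference.yaml"),
        "2": Path("configs/stable-diffusion/v1-inpainting-inference.yaml"),
        "3": Path("configs/stable-diffusion/v2-inference-v.yaml"),
    }[answer]

# Hypothetical call; the keyword name comes from the commit message above.
manager = ModelManager('/path/to/models.yaml')
manager.heuristic_import(
    "my-new-model.safetensors",
    config_file_callback=choose_config,
)
```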
Lincoln Stein
5ac0316c62
fix issue with embeddings being loaded twice
...
- as noted by JPPhoto
2023-03-25 10:45:03 -04:00
Lincoln Stein
9ceec40b76
Merge branch 'main' into feat/use-custom-vaes
2023-03-24 17:45:02 -04:00
Lincoln Stein
85b2822f5e
Merge branch 'main' into security/scan-ckpt-files-main
2023-03-24 08:39:59 -04:00
Lincoln Stein
6e7dbf99f3
Merge branch 'main' into bugfix/dreambooth_ema
2023-03-23 23:24:15 -04:00
Lincoln Stein
deeff36e16
Merge branch 'main' into security/scan-ckpt-files-main
2023-03-23 23:20:52 -04:00
Lincoln Stein
f751dcd245
load embeddings after a ckpt legacy model is converted to diffusers
...
- Fixes #2954
- Also improves diagnostic reporting during embedding loading.
2023-03-23 15:21:58 -04:00
Lincoln Stein
a97107bd90
handle VAEs that do not have a "state_dict" key
2023-03-23 15:11:29 -04:00
Lincoln Stein
b2ce45a417
re-implement model scanning when loading legacy checkpoint files
...
- This PR turns on pickle scanning before a legacy checkpoint file
is loaded from disk within the checkpoint_to_diffusers module.
- Also miscellaneous diagnostic message cleanup.
2023-03-23 15:03:30 -04:00
Lincoln Stein
4e0b5d85ba
convert custom VAEs into diffusers
...
- When a legacy checkpoint model is loaded via --convert_ckpt and its
models.yaml stanza refers to a custom VAE path (using the 'vae:'
key), the custom VAE will be converted and used within the diffusers
model. Otherwise the VAE contained within the legacy model will be
used.
- Note that the heuristic_import() method, which imports arbitrary
legacy files on disk and URLs, will continue to default to the
standard stabilityai/sd-vae-ft-mse VAE. This can be fixed after
the fact by editing the models.yaml stanza using the Web or CLI
UIs.
- Fixes issue #2917
2023-03-23 13:14:19 -04:00
Lincoln Stein
a958ae5e29
Merge branch 'main' into feat/use-custom-vaes
2023-03-23 10:32:56 -04:00
Lincoln Stein
3ca654d256
speculative fix for alternative vaes
2023-03-13 23:27:29 -04:00
jeremy
e0e01f6c50
Reduced Pickle ACE attack surface
...
Prior to this commit, all models would be loaded with the extremely unsafe `torch.load` method, except those with the exact extension `.safetensors`. Even a change in casing (e.g. `saFetensors`, `Safetensors`, etc.) would cause the file to be loaded with `torch.load` instead of the much safer `safetensors.torch.load_file`.
If a malicious actor renamed an infected `.ckpt` to something like `.SafeTensors` or `.SAFETENSORS`, an unsuspecting user would think they were loading a safe safetensors file, but would in fact be parsing an unsafe pickle file and executing an attacker's payload. This commit fixes this vulnerability by reversing the loading-method decision logic to only use the unsafe `torch.load` when the file extension is exactly `.ckpt`.
2023-03-13 16:16:30 -04:00
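The reversed decision logic this commit describes can be sketched like so (a simplification, not the actual code):
```
from pathlib import Path

import torch
from safetensors.torch import load_file

def load_weights(path: Path):
    """Fall back to the unsafe pickle-based torch.load only when the
    extension is exactly '.ckpt'; anything else -- including oddly cased
    names like '.SafeTensors' -- goes through the safe safetensors loader."""
    if path.suffix == ".ckpt":
        return torch.load(path, map_location="cpu")
    return load_file(str(path))
```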
Kyle Schouviller
3ee2798ede
[fix] Get the model again if current model is empty
2023-03-12 22:26:11 -04:00
Fabio 'MrWHO' Torchetti
5c5106c14a
Add keys when non-EMA
2023-03-12 16:22:22 -05:00
Fabio 'MrWHO' Torchetti
c367b21c71
Fix issue #2932
2023-03-12 15:40:33 -05:00
Lincoln Stein
6a77634b34
remove unneeded generate initializer routines
2023-03-11 17:14:03 -05:00
Lincoln Stein
c14241436b
move ModelManager initialization into its own module and restore embedding support
2023-03-11 10:56:53 -05:00
Lincoln Stein
d612f11c11
initialize InvokeAIGenerator object with model, not manager
2023-03-11 09:06:46 -05:00
Lincoln Stein
fe75b95464
Merge branch 'refactor/nodes-on-generator' of github.com:invoke-ai/InvokeAI into refactor/nodes-on-generator
2023-03-10 19:36:40 -05:00
Lincoln Stein
95954188b2
remove factory pattern
...
Factory pattern is now removed. Typical usage of the InvokeAIGenerator is now:
```
from invokeai.backend.generator import (
    InvokeAIGeneratorBasicParams,
    Txt2Img,
    Img2Img,
    Inpaint,
)
params = InvokeAIGeneratorBasicParams(
    model_name = 'stable-diffusion-1.5',
    steps = 30,
    scheduler = 'k_lms',
    cfg_scale = 8.0,
    height = 640,
    width = 640
)
print('=== TXT2IMG TEST ===')
txt2img = Txt2Img(manager, params)
outputs = txt2img.generate(prompt='banana sushi', iterations=2)
for output in outputs:
    print(f'image={output.image}, seed={output.seed}, model={output.params.model_name}, hash={output.model_hash}, steps={output.params.steps}')
```
The `params` argument is optional, so if you wish to accept default
parameters and selectively override them, just do this:
```
outputs = Txt2Img(manager).generate(
    prompt='banana sushi',
    steps=50,
    scheduler='k_heun',
    model_name='stable-diffusion-2.1'
)
```
2023-03-10 19:33:04 -05:00
Jonathan
370e8281b3
Merge branch 'main' into refactor/nodes-on-generator
2023-03-10 12:34:00 -06:00
Lincoln Stein
685df33584
fix bug that caused black images when converting ckpts to diffusers in RAM (#2914)
...
Cause of the problem was inadvertent activation of the safety checker.
When conversion occurs on disk, the safety checker is disabled during loading.
However, when converting in RAM, the safety checker was not removed, resulting
in it activating even when the user specified --no-nsfw_checker.
This PR fixes the problem by detecting when the caller has requested the InvokeAI
StableDiffusionGeneratorPipeline class to be returned and setting safety checker
to None. Do not do this with diffusers models destined for disk because then they
will be incompatible with the merge script!!
Closes #2836
2023-03-10 18:11:32 +00:00
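The fix described above amounts to something like the following when the converted pipeline stays in RAM; the names are taken from the commit text and the surrounding conversion logic is assumed.
```
def finish_in_ram_conversion(pipeline, return_generator_pipeline: bool):
    """Sketch: when the caller asked for a StableDiffusionGeneratorPipeline
    returned in RAM, drop the safety checker so it cannot fire against the
    user's --no-nsfw_checker choice. Pipelines destined for disk keep theirs,
    since removing it would break compatibility with the merge script."""
    if return_generator_pipeline:
        pipeline.safety_checker = None
    return pipeline
```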
Lincoln Stein
14c8738a71
fix dangling reference to _model_to_cpu and missing variable model_description
2023-03-09 21:41:45 -05:00
Kevin Turner
ad7b1fa6fb
model_manager: model to/from CPU methods are implemented on the Pipeline
2023-03-09 18:15:12 -08:00
Lincoln Stein
b679a6ba37
model manager defaults to consistent values of device and precision
2023-03-09 01:09:54 -05:00
Lincoln Stein
7c60068388
Merge branch 'main' into bugfix/fix-convert-sd-to-diffusers-error
2023-03-06 08:20:29 -05:00
Lincoln Stein
94daaa4abf
fix call signature of import_diffuser_model()
2023-03-05 23:37:59 -05:00
Lincoln Stein
2f9dcd7906
support both epsilon and v-prediction v2 inference
...
There are actually two Stable Diffusion v2 legacy checkpoint
configurations:
1) "epsilon" prediction type for Stable Diffusion v2 Base
2) "v-prediction" type for Stable Diffusion v2-768
This commit adds the configuration file needed for epsilon prediction
type models as well as the UI that prompts the user to select the
appropriate configuration file when the code can't do so
automatically.
2023-03-05 22:51:40 -05:00
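The two legacy v2 configurations might be selected roughly as below; the file names follow the usual Stability AI conventions, and the mapping keys are assumptions.
```
# Hypothetical mapping from probed prediction type to legacy config file.
V2_CONFIG_FILES = {
    "epsilon": "configs/stable-diffusion/v2-inference.yaml",         # SD v2 Base
    "v_prediction": "configs/stable-diffusion/v2-inference-v.yaml",  # SD v2-768
}

def config_for_v2(prediction_type: str) -> str:
    """Pick the config for an SD-2.x checkpoint; when the prediction type
    cannot be probed, the UI asks the user to choose instead."""
    return V2_CONFIG_FILES[prediction_type]
```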
blessedcoolant
e537b5d8e1
Revert "Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram"
...
This reverts commit e0e70c9222, reversing changes made to 0b184913b9.
2023-03-06 14:29:39 +13:00
blessedcoolant
e0e70c9222
Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram
2023-03-06 14:27:30 +13:00
Lincoln Stein
fc187f263e
deal with non-directories in diffusers/
2023-03-05 12:29:52 -05:00
Lincoln Stein
4e9e1b660d
respect HF_HOME setting when migrating
2023-03-05 12:08:29 -05:00
Lincoln Stein
d01adedff5
give user chance to back out before migration
2023-03-05 12:04:31 -05:00
Lincoln Stein
b33655b0d6
restore automatic conversion of legacy files to diffusers pipelines
2023-03-05 11:45:25 -05:00
Lincoln Stein
81dee04dc9
during migration do not overwrite symlinks
2023-03-05 08:40:12 -05:00
Lincoln Stein
ef8cf83b28
migrate to new HF diffusers cache location
2023-03-05 08:20:24 -05:00
Lincoln Stein
44400d2a66
fix incorrect import of merge code
2023-03-03 01:07:31 -05:00
Lincoln Stein
60a98cacef
all vestiges of ldm.invoke removed
2023-03-03 01:02:00 -05:00
Lincoln Stein
6a990565ff
all files migrated; tweaks needed
2023-03-03 00:02:15 -05:00