Commit Graph

487 Commits

Author | SHA1 | Message | Date
Lincoln Stein
5759a390f9 introduce gpu_mem_reserved configuration parameter 2023-07-09 18:35:04 -04:00
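The `gpu_mem_reserved` setting holds back a slice of VRAM so the model cache never fills the card completely. A minimal sketch of how such a setting could be modelled, assuming a pydantic-style settings class; the class name, unit (GB), and default are illustrative assumptions, not the project's actual schema:

```python
# Hypothetical sketch: a reserved-VRAM headroom setting. The field name
# matches the commit; the unit (GB), default, and class are assumptions.
from pydantic import BaseModel

class ModelCacheSettings(BaseModel):
    gpu_mem_reserved: float = 2.0  # GB of VRAM to keep free for inference

def vram_budget(total_vram_gb: float, settings: ModelCacheSettings) -> float:
    """VRAM left for cached models after the reserve is held back."""
    return max(0.0, total_vram_gb - settings.gpu_mem_reserved)
```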
Lincoln Stein
8d7dba937d fix undefined variable 2023-07-09 14:37:45 -04:00
Lincoln Stein
d6cb0e54b3 don't unload models from GPU until the space is needed 2023-07-09 14:26:30 -04:00
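The policy in this commit, keep models resident on the GPU and evict only when an incoming load actually needs the room, can be sketched as a lazy LRU cache. The class and method names below are hypothetical, not InvokeAI's real cache API:

```python
# Hypothetical sketch of lazy GPU eviction: models stay resident after use
# and are offloaded oldest-first only when a new load would not fit.
from collections import OrderedDict

class LazyGPUCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.resident: "OrderedDict[str, int]" = OrderedDict()  # key -> size
        self.used = 0

    def load(self, key: str, size_bytes: int) -> None:
        # Evict least-recently-used models only once the space is needed.
        while self.resident and self.used + size_bytes > self.capacity:
            _, freed = self.resident.popitem(last=False)
            self.used -= freed
        self.resident[key] = size_bytes
        self.used += size_bytes
```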
Lincoln Stein
10d3bccf32
Mac MPS FP16 fixes (#3641)
This PR allows FP16 precision to work on Macs with MPS. In addition,
it centralizes the torch fixes/workarounds required for MPS into a new
backend utility, `mps_fixes.py`, which is conditionally imported in
`api_app.py`/`cli_app.py`.

Many MANY thanks to @StAlKeR7779 for patiently working to debug and fix
these issues.
2023-07-07 17:43:23 -04:00
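The conditional import the PR describes would look roughly like the guard below. `mps_fixes` is the module named in the PR; the exact import path and the availability check are assumptions:

```python
# Hypothetical sketch of the conditional import described in the PR: apply
# the MPS workarounds only when an Apple Metal (MPS) device is present.
import torch

if torch.backends.mps.is_available():
    # Importing the module installs the torch monkey-patches as a side effect.
    import invokeai.backend.util.mps_fixes  # noqa: F401
```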
Eugene Brodsky
97b2ec58e2
Merge branch 'main' into release/invokeai-3-0-alpha 2023-07-07 14:18:12 -04:00
Lincoln Stein
56f4712814 fix checkpoint VAE handling in migrate script 2023-07-07 09:34:42 -04:00
Lincoln Stein
9f58ed35cf improve user migration experience
- No longer fail root directory probing if invokeai.yaml is missing
  (test is now whether a `models/core` directory exists).
- Migrate script does not overwrite previously-installed models.
- The migrate script can be run on an existing 2.3 directory, with
  `--from` and `--to` pointing to the same 2.3 root.
2023-07-07 08:18:46 -04:00
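The relaxed probe from the first bullet reduces to a single directory test. A minimal sketch, with a hypothetical function name:

```python
# Hypothetical sketch of the relaxed root probe: accept a root whose
# models/core directory exists, even if invokeai.yaml is missing.
from pathlib import Path

def looks_like_invokeai_root(path: Path) -> bool:
    return (path / "models" / "core").is_dir()
```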
blessedcoolant
7aa918677e Merge branch 'main' into feat/clip_skip 2023-07-07 16:21:53 +12:00
Lincoln Stein
54f3686e3b merge with main, fix conflicts 2023-07-06 15:21:45 -04:00
Lincoln Stein
e9352227f3 add merge api 2023-07-06 15:12:34 -04:00
blessedcoolant
bc5371eeee
Merge branch 'main' into feat/clip_skip 2023-07-07 06:03:39 +12:00
Lincoln Stein
e573a533ae remove redundant import 2023-07-06 13:24:58 -04:00
Lincoln Stein
581be42c75
Merge branch 'main' into lstein/model-manager-router-api 2023-07-06 13:20:36 -04:00
Lincoln Stein
90c66aab3d merge with upstream 2023-07-06 13:17:02 -04:00
Lincoln Stein
3e925fbf34 model merging API ready for testing 2023-07-06 13:15:15 -04:00
Lincoln Stein
ec7c2f07c6 model merge backend, CLI and TUI working 2023-07-06 12:21:42 -04:00
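At its core, a model merge is a weighted blend of corresponding weights from two (or more) checkpoints. A minimal sketch assuming simple linear interpolation of two state dicts; the commit's actual backend is more elaborate and supports additional interpolation methods:

```python
# Hypothetical sketch of weighted-sum merging: blend matching tensors with
# a single alpha. A production merge also handles missing keys and dtypes.
import torch

def merge_state_dicts(a: dict, b: dict, alpha: float = 0.5) -> dict:
    merged = {}
    for key, tensor_a in a.items():
        merged[key] = torch.lerp(tensor_a.float(), b[key].float(), alpha)
    return merged
```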
blessedcoolant
b229fe19aa
Merge branch 'main' into lstein/configure-max-cache-size 2023-07-07 01:52:12 +12:00
Sergey Borisov
04b57c408f Add clip skip option to prompt node 2023-07-06 16:09:40 +03:00
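CLIP skip takes the prompt conditioning from an earlier layer of the CLIP text encoder instead of the last one. A rough sketch with Hugging Face transformers; the skip convention shown (0 = no skip) and the re-applied final layer norm are illustrative assumptions:

```python
# Hypothetical sketch of CLIP skip: pull hidden states from `clip_skip`
# layers before the final one and re-apply the final layer norm.
from transformers import CLIPTextModel, CLIPTokenizer

def encode_prompt(prompt: str, clip_skip: int = 0):
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
    tokens = tokenizer(prompt, return_tensors="pt")
    out = encoder(**tokens, output_hidden_states=True)
    if clip_skip > 0:
        hidden = out.hidden_states[-(clip_skip + 1)]  # step back clip_skip layers
        return encoder.text_model.final_layer_norm(hidden)
    return out.last_hidden_state
```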
blessedcoolant
6f1268e2b1
Merge branch 'main' into lstein/more-model-loading-fixes 2023-07-07 00:32:22 +12:00
Lincoln Stein
8f5fcb188c
Merge branch 'main' into lstein/model-manager-router-api 2023-07-05 23:16:43 -04:00
Lincoln Stein
f7daa6e71d all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT 2023-07-05 23:13:01 -04:00
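For the convert route, PUT fits because conversion replaces a model's stored representation in place. A hedged sketch of what such a route might look like under FastAPI; the path, response model, and handler body are assumptions, not the actual router:

```python
# Hypothetical sketch of a PUT convert endpoint that returns a model-config
# payload; names and shapes are illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ModelConfig(BaseModel):
    model_name: str
    model_format: str

@app.put("/api/v1/models/{model_name}/convert", response_model=ModelConfig)
async def convert_model(model_name: str) -> ModelConfig:
    # ...run the checkpoint -> diffusers conversion here...
    return ModelConfig(model_name=model_name, model_format="diffusers")
```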
Lincoln Stein
3691b55565 fix autoimport crash 2023-07-05 21:53:08 -04:00
Lincoln Stein
f610045a14
Merge branch 'main' into mps-fp16-fixes 2023-07-05 21:01:48 -04:00
Lincoln Stein
a7cbcae176 expose max_cache_size to invokeai-configure interface 2023-07-05 20:59:57 -04:00
Lincoln Stein
0a6dccd607 expose max_cache_size to invokeai-configure interface 2023-07-05 20:59:14 -04:00
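`max_cache_size` caps the RAM the model cache may use. The configure interface is a console form; the plain `input()` prompt below is a stand-in sketch, with an assumed default:

```python
# Hypothetical stand-in for the configure field that exposes max_cache_size;
# the real interface is a console form, and the default here is illustrative.
def prompt_max_cache_size(default_gb: float = 6.0) -> float:
    raw = input(f"Model RAM cache size (GB) [{default_gb}]: ").strip()
    return float(raw) if raw else default_gb
```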
Lincoln Stein
43c51ff157
Merge branch 'main' into lstein/more-model-loading-fixes 2023-07-05 20:48:15 -04:00
Lincoln Stein
cfa3b2419c partial implementation of merge 2023-07-05 20:25:47 -04:00
Lincoln Stein
d4550b3059 clean up lint errors in lora.py 2023-07-05 19:18:25 -04:00
Lincoln Stein
83d3a043da merge latest changes from main 2023-07-05 19:15:53 -04:00
gogurtenjoyer
169ff6368b
Update mps_fixes.py - additional torch op for nodes
This fixes scaling in the nodes UI.
2023-07-05 17:47:23 -04:00
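Fixes in a module like `mps_fixes.py` usually wrap an op that misbehaves on MPS so that it runs in float32 and casts back. The log does not say which op this commit patched, so the following shows only the general pattern:

```python
# Illustrative monkey-patch pattern (not necessarily the op from this
# commit): run an MPS-unfriendly op in float32, then cast back to half.
import torch
import torch.nn.functional as F

_original_interpolate = F.interpolate

def _interpolate_mps_safe(x, *args, **kwargs):
    if x.device.type == "mps" and x.dtype == torch.float16:
        return _original_interpolate(x.float(), *args, **kwargs).half()
    return _original_interpolate(x, *args, **kwargs)

F.interpolate = _interpolate_mps_safe
```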
Lincoln Stein
71dad6d404
Merge branch 'main' into ti-ui 2023-07-05 16:57:31 -04:00
Lincoln Stein
685a47cc7d fix crash during lora application 2023-07-05 16:40:47 -04:00
Lincoln Stein
cb947bcbf0
Merge branch 'main' into lstein/fix-migrate3-textencoder 2023-07-05 16:23:00 -04:00
Lincoln Stein
f8bbec8572 Recognize and load diffusers-style LoRAs (.bin)
Prevent double-reporting of autoimported models
- closes #3636

Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:21:23 -04:00
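Recognizing a diffusers-style LoRA comes down to probing the file on disk. A sketch under the assumption that the format is inferred from the suffix; the return labels are illustrative:

```python
# Hypothetical sketch: classify a LoRA file by suffix so .bin
# (diffusers-style) files are recognized alongside .safetensors.
from pathlib import Path
from typing import Optional

def lora_format(path: Path) -> Optional[str]:
    suffix = path.suffix.lower()
    if suffix == ".safetensors":
        return "safetensors"
    if suffix == ".bin":
        return "diffusers"
    return None
```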
Lincoln Stein
863336acbb Recognize and load diffusers-style LoRAs (.bin)
Prevent double-reporting of autoimported models
- closes #3636

Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:19:16 -04:00
Lincoln Stein
90ae8ce26a prevent model install crash "torch needs to be restarted with spawn" 2023-07-05 16:18:20 -04:00
Lincoln Stein
ad5d90aca8 prevent model install crash "torch needs to be restarted with spawn" 2023-07-05 15:38:07 -04:00
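The quoted error appears when CUDA has already been initialized in a parent process that then forks a worker; the standard remedy is to select the spawn start method before any worker exists. A minimal sketch of that guard:

```python
# Standard guard against the "torch needs to be restarted with spawn" class
# of crashes: choose the spawn start method before creating any workers.
import multiprocessing

if __name__ == "__main__":
    multiprocessing.set_start_method("spawn", force=True)
    # ...launch the model installer / download workers here...
```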
Lincoln Stein
5b6dd47b9f add API for model convert 2023-07-05 15:13:21 -04:00
Lincoln Stein
5027d0a603 accept @psychedelicious's suggestions above 2023-07-05 14:50:57 -04:00
Lincoln Stein
9f9ce08e44
Merge branch 'main' into lstein/remove-hardcoded-cuda-device 2023-07-05 13:38:33 -04:00
Lincoln Stein
021e1eca8e
Merge branch 'main' into mps-fp16-fixes 2023-07-05 13:19:52 -04:00
Lincoln Stein
5fe722900d allow clip-vit-large-patch14 text encoder to coexist with tokenizer in the same directory 2023-07-05 13:15:08 -04:00
Lincoln Stein
cf173b522b allow clip-vit-large-patch14 text encoder to coexist with tokenizer in the same directory 2023-07-05 13:14:41 -04:00
blessedcoolant
9e2d63ef97
Merge branch 'main' into fix/ckpt_convert_scan 2023-07-06 05:01:34 +12:00
Sergey Borisov
0ac9dca926 Fix loading diffusers TI (textual inversion) 2023-07-05 19:46:00 +03:00
Lincoln Stein
bd82c4ace0 model installer confirms deletion of models 2023-07-05 09:57:23 -04:00
Lincoln Stein
9edf78dd2e merge with main 2023-07-05 09:12:54 -04:00
Lincoln Stein
6112197edf convert implemented; need router 2023-07-05 09:05:05 -04:00
gogurtenjoyer
ba7345deb4
Merge branch 'main' into mps-fp16-fixes 2023-07-05 07:38:41 -04:00
Sergey Borisov
ee042ab76d Fix ckpt scanning on conversion 2023-07-05 14:18:30 +03:00