Lincoln Stein
0d3addc69b
added textual inversion and lora loaders
2024-03-01 10:42:33 +11:00
Lincoln Stein
67eb715093
loaders for main, controlnet, ip-adapter, clipvision and t2i
2024-03-01 10:42:33 +11:00
psychedelicious
0f8af643d1
chore(backend): rename ModelInfo -> LoadedModelInfo
...
We have two different classes named `ModelInfo`, both of which API consumers might need. We need to export both, so the naming collision has to be resolved.
The `ModelInfo` I've renamed here is the one that is returned when a model is loaded. It's the object least likely to be used by API consumers.
2024-03-01 10:42:33 +11:00
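A minimal sketch of the rename described in the commit above, with illustrative fields rather than the actual invokeai definitions: the install-time record keeps the `ModelInfo` name, while the load-time record is exported as `LoadedModelInfo`, so both can appear in the public API without colliding.

```python
# Illustrative sketch only; field names are assumptions, not the real classes.
from dataclasses import dataclass


@dataclass
class ModelInfo:
    """Install-time record describing a model (the one API consumers mostly use)."""
    name: str
    base: str
    type: str


@dataclass
class LoadedModelInfo:
    """Load-time record returned when a model is brought into memory (formerly `ModelInfo`)."""
    name: str
    context: object  # handle to the loaded weights / execution context


__all__ = ["ModelInfo", "LoadedModelInfo"]  # both exported, no name collision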
blessedcoolant
ff9bd040cc
possible fix: Seamless not working with custom VAEs
2024-02-14 16:13:11 -05:00
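For context on the fix above: seamless tiling is usually achieved by switching convolution padding to circular mode, and the trick only works if it is applied to the UNet and to whichever VAE is active, including a user-supplied custom VAE. The sketch below is illustrative, not the actual invokeai helper.

```python
import torch.nn as nn


def enable_seamless(*models: nn.Module) -> None:
    """Switch every Conv2d to circular padding so outputs wrap at the edges."""
    for model in models:
        for layer in model.modules():
            if isinstance(layer, nn.Conv2d):
                layer.padding_mode = "circular"


# The custom VAE must be included too, otherwise its decoder still pads with zeros:
# enable_seamless(unet, custom_vae)
```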
Kent Keirsey
17d5f7bebd
Critical Space Removal
2024-02-14 16:13:11 -05:00
Kent Keirsey
30dae0f5aa
adding back skipped layer
2024-02-14 16:13:11 -05:00
psychedelicious
3339ad4df8
feat(nodes): seamless.py minor cleanup
2024-02-13 13:34:48 +11:00
Kent Keirsey
c3b2a8cb27
Quick Seamless Fixes
2024-02-13 13:34:48 +11:00
Lincoln Stein
8e500283b6
Fix broken import in checkpoint_convert (#5635)
...
* Fix broken import in checkpoint_convert
* simplify the fix
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-04 12:56:51 +00:00
Brandon Rising
a04981b418
This seems to work now
2024-01-30 21:32:08 -05:00
Lincoln Stein
bd31b5606c
Retain suffix (.safetensors, .bin) when renaming a checkpoint file or LoRA
...
- closes #5518
2024-01-16 17:27:50 -05:00
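A hedged sketch of the behavior described in the commit above (function and argument names are assumptions, not the installer's API): keep the original extension when a checkpoint or LoRA file is renamed.

```python
from pathlib import Path


def rename_model_file(old_path: Path, new_name: str) -> Path:
    """Rename a model file while preserving its extension (.safetensors, .bin, ...)."""
    suffix = old_path.suffix
    if not new_name.endswith(suffix):
        new_name += suffix
    new_path = old_path.with_name(new_name)
    old_path.rename(new_path)
    return new_path
```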
Lincoln Stein
7269c9f02e
Enable correct probing of LoRA latent-consistency/lcm-lora-sdxl (#5449)
...
- Closes #5435
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-08 17:18:26 -05:00
Lincoln Stein
ffa05a0bb3
Only replace vae when it is the broken SDXL 1.0 version
2024-01-06 14:06:47 -05:00
Lincoln Stein
a20e17330b
blackify
2024-01-06 14:06:47 -05:00
Lincoln Stein
4e83644433
if the sdxl-vae-fp16-fix model is available, bake it in when converting ckpts
2024-01-06 14:06:47 -05:00
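A rough sketch of the idea in the commit above, expressed with plain diffusers calls rather than the actual conversion code path; the file names are placeholders.

```python
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Convert a single-file SDXL checkpoint, swapping in the fp16-safe VAE if available.
pipe = StableDiffusionXLPipeline.from_single_file("sdxl_checkpoint.safetensors")
pipe.vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix")
pipe.save_pretrained("converted_model")
```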
Ryan Dick
f7f697849c
Skip weight initialization when resizing text encoder token embeddings to accommodate new TI embeddings. This saves time.
2024-01-05 15:16:00 -05:00
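The sketch below illustrates why skipping initialization saves time: the rows added by `resize_token_embeddings` are immediately overwritten with the TI vectors, so their default random init is wasted work. The context-manager name and its scope are assumptions; the real helper may patch more layer types.

```python
from contextlib import contextmanager

import torch


@contextmanager
def skip_torch_weight_init():
    """Temporarily make nn.Embedding initialization a no-op."""
    saved = torch.nn.Embedding.reset_parameters
    torch.nn.Embedding.reset_parameters = lambda self: None
    try:
        yield
    finally:
        torch.nn.Embedding.reset_parameters = saved


def install_ti_embeddings(text_encoder, tokenizer, ti_vectors: torch.Tensor) -> None:
    """Resize the token embedding table and copy TI vectors into the new rows.

    Assumes the TI trigger tokens were already added to `tokenizer`.
    """
    with skip_torch_weight_init():
        text_encoder.resize_token_embeddings(len(tokenizer))
    with torch.no_grad():
        text_encoder.get_input_embeddings().weight[-ti_vectors.shape[0]:] = ti_vectors
```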
Millun Atluri
53b835945f
Updated with ruff formatting
2023-12-28 11:05:19 +11:00
woweenie
e38d0e39b7
fix bug when there are two multi-vector TIs in a prompt
2023-12-27 22:14:14 +01:00
Ryan Dick
cb698ff1fb
Update model_probe to work with diffusers-format SD TI embeddings.
2023-12-18 09:51:16 -05:00
Lincoln Stein
212dbaf9a2
fix comment
2023-12-15 00:25:27 -05:00
Lincoln Stein
ac3cf48d7f
make probe recognize lora format at https://civitai.com/models/224641
2023-12-15 00:25:27 -05:00
Lincoln Stein
75089b7a9d
merge in changes from main
2023-12-01 09:18:07 -05:00
Lincoln Stein
5a3f1f2b22
fix ruff github format errors
2023-12-01 01:59:26 -05:00
Lincoln Stein
f95ce1870c
fix ruff format check
2023-12-01 01:46:12 -05:00
Lincoln Stein
0719a46372
add support for SDXL textual inversion/embeddings
2023-12-01 01:28:28 -05:00
Lincoln Stein
ecd3dcd5df
Merge branch 'main' into refactor/model-manager-3
2023-11-27 22:15:51 -05:00
Steven Frank
e509d719ee
Fix attempt to deserialize on CUDA on Mac
...
Without specifying "cpu", torch attempts to use the non-existent CUDA device to deserialize embeddings on macOS, resulting in a warning and a failure to load.
2023-11-28 09:24:57 +11:00
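The usual shape of this fix, shown as a hedged one-liner (the path is a placeholder): pass `map_location="cpu"` so tensors saved from a CUDA machine deserialize on a Mac that has no CUDA device.

```python
import torch

embedding = torch.load("embedding.pt", map_location="cpu")  # placeholder path
```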
Lincoln Stein
8ef596eac7
further changes for ruff
2023-11-26 17:13:31 -05:00
Lincoln Stein
8c7a7bc897
Merge branch 'main' into refactor/model-manager-3
2023-11-22 22:29:23 -05:00
Lincoln Stein
4aab728590
move name/description logic into model_probe.py
2023-11-22 22:29:02 -05:00
Lincoln Stein
98a4930a52
add probe support for LCM main models
2023-11-22 14:58:27 -05:00
psychedelicious
1a596a5684
fix(backend): fix unintentional change to import orders
...
- Ignore I001 (isort rules) for this file
- Ignore F401 (unused imports) for this file
2023-11-21 20:22:27 +11:00
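One common way to express such per-file ignores is a file-level ruff directive at the top of the affected module; the sketch below uses stdlib imports as stand-ins for the real ones.

```python
# ruff: noqa: I001, F401
# I001 would re-sort these imports; F401 would flag the deliberately unused ones.
import os
import sys
```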
psychedelicious
da443973cb
chore: ruff
2023-11-21 20:22:27 +11:00
psychedelicious
6494e8e551
chore: ruff format
2023-11-11 10:55:40 +11:00
psychedelicious
99a8ebe3a0
chore: ruff check - fix flake8-bugbear
2023-11-11 10:55:28 +11:00
psychedelicious
3a136420d5
chore: ruff check - fix flake8-comprehensions
2023-11-11 10:55:23 +11:00
Wubbbi
6001d3d71d
Change pad_to_multiple_of to be 8 for all cases. Add comment about its temporary status
2023-11-10 17:51:59 -05:00
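A hedged sketch of the call mentioned above (the actual call site in the TI loading code differs): padding the resized embedding table to a multiple of 8 keeps tensor shapes friendly to fused/tensor-core kernels.

```python
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# Always pad the embedding matrix up to a multiple of 8.
text_encoder.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=8)
```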
Wubbbi
8831d1ee41
Update Documentation
2023-11-10 17:51:59 -05:00
Wubbbi
a0be83e370
Update Transformers to 4.34 and fix pad_to_multiple_of
2023-11-10 17:51:59 -05:00
Lincoln Stein
8702a63197
add support for downloading and installing LCM lora diffusers models
2023-11-10 17:51:30 -05:00
psychedelicious
6aa87f973e
fix(nodes): create app/shared/ module to prevent circular imports
...
We have a number of shared classes, objects, and functions that are used in multiple places, which causes circular import issues.
This commit creates a new `app/shared/` module to hold them.
Initially, only `FreeUConfig` and `FieldDescriptions` are moved there; this resolves a circular import issue with custom nodes.
Other shared definitions will be moved over in future commits.
2023-11-09 16:41:55 +11:00
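A sketch of the pattern described above, with illustrative paths and fields rather than the exact invokeai tree: shared definitions live in `app/shared/`, and node modules import from there instead of from each other, so no import cycle can form.

```python
# app/shared/fields.py (illustrative)
from dataclasses import dataclass


@dataclass
class FreeUConfig:
    """Shared config object importable by any node without creating a cycle."""
    s1: float
    s2: float
    b1: float
    b2: float


# app/invocations/some_node.py (illustrative)
# from app.shared.fields import FreeUConfig
```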
Kent Keirsey
e66d0f7372
Merge branch 'main' into feat/nodes/freeu
2023-11-06 05:39:58 -08:00
Ryan Dick
aa02ebf8f5
Fix model cache gc.collect() condition.
2023-11-04 08:52:10 -04:00
Ryan Dick
fb3d0c4b12
Fix bug in model cache reference count checking.
2023-11-03 13:50:40 -07:00
Ryan Dick
8488ab0134
Reduce frequency that we call gc.collect() in the model cache.
2023-11-03 13:50:40 -07:00
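A hedged sketch of the general idea behind the gc.collect() commits above (not the actual ModelCache code): pay the garbage-collection cost only when the cache actually evicts something, rather than after every model offload.

```python
import gc


class ModelCacheSketch:
    def __init__(self, max_models: int = 4):
        self._cache: dict[str, object] = {}
        self._max_models = max_models

    def put(self, key: str, model: object) -> None:
        self._cache[key] = model
        self._make_room()

    def _make_room(self) -> None:
        evicted = False
        while len(self._cache) > self._max_models:
            self._cache.pop(next(iter(self._cache)))  # drop oldest entry
            evicted = True
        if evicted:
            gc.collect()  # only collect when something was actually freed
```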
Ryan Dick
875231ed3d
Add reminder to clean up our model cache clearing logic.
2023-11-03 13:50:40 -07:00
Ryan Dick
43b300498f
Remove explicit gc.collect() after transferring models from device to CPU. I'm not sure why this was there in the first place, but it was taking a significant amount of time (up to ~1sec in my tests).
2023-11-03 13:50:40 -07:00
Ryan Dick
e391f3c9a8
Skip torch.nn.Embedding.reset_parameters(...) when loading a text encoder model.
2023-11-02 19:41:33 -07:00
Ryan Dick
6e7a3f0546
(minor) Fix static checks and typo.
2023-11-02 19:20:37 -07:00
Ryan Dick
4a683cc669
Add an app config parameter to control the ModelCache logging behavior.
2023-11-02 19:20:37 -07:00
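A sketch of what such a switch might look like; the field name is an assumption, not necessarily the real invokeai.yaml key.

```python
from dataclasses import dataclass


@dataclass
class AppConfigSketch:
    # When True, the ModelCache logs detailed RAM/VRAM usage for each load.
    log_memory_usage: bool = False
```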