Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00
InvokeAI / invokeai / backend / model_manager / load
Latest commit: 21a60af881 by Lincoln Stein, 2024-05-29 03:01:21 +00:00
when unlocking models, offload_unlocked_models should prune to vram limit only (#6450)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
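The #6450 fix narrows the cache's unlock behavior: rather than offloading every unlocked model, `offload_unlocked_models` prunes only until VRAM usage is back under the configured limit. The sketch below illustrates that idea; `CacheRecord`, the record list, and this function signature are illustrative guesses, not the actual InvokeAI model_cache API.

```python
import torch
from dataclasses import dataclass, field


@dataclass
class CacheRecord:
    """Illustrative cache entry; not the actual InvokeAI record type."""
    model: torch.nn.Module
    size_bytes: int
    locked: bool = False
    device: torch.device = field(default_factory=lambda: torch.device("cuda"))


def offload_unlocked_models(records: list[CacheRecord], vram_limit_bytes: int) -> None:
    """Offload unlocked models to CPU only until VRAM usage fits the limit.

    Pruning stops as soon as usage is within budget, so recently used models
    stay warm on the GPU instead of being offloaded wholesale on unlock.
    """
    in_vram = [r for r in records if r.device.type == "cuda"]
    used = sum(r.size_bytes for r in in_vram)
    # records is assumed ordered least-recently-used first.
    for record in in_vram:
        if used <= vram_limit_bytes:
            break  # within the VRAM limit: nothing more to offload
        if record.locked:
            continue  # locked models must stay on the GPU
        record.model.to("cpu")
        record.device = torch.device("cpu")
        used -= record.size_bytes
```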
Name                       Date                          Last commit message
convert_cache/             2024-03-29 16:11:08 -04:00    [mm] Do not write diffuser model to disk when convert_cache set to zero (#6072) (sketch below)
model_cache/               2024-05-29 03:01:21 +00:00    when unlocking models, offload_unlocked_models should prune to vram limit only (#6450) (sketched above)
model_loaders/             2024-04-03 20:28:00 +05:30    chore: clean up merge conflicts
__init__.py                …
load_base.py               2024-03-29 16:11:08 -04:00    [mm] Do not write diffuser model to disk when convert_cache set to zero (#6072)
load_default.py            2024-04-15 13:12:49 +00:00    [util] Add generic torch device class (#6174) (sketch below)
memory_snapshot.py         …
model_loader_registry.py   2024-03-08 15:36:14 -05:00    Experiment with using absolute paths within model management (see registry sketch below)
model_util.py              …
optimizations.py           …