Mirrored_Repos / InvokeAI
Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00.
Path: invokeai/backend/model_manager/load/model_cache

History
Latest commit: 21a60af881 by Lincoln Stein
    when unlocking models, offload_unlocked_models should prune to vram limit only (#6450)
    ...
    Co-authored-by: Lincoln Stein <lstein@gmail.com>
    2024-05-29 03:01:21 +00:00
__init__.py
    BREAKING CHANGES: invocations now require model key, not base/type/name
    2024-03-01 10:42:33 +11:00

model_cache_base.py
    Optimize RAM to VRAM transfer (#6312)
    2024-05-24 17:06:09 +00:00

model_cache_default.py
    Optimize RAM to VRAM transfer (#6312)
    2024-05-24 17:06:09 +00:00

model_locker.py
    when unlocking models, offload_unlocked_models should prune to vram limit only (#6450)
    2024-05-29 03:01:21 +00:00