mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Commit dab03fb646
To be consistent with max_cache_size, the amount of memory to hold in VRAM for model caching is now controlled by the max_vram_cache_size configuration parameter.
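The commit message describes a VRAM budget for the model cache that works alongside the existing RAM budget. Below is a minimal, illustrative sketch of that idea, not InvokeAI's actual ModelCache implementation: the class, method, and parameter names are hypothetical, and the two budgets simply stand in for the max_cache_size and max_vram_cache_size configuration values.

```python
# Illustrative sketch only: a toy cache that keeps recently used models in VRAM
# up to one budget (analogous to max_vram_cache_size) and spills them to RAM up
# to another budget (analogous to max_cache_size). Names are hypothetical, not
# InvokeAI's real API.
from collections import OrderedDict


class ToyModelCache:
    def __init__(self, max_cache_size_gb: float, max_vram_cache_size_gb: float):
        self.max_cache_size_gb = max_cache_size_gb              # RAM budget
        self.max_vram_cache_size_gb = max_vram_cache_size_gb    # VRAM budget
        self._ram: "OrderedDict[str, float]" = OrderedDict()    # key -> size in GB
        self._vram: "OrderedDict[str, float]" = OrderedDict()   # key -> size in GB

    def load(self, key: str, size_gb: float) -> None:
        """Bring a model into VRAM, evicting least-recently-used entries as needed."""
        # Insert (or refresh) the model in the VRAM map and mark it most recent.
        self._vram[key] = size_gb
        self._vram.move_to_end(key)
        # Evict least-recently-used VRAM entries back to RAM until the VRAM budget fits.
        while sum(self._vram.values()) > self.max_vram_cache_size_gb and len(self._vram) > 1:
            old_key, old_size = self._vram.popitem(last=False)
            self._ram[old_key] = old_size
        # Drop least-recently-used RAM entries entirely once the RAM budget is exceeded.
        while sum(self._ram.values()) > self.max_cache_size_gb and self._ram:
            self._ram.popitem(last=False)


# Usage with example budgets that mirror the two configuration parameters.
cache = ToyModelCache(max_cache_size_gb=6.0, max_vram_cache_size_gb=2.75)
cache.load("sd-1.5-unet", 1.7)
cache.load("sd-1.5-vae", 0.3)
```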
models/
__init__.py
convert_ckpt_to_diffusers.py
lora.py
model_cache.py
model_manager.py
model_merge.py
model_probe.py