Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Commit d32f9f7cb0
- `gpu_mem_reserved` now indicates the amount of VRAM that will be reserved for model caching (similar to `max_cache_size`).

Directory listing:

- models/
- __init__.py
- convert_ckpt_to_diffusers.py
- lora.py
- model_cache.py
- model_manager.py
- model_merge.py
- model_probe.py
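To make the `gpu_mem_reserved` semantics concrete, here is a minimal sketch of the idea the commit message describes: a cache of loaded models that evicts least-recently-used entries once their total size exceeds a reserved VRAM budget. All names (`ModelCacheSketch`, `put`, the 2.75 GB default) are hypothetical illustrations, not InvokeAI's actual `ModelCache` API.

```python
# Hypothetical sketch (not InvokeAI's real implementation): an LRU cache of
# model sizes that stays within a reserved VRAM budget, akin to how
# gpu_mem_reserved caps VRAM used for model caching.
from collections import OrderedDict

GB = 2**30


class ModelCacheSketch:
    def __init__(self, gpu_mem_reserved: float = 2.75):
        # VRAM (in GB) reserved for cached models, similar to max_cache_size.
        self.budget = gpu_mem_reserved * GB
        self._cache: "OrderedDict[str, int]" = OrderedDict()  # name -> bytes
        self.used = 0

    def put(self, key: str, size: int) -> None:
        self._cache[key] = size
        self._cache.move_to_end(key)  # mark as most recently used
        self.used += size
        # Evict least-recently-used models until we fit in the reserved budget.
        while self.used > self.budget and len(self._cache) > 1:
            _, evicted_size = self._cache.popitem(last=False)
            self.used -= evicted_size


cache = ModelCacheSketch(gpu_mem_reserved=2.75)
cache.put("unet", 2 * GB)
cache.put("vae", 1 * GB)    # pushes usage past 2.75 GB, so "unet" is evicted
print(list(cache._cache))   # -> ['vae']
```

The point of reserving a fixed budget rather than caching greedily is that the rest of VRAM stays free for the active generation pass.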