Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Commit 0c8f0e3386
- `ldm.generate.Generator()` now takes an argument named `max_load_models`. This is an integer that limits the model cache size. When the cache reaches the limit, it starts purging older models from the cache.
- The CLI takes an argument `--max_load_models`, defaulting to 2. This keeps one model in the GPU and the other in the CPU, allowing quick switching back and forth between them.
- To disable model caching entirely, pass `--max_load_models=1`.
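The purging behaviour described above amounts to a least-recently-used (LRU) cache capped at `max_load_models` entries. Below is a minimal sketch of such a cache; the class and method names (`ModelCache`, `get`, `loader`) are illustrative assumptions, not InvokeAI's actual internals.

```python
from collections import OrderedDict
from typing import Callable

class ModelCache:
    """Minimal LRU cache sketch, capped at `max_load_models` entries.
    Hypothetical names for illustration, not InvokeAI's implementation."""

    def __init__(self, max_load_models: int = 2):
        self.max_load_models = max_load_models
        self._cache: "OrderedDict[str, object]" = OrderedDict()

    def get(self, name: str, loader: Callable[[str], object]) -> object:
        # A cache hit marks the model as most recently used.
        if name in self._cache:
            self._cache.move_to_end(name)
            return self._cache[name]
        # On a miss, load the model, then purge the least recently
        # used entries until the cache is back under its limit.
        model = loader(name)
        self._cache[name] = model
        while len(self._cache) > self.max_load_models:
            self._cache.popitem(last=False)
        return model
```

With `max_load_models=2`, alternating between two models never triggers a reload, since both stay cached; with `max_load_models=1`, every switch purges the previous model, which matches the "no caching" behaviour described above.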
orig_scripts
dream.py
images2prompt.py
invoke.py
legacy_api.py
merge_embeddings.py
preload_models.py
sd-metadata.py