Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)
Commit dc556cb1a7
- ldm.generate.Generator() now takes an argument named `max_load_models`. This is an integer that limits the model cache size; when the cache reaches the limit, it starts purging older models from the cache.
- The CLI takes an argument --max_load_models, defaulting to 2. This keeps one model in the GPU and the other in the CPU, allowing quick switching back and forth between them.
- To not cache models at all, pass --max_load_models=1.
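For illustration, here is a minimal sketch of the cache-purging behavior the commit message describes, using a least-recently-used eviction policy. The `ModelCache` class and the `loader` callback are hypothetical names chosen for this example, not InvokeAI's actual API:

```python
from collections import OrderedDict
from typing import Callable

class ModelCache:
    """Hypothetical sketch of a size-limited model cache.

    Illustrates the purging behavior described in the commit message;
    this is not InvokeAI's actual implementation.
    """

    def __init__(self, max_load_models: int = 2):
        self.max_load_models = max_load_models
        self._models: "OrderedDict[str, object]" = OrderedDict()

    def get(self, name: str, loader: Callable[[str], object]) -> object:
        # Cache hit: reuse the model and mark it as most recently used.
        if name in self._models:
            self._models.move_to_end(name)
            return self._models[name]
        # Cache full: purge the least recently used model(s) first.
        while len(self._models) >= self.max_load_models:
            self._models.popitem(last=False)
        # Cache miss: load the model (e.g. from disk) and store it.
        model = loader(name)
        self._models[name] = model
        return model
```

With `max_load_models=2`, two models stay resident and switching between them never triggers a reload; with `max_load_models=1`, only the active model is kept, so every switch to a different model reloads it, matching the effectively uncached behavior of `--max_load_models=1`.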
Directory listing:
- data
- invoke
- models
- modules
- generate.py
- lr_scheduler.py
- simplet2i.py
- util.py