Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00.
Commit 0c8f0e3386

- ldm.generate.Generator() now takes an argument named `max_load_models`. This integer limits the size of the model cache; when the cache reaches the limit, it purges older models from the cache.
- The CLI takes a corresponding argument, --max_load_models, which defaults to 2. This keeps one model in GPU memory and the other in CPU memory, allowing quick switching between the two.
- To disable model caching entirely, pass --max_load_models=1. A sketch of this eviction behavior follows below.
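To illustrate the eviction policy described above, here is a minimal, hypothetical sketch of an LRU-style model cache bounded by `max_load_models`. It is not InvokeAI's actual `ldm.model_cache` implementation; the class name `SimpleModelCache` and the `load_fn` parameter are illustrative only.

```python
from collections import OrderedDict
from typing import Any, Callable

class SimpleModelCache:
    """Illustrative LRU cache holding at most `max_load_models` loaded models.

    NOTE: a sketch of the behavior described in the commit message,
    not InvokeAI's real ModelCache.
    """

    def __init__(self, max_load_models: int = 2):
        assert max_load_models >= 1
        self.max_load_models = max_load_models
        self._cache: "OrderedDict[str, Any]" = OrderedDict()

    def get_model(self, name: str, load_fn: Callable[[str], Any]) -> Any:
        if name in self._cache:
            # Cache hit: mark this model as most recently used.
            self._cache.move_to_end(name)
            return self._cache[name]
        # Cache miss: load the model, then purge the oldest entries
        # until the cache is back under the limit.
        model = load_fn(name)
        self._cache[name] = model
        while len(self._cache) > self.max_load_models:
            _old_name, old_model = self._cache.popitem(last=False)
            del old_model  # drop the reference so memory can be reclaimed
        return model
```

With `max_load_models=1`, every request for a different model evicts the one currently cached, which corresponds to the no-caching behavior of `--max_load_models=1` above.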
Directory contents at this commit:

- generator/
- restoration/
- args.py
- conditioning.py
- devices.py
- image_util.py
- log.py
- model_cache.py
- pngwriter.py
- prompt_parser.py
- readline.py
- seamless.py
- server_legacy.py
- server.py
- txt2mask.py