InvokeAI/ldm
Lincoln Stein dc556cb1a7 add max_load_models parameter for model cache control
- ldm.generate.Generator() now takes an argument named `max_load_models`.
  This is an integer that limits the model cache size. When the cache
  reaches the limit, the oldest models are purged from the cache.

- The CLI takes an argument --max_load_models, defaulting to 2. This keeps
  one model in GPU and the other in CPU memory, allowing quick switching
  back and forth between them.

- To disable model caching entirely, pass --max_load_models=1
2022-10-31 08:55:53 -04:00
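The purging behavior described above can be sketched as a small LRU cache capped at `max_load_models`. This is an illustrative sketch only, not InvokeAI's actual implementation; the `ModelCache` class and `load_fn` parameter are hypothetical names:

```python
from collections import OrderedDict

class ModelCache:
    """Hypothetical sketch: keep at most max_load_models loaded models,
    evicting the least recently used one when the limit is exceeded."""

    def __init__(self, max_load_models=2, load_fn=None):
        self.max_load_models = max_load_models
        self.load_fn = load_fn            # callable: model name -> model object
        self._cache = OrderedDict()       # name -> model, oldest first

    def get(self, name):
        if name in self._cache:
            self._cache.move_to_end(name)   # mark as most recently used
            return self._cache[name]
        model = self.load_fn(name)          # cache miss: load the model
        self._cache[name] = model
        while len(self._cache) > self.max_load_models:
            self._cache.popitem(last=False)  # purge the oldest model
        return model
```

With `max_load_models=1` every `get` of a new name evicts the previous model, which matches the "do not cache" behavior of `--max_load_models=1` described above.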
data Textual Inversion for M1 2022-09-27 01:39:17 +02:00
invoke add max_load_models parameter for model cache control 2022-10-31 08:55:53 -04:00
models fix incorrect thresholding reporting for karras noise; close #1300 2022-10-30 10:35:55 -04:00
modules remove unneeded warnings from attention.py 2022-10-27 22:50:06 -04:00
generate.py add max_load_models parameter for model cache control 2022-10-31 08:55:53 -04:00
lr_scheduler.py prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00
simplet2i.py Squashed commit of the following: 2022-09-12 14:31:48 -04:00
util.py add ability to import and edit alternative models online 2022-10-13 23:48:07 -04:00