InvokeAI/ldm/invoke
Lincoln Stein 0c8f0e3386 add max_load_models parameter for model cache control
- `ldm.generate.Generator()` now takes an argument named `max_load_models`,
  an integer that limits the size of the model cache. When the cache reaches
  this limit, the oldest models are purged from it to make room.

- The CLI takes a `--max_load_models` argument, which defaults to 2. This
  keeps one model in the GPU and another in CPU memory, so you can switch
  back and forth between them quickly.

- To disable model caching entirely, pass `--max_load_models=1`. A sketch of
  the purging behavior is shown below the commit details.
2022-10-31 08:53:16 -04:00
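
The cache-size limit described above can be pictured as a small least-recently-used cache keyed by model name. The sketch below is illustrative only and does not reproduce the real `model_cache.py`; the class, the method names, and the LRU eviction policy are assumptions made for the example.

```python
# Illustrative sketch only -- NOT the actual ldm/invoke/model_cache.py code.
# An OrderedDict acts as an LRU cache; once it holds more than
# `max_load_models` entries, the least recently used model is purged.
from collections import OrderedDict


class ModelCacheSketch:
    def __init__(self, max_load_models: int = 2):
        self.max_load_models = max_load_models
        self._models = OrderedDict()  # model name -> loaded model object

    def get(self, name, loader):
        """Return the cached model, loading (and possibly purging) as needed."""
        if name in self._models:
            self._models.move_to_end(name)  # mark as most recently used
            return self._models[name]
        model = loader(name)                # expensive load from disk
        self._models[name] = model
        while len(self._models) > self.max_load_models:
            old_name, _ = self._models.popitem(last=False)  # drop oldest entry
            print(f'>> Purging {old_name} from cache')
        return model
```

With `max_load_models=2`, the two most recently requested models stay resident and alternating between them never triggers a reload; with `max_load_models=1`, every switch purges the previous model, matching the no-caching behavior described above.
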
| Name | Last commit | Date |
|------|-------------|------|
| generator | Don't try to tile fill if image doesn't have an alpha layer | 2022-10-29 04:25:27 +11:00 |
| restoration | hard-code strength to 0.9 during outcropping | 2022-10-31 07:52:34 -04:00 |
| args.py | add max_load_models parameter for model cache control | 2022-10-31 08:53:16 -04:00 |
| conditioning.py | refactor hybrid and cross attention control codepaths for readability | 2022-10-27 19:40:37 +02:00 |
| devices.py | rename all modules from ldm.dream to ldm.invoke | 2022-10-08 11:37:23 -04:00 |
| image_util.py | rename all modules from ldm.dream to ldm.invoke | 2022-10-08 11:37:23 -04:00 |
| log.py | rename all modules from ldm.dream to ldm.invoke | 2022-10-08 11:37:23 -04:00 |
| model_cache.py | add max_load_models parameter for model cache control | 2022-10-31 08:53:16 -04:00 |
| pngwriter.py | fix a few more metadata bugs | 2022-10-25 00:31:43 -04:00 |
| prompt_parser.py | handle all unicode characters | 2022-10-28 10:39:12 -04:00 |
| readline.py | Merge branch 'model-switching' into development | 2022-10-21 21:27:59 -04:00 |
| seamless.py | Split seamless config into separate file | 2022-10-17 19:31:20 -04:00 |
| server_legacy.py | Add back old dream.py as legacy_api.py | 2022-10-12 20:35:56 -04:00 |
| server.py | Generalize facetool strength argument | 2022-10-14 00:03:06 -04:00 |
| txt2mask.py | cleanup | 2022-10-21 15:07:11 +02:00 |