InvokeAI/invokeai/app/api/routers

Latest commit 3d6d89feb4 by Lincoln Stein:
[mm] Do not write diffuser model to disk when convert_cache set to zero (#6072)
* pass model config to _load_model
* make conversion work again
* do not write diffusers to disk when convert_cache set to 0
* adding same model to cache twice is a no-op, not an assertion error
* fix issues identified by psychedelicious during PR review
* following conversion, avoid redundant read of cached submodels
* fix error introduced while merging

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Committed: 2024-03-29 16:11:08 -04:00
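The commit bullets above describe two cache behaviors: a convert_cache size of 0 means converted diffusers models stay in memory and are never persisted, and re-adding an already-cached model is tolerated rather than treated as an error. The following is a minimal sketch of those two behaviors only; the class and method names (ConvertCache, RamCache, save_converted, put) are hypothetical and this is not InvokeAI's actual model manager code.

```python
# Sketch (hypothetical names) of the behaviors described in the commit message:
# a zero-sized convert cache never writes converted diffusers models to disk,
# and inserting the same model into the RAM cache twice is a no-op.

from pathlib import Path
from typing import Any, Dict


class ConvertCache:
    """Toy convert cache; max_size_gb=0 means 'never persist conversions to disk'."""

    def __init__(self, cache_dir: Path, max_size_gb: float) -> None:
        self.cache_dir = cache_dir
        self.max_size_gb = max_size_gb

    def save_converted(self, key: str, model: Any) -> Any:
        if self.max_size_gb == 0:
            # Convert cache disabled: hand back the in-memory diffusers model
            # without writing anything to disk.
            return model
        target = self.cache_dir / key
        target.mkdir(parents=True, exist_ok=True)
        model.save_pretrained(target)  # diffusers-style serialization
        return model


class RamCache:
    """Toy in-RAM model cache where duplicate inserts are ignored, not errors."""

    def __init__(self) -> None:
        self._models: Dict[str, Any] = {}

    def put(self, key: str, model: Any) -> None:
        if key in self._models:
            # Adding the same model twice is a no-op rather than an assertion error.
            return
        self._models[key] = model

    def get(self, key: str) -> Any:
        return self._models[key]
```

Treating the duplicate insert as a no-op lets a caller re-register a model right after conversion without first checking whether it is already cached.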
File               Last commit message                                                               Date
app_info.py        fix(config): fix nsfw_checker handling                                            2024-03-19 09:24:28 +11:00
board_images.py    Resolving merge conflicts for flake8                                              2023-08-18 15:52:04 +10:00
boards.py          feat: refactor services folder/module structure                                   2023-10-12 12:15:06 -04:00
download_queue.py  fix a number of typechecking errors                                               2024-03-01 10:42:33 +11:00
images.py          feat(bulk_download): update response model, messages                              2024-03-01 10:42:33 +11:00
model_manager.py   [mm] Do not write diffuser model to disk when convert_cache set to zero (#6072)   2024-03-29 16:11:08 -04:00
session_queue.py   feat: remove enqueue_graph routes/methods (#4922)                                 2023-10-17 18:00:40 +00:00
utilities.py       feat(api): add max_prompts constraints                                            2023-12-29 08:26:14 -05:00
workflows.py       feat: workflow library (#5148)                                                    2023-12-09 09:48:38 +11:00
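Each file listed above is a FastAPI router module mounted by the web API. As a rough illustration only, not code taken from the repository (the route path, tag, and AppVersion model are assumptions), a router in this directory follows this general shape:

```python
# Illustrative sketch of a router module in the style of the files above;
# names and paths are hypothetical, not InvokeAI's actual routes.

from fastapi import APIRouter
from pydantic import BaseModel

app_info_router = APIRouter(prefix="/v1/app", tags=["app"])


class AppVersion(BaseModel):
    """Response model carrying the application version string."""

    version: str


@app_info_router.get("/version", operation_id="app_version")
async def get_version() -> AppVersion:
    # In a real application this would read the installed package version.
    return AppVersion(version="0.0.0")
```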