InvokeAI/invokeai/backend/stable_diffusion
Lincoln Stein 7ea995149e fixes to env parsing, textual inversion & help text
- Make environment variable settings case InSenSiTive:
  INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
  environment variables will both set `max_loaded_models`
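
  The mechanics can be sketched as follows. This is a hypothetical
  stand-alone helper for illustration only, not InvokeAI's actual code
  (the real config is a pydantic settings class); it shows the intended
  behavior of matching environment variable names without regard to case:

  ```python
  import os
  from typing import Mapping, Optional

  def get_env_setting(name: str, environ: Optional[Mapping[str, str]] = None) -> Optional[str]:
      """Case-insensitive environment variable lookup (illustrative sketch)."""
      environ = os.environ if environ is None else environ
      wanted = name.upper()
      for key, value in environ.items():
          if key.upper() == wanted:
              return value
      return None

  # Both spellings of the variable resolve to the same setting:
  env = {"InvokeAI_Max_Loaded_Models": "3"}
  assert get_env_setting("INVOKEAI_MAX_LOADED_MODELS", env) == "3"
  ```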

- Updated realesrgan to use new config system.

- Updated textual_inversion_training to use new config system.

- Discovered a race condition: when InvokeAIAppConfig is created
  at module load time, it becomes impossible to customize or
  replace the help message produced by --help on the command
  line. To fix this, moved all instances of get_invokeai_config()
  from module load time to object initialization time. This also
  makes the code cleaner.
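
  The shape of the fix, as a minimal sketch (the class and constructor
  signature here are illustrative, not the actual InvokeAI API):

  ```python
  from dataclasses import dataclass

  # Before: config resolved at import time, too early for the CLI
  # to customize argument parsing or the --help text.
  #
  #     config = get_invokeai_config()   # runs on `import mymodule`
  #
  # After: config resolved lazily, when the object is constructed.
  class TextualInversionManager:
      def __init__(self, config_factory):
          # The factory (get_invokeai_config in the real code) is
          # called only here, after CLI setup has already happened.
          self.config = config_factory()
          self.max_models = self.config.max_loaded_models

  # Stand-in for the real config object, for demonstration:
  @dataclass
  class DummyConfig:
      max_loaded_models: int = 2

  assert TextualInversionManager(DummyConfig).max_models == 2
  ```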

- Added `--from_file` argument to `invokeai-node-cli` and changed
  the GitHub action to match. CI tests will hopefully work now.
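
  The flag name comes from the commit; the parser below is an
  illustrative sketch, not the real `invokeai-node-cli` parser, which
  defines many more options:

  ```python
  import argparse

  def build_parser() -> argparse.ArgumentParser:
      parser = argparse.ArgumentParser(prog="invokeai-node-cli")
      parser.add_argument(
          "--from_file",
          metavar="PATH",
          help="read commands from a file instead of stdin (useful for CI)",
      )
      return parser

  args = build_parser().parse_args(["--from_file", "commands.txt"])
  assert args.from_file == "commands.txt"
  ```
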
2023-05-18 10:48:23 -04:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| diffusion | fixes to env parsing, textual inversion & help text | 2023-05-18 10:48:23 -04:00 |
| image_degradation | rename log to logger throughout | 2023-04-29 09:43:40 -04:00 |
| schedulers | Add Heun Karras Scheduler | 2023-05-14 11:45:08 +10:00 |
| __init__.py | remove legacy ldm code | 2023-03-04 18:16:59 -08:00 |
| concepts_lib.py | fixes to env parsing, textual inversion & help text | 2023-05-18 10:48:23 -04:00 |
| diffusers_pipeline.py | fixes to env parsing, textual inversion & help text | 2023-05-18 10:48:23 -04:00 |
| offloading.py | all vestiges of ldm.invoke removed | 2023-03-03 01:02:00 -05:00 |
| textual_inversion_manager.py | rename log to logger throughout | 2023-04-29 09:43:40 -04:00 |