fixes to env parsing, textual inversion & help text

- Make environment variable settings case InSenSiTive:
  both INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
  will set `max_loaded_models`, as sketched below.
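
  A minimal sketch of the mechanism, assuming the config is built on
  pydantic v1's `BaseSettings` (the class and field below are illustrative
  stand-ins, not the real `InvokeAIAppConfig`):

  ```python
  import os
  from pydantic import BaseSettings  # pydantic v1

  class AppSettings(BaseSettings):
      # stand-in for InvokeAIAppConfig.max_loaded_models
      max_loaded_models: int = 2

      class Config:
          env_prefix = "INVOKEAI_"
          case_sensitive = False  # match env vars regardless of case

  # Both spellings now populate the same field:
  os.environ["InvokeAI_Max_Loaded_Models"] = "4"
  print(AppSettings().max_loaded_models)  # -> 4
  ```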

- Updated realesrgan to use the new config system.

- Updated textual_inversion_training to use the new config system.

- Discovered a race condition when InvokeAIAppConfig is created
  at module load time, which makes it impossible to customize
  or replace the help message produced by --help on the command
  line. To fix this, moved all calls to get_invokeai_config()
  from module load time to object initialization time (see the
  lazy-initialization sketch below). This makes the code cleaner, too.
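
  For illustration, a lazy-initialization sketch of the shape such a
  helper takes; this is an assumption about the pattern, not the actual
  body of `get_invokeai_config()`:

  ```python
  from typing import Optional

  from invokeai.app.services.config import InvokeAIAppConfig

  _config: Optional[InvokeAIAppConfig] = None

  def get_invokeai_config() -> InvokeAIAppConfig:
      """Create the shared config on first use instead of at import time.

      Nothing is constructed until the first call, so callers can still
      customize argparse behavior -- e.g. the --help text -- before the
      config object parses the command line.
      """
      global _config
      if _config is None:
          _config = InvokeAIAppConfig()
      return _config
  ```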

- Added a `--from_file` argument to `invokeai-node-cli` and changed
  the GitHub Action to match. The CI tests will hopefully pass now.
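
  A hedged sketch of what the new flag does, using plain argparse; the
  option wiring here is illustrative, and the real parser lives inside
  `invokeai-node-cli`:

  ```python
  import argparse
  import sys

  parser = argparse.ArgumentParser(prog="invokeai-node-cli")
  parser.add_argument(
      "--from_file",
      type=str,
      default=None,
      help="read commands from this file instead of interactive stdin",
  )
  args = parser.parse_args()

  # Read commands from the file when given, falling back to stdin;
  # this is what lets CI drive the CLI non-interactively.
  source = open(args.from_file) if args.from_file else sys.stdin
  for line in source:
      print(f"would execute: {line.strip()}")  # stand-in for real dispatch
  ```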
commit 7ea995149e
parent f9710dd6ed
Author: Lincoln Stein
Date: 2023-05-18 10:48:23 -04:00

22 changed files with 118 additions and 116 deletions


```diff
@@ -9,10 +9,10 @@ from invokeai.app.services.config import get_invokeai_config
 CPU_DEVICE = torch.device("cpu")
 CUDA_DEVICE = torch.device("cuda")
 MPS_DEVICE = torch.device("mps")
-config = get_invokeai_config()
 
 def choose_torch_device() -> torch.device:
     """Convenience routine for guessing which GPU device to run model on"""
+    config = get_invokeai_config()
     if config.always_use_cpu:
         return CPU_DEVICE
     if torch.cuda.is_available():
@@ -32,6 +32,7 @@ def choose_precision(device: torch.device) -> str:
 def torch_dtype(device: torch.device) -> torch.dtype:
+    config = get_invokeai_config()
     if config.full_precision:
         return torch.float32
     if choose_precision(device) == "float16":
```
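
For context, a short usage sketch of the helpers touched above; the
import path is an assumption based on the hunk's surroundings:

```python
import torch

# Assumed module path for the file shown in this hunk:
from invokeai.backend.util.devices import choose_torch_device, torch_dtype

device = choose_torch_device()  # CPU if always_use_cpu, else CUDA/MPS when available
dtype = torch_dtype(device)     # float32 if full_precision, else per choose_precision()
tensor = torch.zeros(4, device=device, dtype=dtype)
```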