InvokeAI/invokeai/backend
Latest commit: psychedelicious fbe3afa5e1 fix(config): fix nsfw_checker handling (2024-03-19 09:24:28 +11:00)
This setting was hardcoded to True. Rework logic around it to not conditionally check the setting.
embeddings Tidy names and locations of modules 2024-03-01 10:42:33 +11:00
image_util fix(config): fix nsfw_checker handling 2024-03-19 09:24:28 +11:00
install Remove core safetensors->diffusers conversion models 2024-03-17 19:13:18 -04:00
ip_adapter final tidying before marking PR as ready for review 2024-03-01 10:42:33 +11:00
model_hash tidy(mm): remove misplaced comment 2024-03-14 15:54:42 +11:00
model_manager Remove core safetensors->diffusers conversion models 2024-03-17 19:13:18 -04:00
onnx final tidying before marking PR as ready for review 2024-03-01 10:42:33 +11:00
stable_diffusion Update l2i invoke and seamless to support AutoencoderTiny, remove attention processors if no mid_block is detected 2024-03-12 12:00:24 -04:00
tiles feat(nodes): extract LATENT_SCALE_FACTOR to constants.py 2024-03-01 10:42:33 +11:00
training Run ruff 2024-03-08 15:36:14 -05:00
util tidy(mm): remove convenience methods from high level model manager service 2024-03-07 10:56:59 +11:00
__init__.py consolidate model manager parts into a single class 2024-03-01 10:42:33 +11:00
lora.py final tidying before marking PR as ready for review 2024-03-01 10:42:33 +11:00
model_patcher.py chore: ruff 2024-03-01 10:42:33 +11:00
raw_model.py final tidying before marking PR as ready for review 2024-03-01 10:42:33 +11:00
textual_inversion.py final tidying before marking PR as ready for review 2024-03-01 10:42:33 +11:00