# IP-Adapter Model Formats
The official IP-Adapter models are released in the Hugging Face h94/IP-Adapter repo.
This official model repo does not integrate well with InvokeAI's current approach to model management, so we have defined a new file structure for IP-Adapter models. The InvokeAI format is described below.
## CLIP Vision Models
CLIP Vision models are organized in `diffusers` format. The expected directory structure is:

```
ip_adapter_sd_image_encoder/
├── config.json
└── model.safetensors
```
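As a minimal sketch of this layout (assuming `transformers` and `torch` are installed), the snippet below builds a tiny CLIP Vision model from a hand-rolled config and saves it with `save_pretrained`, which produces exactly the `config.json` plus `model.safetensors` pair shown above; the tiny config values are made up for illustration and do not match any real encoder.

```python
from transformers import CLIPVisionConfig, CLIPVisionModelWithProjection

# Deliberately tiny, made-up config so the demo runs quickly;
# a real encoder would use the stock CLIP Vision dimensions.
config = CLIPVisionConfig(
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=2,
    projection_dim=32,
)
model = CLIPVisionModelWithProjection(config)

# Writes config.json and model.safetensors (pytorch_model.bin on
# older transformers versions) into the directory.
model.save_pretrained("ip_adapter_sd_image_encoder")

# Reload from the directory, as a model manager would:
reloaded = CLIPVisionModelWithProjection.from_pretrained("ip_adapter_sd_image_encoder")
```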
## IP-Adapter Models
IP-Adapter models are stored in a directory containing two files:

- `image_encoder.txt`: A text file containing the model identifier of the CLIP Vision encoder that is intended to be used with this IP-Adapter model.
- `ip_adapter.bin`: The IP-Adapter weights.

Sample directory structure:

```
ip_adapter_sd15/
├── image_encoder.txt
└── ip_adapter.bin
```
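A loader for this two-file format can be sketched as follows (assuming `torch` is installed). `load_ip_adapter_dir` is a hypothetical helper name, not InvokeAI's actual loader: it resolves the encoder identifier from `image_encoder.txt`, then reads the checkpoint with `torch.load`.

```python
import os

import torch


def load_ip_adapter_dir(model_dir: str):
    """Read the InvokeAI IP-Adapter directory format.

    Hypothetical helper for illustration only: returns the CLIP Vision
    encoder identifier and the raw (nested) state dict.
    """
    with open(os.path.join(model_dir, "image_encoder.txt")) as f:
        image_encoder_id = f.read().strip()
    state_dict = torch.load(
        os.path.join(model_dir, "ip_adapter.bin"), map_location="cpu"
    )
    return image_encoder_id, state_dict
```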
### Why not save the weights in a `.safetensors` file?
The weights in `ip_adapter.bin` are stored in a nested dict, which is not supported by `safetensors`. This could be solved by splitting `ip_adapter.bin` into multiple files, but for now we have decided to maintain consistency with the checkpoint structure used in the official h94/IP-Adapter repo.
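To make the constraint concrete: `safetensors` only accepts a flat mapping from string keys to tensors, so a nested checkpoint would first have to be flattened. The sketch below shows one way to do that; `flatten_state_dict` and the toy key names are illustrative, not part of the actual checkpoint.

```python
def flatten_state_dict(nested, prefix=""):
    """Flatten a nested state dict into the flat {key: value} mapping
    that safetensors requires (illustrative sketch)."""
    flat = {}
    for key, value in nested.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into sub-dicts, joining key paths with ".".
            flat.update(flatten_state_dict(value, prefix=full_key + "."))
        else:
            flat[full_key] = value
    return flat


# ip_adapter.bin holds two top-level sub-dicts, roughly like this
# (plain lists stand in for tensors in this toy example):
nested = {
    "image_proj": {"proj.weight": [1.0]},
    "ip_adapter": {"1.to_k_ip.weight": [2.0]},
}
flat = flatten_state_dict(nested)
# flat keys: "image_proj.proj.weight", "ip_adapter.1.to_k_ip.weight"
```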
## InvokeAI Hosted IP-Adapters
Image Encoders:
IP-Adapters: