Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)
- Discord member @marcus.llewellyn reported that some civitai 2.1-derived checkpoints (probably dreambooth-generated) were not converting properly: https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070
- @blessedcoolant tracked this down to a missing key that was used to derive the vector length of the CLIP model, by fetching the second dimension of the tensor at "cond_stage_model.model.text_projection". His proposed solution was to hardcode a value of 1024.
- On inspection, I found that the same dimension can be recovered from the key 'cond_stage_model.model.ln_final.bias', and used that instead. I hope this is correct; tested on multiple v1, v2 and inpainting models, and they all converted correctly.
- While debugging this, I found and fixed several other issues:
  - the model download script was not pre-downloading the OpenCLIP text_encoder or text_tokenizer; this is fixed
  - got rid of legacy code in `ckpt_to_diffuser.py` and replaced it with calls into `model_manager`
  - more consistent status reporting in the CLI
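The fallback described above can be sketched as follows. This is a hedged illustration, not the actual InvokeAI code: the function name `clip_embed_dim` is hypothetical, the key names come from the commit message, and any tensor-like object with a `.shape` attribute (e.g. a `torch.Tensor` from a loaded checkpoint `state_dict`) will work.

```python
def clip_embed_dim(state_dict):
    """Return the CLIP embedding width (e.g. 1024 for SD 2.x models)."""
    # Preferred key in stock SD 2.x checkpoints: text_projection is a
    # 2-D tensor whose second dimension is the embedding width.
    proj = state_dict.get("cond_stage_model.model.text_projection")
    if proj is not None:
        return proj.shape[1]
    # Fallback for checkpoints missing that key (e.g. some
    # dreambooth-generated ones): ln_final.bias is a 1-D tensor
    # with the same width.
    return state_dict["cond_stage_model.model.ln_final.bias"].shape[0]
```

Hardcoding 1024 would also work for SD 2.x, but reading the width from `ln_final.bias` keeps the conversion correct for v1 models (width 768) as well.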
Files in this directory:

- __init__.py
- invokeai_configure.py
- invokeai_update.py
- model_install_backend.py
- model_install.py
- widgets.py