Commit d0e6a57e48 (invoke-ai/InvokeAI)
Fixed a couple of bugs:

1. The original config file for the ckpt file is now derived from the entry in `models.yaml` rather than relying on the user to select it. The implication is that V2 ckpt models need to be assigned `v2-inference-v.yaml` when they are first imported; otherwise they won't convert correctly. Note that V2 ckpts are currently imported with `v1-inference.yaml`, which isn't right either (see the `models.yaml` sketch below).
2. Fixed a backslash in the output diffusers path, which was causing load failures on Linux (see the path-handling sketch below).

Remaining issues:

1. The radio buttons for selecting the model type are nonfunctional. It feels to me like these should be moved into the dialog for importing ckpt/safetensors files, because this is where the algorithm needs help from the user.
2. The output diffusers model is written into the same directory as the input ckpt file. The CLI does it differently and stores the diffusers model in `ROOTDIR/models/converted-ckpts`. We should settle on one way or the other.
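For context, a minimal sketch of what a legacy `models.yaml` stanza for a V2 checkpoint would need to look like under this scheme. The model name and file paths here are hypothetical; the point is that the `config` field must name `v2-inference-v.yaml` at import time for the conversion to work:

```yaml
# Hypothetical models.yaml entry for a V2 checkpoint.
# The `config` field is what the converter now reads, so a V2 model
# must be assigned v2-inference-v.yaml when it is first imported.
stable-diffusion-2.1:
  description: Stable Diffusion 2.1 (768px, v-prediction)
  weights: models/ldm/stable-diffusion-v2/v2-1_768-ema-pruned.ckpt
  config: configs/stable-diffusion/v2-inference-v.yaml
  width: 768
  height: 768
```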
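The backslash fix is the usual portability correction: build the output path with `pathlib` (or `os.path.join`) instead of hardcoding a separator. A minimal sketch, with a hypothetical helper name rather than the actual InvokeAI code:

```python
from pathlib import Path

def diffusers_output_path(ckpt_path: str) -> Path:
    """Return the output directory for a converted diffusers model.

    Hypothetical helper: joining with pathlib yields the correct
    separator on every OS, whereas a hardcoded backslash ends up as a
    literal '\\' character in the path on Linux, which breaks loading.
    """
    ckpt = Path(ckpt_path)
    # Current behavior: place the converted model next to the input
    # checkpoint. The CLI instead writes to ROOTDIR/models/converted-ckpts.
    return ckpt.parent / f"{ckpt.stem}_diffusers"
```

On Linux, `diffusers_output_path("/models/foo.ckpt")` gives `/models/foo_diffusers`; on Windows the separator is handled automatically. Either output convention (next to the ckpt, or under `ROOTDIR/models/converted-ckpts`) works with this approach, so the remaining question is purely which location to standardize on.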