Graph, metadata and workflow all take stringified JSON only. This makes the API consistent and means we don't need to do a round-trip of pydantic parsing when handling this data.
It also prevents a failure mode where an uploaded image's metadata, workflow or graph are old and don't match the current schema.
As before, the frontend does strict validation and parsing when loading these values.
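As a rough sketch (the class and field names here are illustrative, not the actual InvokeAI schema), the three values can be modeled as optional strings, with a before-validator that stringifies any dict passed in:

```python
import json
from typing import Any, Optional

from pydantic import BaseModel, field_validator


class ImageRecordPayload(BaseModel):
    """Hypothetical payload: metadata, workflow and graph are stringified JSON."""

    metadata: Optional[str] = None
    workflow: Optional[str] = None
    graph: Optional[str] = None

    @field_validator("metadata", "workflow", "graph", mode="before")
    @classmethod
    def _stringify(cls, v: Any) -> Optional[str]:
        # Accept a dict for convenience, but always store stringified JSON.
        if v is None or isinstance(v, str):
            return v
        return json.dumps(v)
```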
The previous super-minimal implementation had a major issue: the saved workflow didn't take batched field values into account. When generating with multiple iterations or dynamic prompts, the same workflow, containing the first prompt, seed, etc., was stored in each image.
As a result, when the batch results in multiple queue items, only one of the images has the correct workflow - the others are mismatched.
To work around this, we can store the _graph_ in the image metadata (alongside the workflow, if generated via workflow editor). When loading a workflow from an image, we can choose to load the workflow or the graph, preferring the workflow.
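A minimal sketch of that preference, assuming both values have already been read from the image as stringified JSON (names here are illustrative, not the actual loader code):

```python
def choose_workflow_source(workflow_json: str | None, graph_json: str | None) -> str | None:
    """Prefer the saved workflow; fall back to the graph if there is none.

    Both values are stringified JSON pulled from the image's metadata.
    """
    return workflow_json if workflow_json is not None else graph_json
```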
Internally, we need to update the images router and image-saving services. The changes are minimal.
To avoid pydantic errors deserializing the graph, when we extract it from the image, we will leave it as stringified JSON and let the frontend's more sophisticated and flexible parsing handle it. The workflow is also returned as stringified JSON, so the API is consistent.
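For illustration, the extraction might look roughly like this, assuming the values live in the PNG's text chunks (the key names below are assumptions for this sketch):

```python
from PIL import Image


def read_embedded_json(image_path: str) -> tuple[str | None, str | None]:
    """Return the embedded workflow and graph as raw stringified JSON.

    No pydantic parsing happens here; stale or schema-mismatched data passes
    through untouched and is validated by the frontend.
    """
    with Image.open(image_path) as im:
        info = im.info  # PNG tEXt/iTXt chunks appear here as plain strings
    return info.get("invokeai_workflow"), info.get("invokeai_graph")
```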
Compare the installed paths to determine if the model is already installed. Fixes an issue where installed models showed up as uninstalled or vice-versa. Related to relative vs absolute path handling.
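A sketch of the comparison, assuming relative paths are anchored at the managed models directory (the helper is hypothetical, not the actual install-service code):

```python
from pathlib import Path


def is_same_model_path(installed: str, candidate: str, models_dir: Path) -> bool:
    """Compare model paths on a normalized, absolute basis."""

    def normalize(raw: str) -> Path:
        path = Path(raw)
        if not path.is_absolute():
            path = models_dir / path
        return path.resolve()

    return normalize(installed) == normalize(candidate)
```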
* pass model config to _load_model
* make conversion work again
* do not write diffusers to disk when convert_cache set to 0
* adding same model to cache twice is a no-op, not an assertion error (see the sketch after this list)
* fix issues identified by psychedelicious during pr review
* following conversion, avoid redundant read of cached submodels
* fix error introduced while merging
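As a sketch of the no-op behavior mentioned above (this is not the real ModelCache API, just an illustration):

```python
class ModelCacheSketch:
    """Illustrates the idempotent add; not the actual ModelCache interface."""

    def __init__(self) -> None:
        self._models: dict[str, object] = {}

    def put(self, key: str, model: object) -> None:
        # Adding the same model twice is a no-op rather than an assertion error.
        if key in self._models:
            return
        self._models[key] = model
```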
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
These two changes are interrelated.
## Autoimport
The autoimport feature can be easily replicated using the scan folder tab in the model manager. Removing the implicit autoimport reduces surface area and unifies all model installation into the UI.
Both the feature and its `autoimport_dir` config setting are removed.
## Startup model dir scanning
We scanned the Invoke-managed models dir on startup and took two actions:
- Register orphaned model files
- Remove model records from the db when the model path doesn't exist
### Orphaned model files
We should never have orphaned model files during normal use - we manage the models directory, and we only delete files when the user requests it.
During testing or development, when a fresh DB or memory DB is used, we could end up with orphaned models that should be registered.
Instead of always scanning for orphaned models and registering them, we now only do the scan if the new `scan_models_on_startup` config flag is set.
The description for this setting indicates it is intended for testing only.
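A sketch of how the flag might gate the scan, with placeholder interfaces for the config, scanner, and record store (none of these are the real APIs):

```python
def register_orphaned_models(config, scanner, record_store) -> None:
    """Only scan for orphaned model files when explicitly enabled.

    `config.scan_models_on_startup` mirrors the new config flag; the scanner
    and record-store objects are illustrative placeholders.
    """
    if not config.scan_models_on_startup:
        return
    for model_path in scanner.find_unregistered_models():
        record_store.register(model_path)
```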
### Remove records for missing model files
This functionality could unexpectedly wipe models from the db.
For example, if your models dir was on external media, and that media was inaccessible during startup, the scan would see all your models as missing and delete them from the db.
The "proactive" scan is removed. Instead, we will scan for missing models and log a warning if we find a model whose path doesn't exist. No possibility for data loss.
I had added this because I mistakenly believed the HF token was required to download HF models.
Turns out this is not the case, and the vast majority of HF models do not need the API token to download.
- Enriched dependencies so they are not just strings - this allows a dependency to be reused as a starter model _and_ as a dependency of another model. For example, all the SDXL models have the fp16 VAE as a dependency, but you can also download it on its own (sketched after this list).
- Looked at popular models on the major model sites to select the list. No SD2 models. All hosted on HF.
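A sketch of what the enriched structure could look like (field names and example entries are assumptions, not the actual data):

```python
from pydantic import BaseModel, Field


class StarterModel(BaseModel):
    """Sketch of an enriched starter-model entry; field names are assumptions."""

    name: str
    source: str  # e.g. an HF repo id
    description: str = ""
    # Dependencies are full objects, so the fp16 VAE can be both a standalone
    # starter model and a dependency of every SDXL entry.
    dependencies: list["StarterModel"] = Field(default_factory=list)


SDXL_FP16_VAE = StarterModel(
    name="sdxl-vae-fp16-fix",
    source="madebyollin/sdxl-vae-fp16-fix",
)

SDXL_BASE = StarterModel(
    name="SDXL 1.0 (base)",
    source="stabilityai/stable-diffusion-xl-base-1.0",
    dependencies=[SDXL_FP16_VAE],
)
```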
* Fix minor bugs involving model manager handling of model paths
- Leave models found in the `autoimport` directory there. Do not move them
into the `models` hierarchy.
- If a model's name, type or base is updated and the model is in the `models` directory, update its path as appropriate.
- On startup during model scanning, if a model's path is a symbolic link, resolve it to an absolute path before deciding it is a new model that must be hashed and registered. (This prevents needless hashing at startup time; see the sketch after this list.)
* fix issue with dropped suffix
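A sketch of the symlink-aware check mentioned in the third item above (names are hypothetical):

```python
from pathlib import Path


def is_known_model(scanned_path: Path, registered_paths: set[Path]) -> bool:
    """Resolve symlinks before deciding whether a scanned file is new.

    Comparing resolved absolute paths lets the startup scan match an
    already-registered model instead of re-hashing it. Illustrative only.
    """
    return scanned_path.resolve() in {p.resolve() for p in registered_paths}
```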
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
The models from INITIAL_MODELS.yaml have been recreated as a structured Python object. This data is served on a new route. The model sources are compared against currently-installed models to determine whether each is already installed.
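A sketch of how the new route might report install status (the router prefix, path, and helpers are assumptions, not the actual InvokeAI API):

```python
from fastapi import APIRouter
from pydantic import BaseModel


class StarterModelStatus(BaseModel):
    name: str
    source: str
    is_installed: bool


router = APIRouter(prefix="/api/v2/models")

# Placeholder data and lookup; the real service would pull these from the
# structured starter-model list and the model record store.
STARTER_MODELS = [
    {"name": "SDXL 1.0 (base)", "source": "stabilityai/stable-diffusion-xl-base-1.0"},
]


def _installed_sources() -> set[str]:
    return set()


@router.get("/starter_models", response_model=list[StarterModelStatus])
async def get_starter_models() -> list[StarterModelStatus]:
    installed = _installed_sources()
    return [
        StarterModelStatus(
            name=m["name"],
            source=m["source"],
            is_installed=m["source"] in installed,
        )
        for m in STARTER_MODELS
    ]
```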
Rename MM routes to be consistent:
- "import" -> "install"
- "model_record" -> "model"
Comment out several unused routes while I work (they may end up being removed):
- list model summary (we use the search route instead)
- add model record
- convert model
- merge models
- Metadata is merged with the config. We can simplify the MM substantially and remove the handling for metadata.
- Per discussion, we don't have an ETA for frontend implementation of tags. With the realization that the tags from CivitAI are largely useless, there's no reason to keep tags in the MM right now. When we are ready to implement tags on the frontend, we can refer back to this implementation and use it if it supports the design.
- Fix all tests.
* UI in MM to create trigger phrases
* add scheduler and vaePrecision to config
* UI for configuring default settings for models
* hook MM default model settings up to API
* add button to set default settings in parameters
* pull out trigger phrases
* back-end for default settings (sketched after this list)
* lint
* remove log
* ruff
* ruff format
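A sketch of what the per-model default settings could look like on the back end (field names are assumptions, not the actual schema):

```python
from typing import Optional

from pydantic import BaseModel


class DefaultSettingsSketch(BaseModel):
    """Per-model default settings; field names here are assumptions."""

    scheduler: Optional[str] = None
    vae: Optional[str] = None
    vae_precision: Optional[str] = None
    steps: Optional[int] = None
    cfg_scale: Optional[float] = None
```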
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>