tidy(mm): remove current_hash

psychedelicious
2024-03-01 15:54:25 +11:00
parent 4347d1c7f7
commit 0cce582f2f
6 changed files with 114 additions and 139 deletions

View File

@ -32,7 +32,6 @@ model. These are the:
Responsible for loading a model from disk
into RAM and VRAM and getting it ready for inference.

## Location of the Code

The four main services can be found in
@ -68,7 +67,6 @@ provides the following fields:
| `base_model` | BaseModelType | The base model that the model is compatible with |
| `path` | str | Location of model on disk |
| `original_hash` | str | Hash of the model when it was first installed |
| `description` | str | Human-readable description of the model (optional) |
| `source` | str | Model's source URL or repo id (optional) |
@ -78,8 +76,7 @@ contents at install time obtained by sampling several parts of the
model's files using the `imohash` library. Over the course of the
model's lifetime it may be transformed in various ways, such as
changing its precision or converting it from a .safetensors to a
diffusers model.
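The sampled-hashing idea can be sketched as follows. This is illustrative only; `imohash`'s actual algorithm samples different offsets and mixes in the file size differently, and `sample_hash` is a hypothetical helper, not InvokeAI code:

```
import hashlib
from pathlib import Path

def sample_hash(path: Path, chunk: int = 16_384) -> str:
    """Hash a file by sampling its head, middle and tail plus its size.

    Sketch of the technique only -- avoids reading multi-gigabyte
    model files end to end, at the cost of not detecting every change.
    """
    size = path.stat().st_size
    h = hashlib.sha256(str(size).encode())
    with open(path, "rb") as f:
        for offset in (0, max(size // 2 - chunk // 2, 0), max(size - chunk, 0)):
            f.seek(offset)
            h.update(f.read(chunk))
    return h.hexdigest()
```

The trade-off is that an in-place edit outside the sampled regions goes unnoticed, which is acceptable for an install-time fingerprint.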
`ModelType`, `ModelFormat` and `BaseModelType` are string enums that
are defined in `invokeai.backend.model_manager.config`. They are also
@ -94,7 +91,6 @@ The `path` field can be absolute or relative. If relative, it is taken
to be relative to the `models_dir` setting in the user's
`invokeai.yaml` file.

### CheckpointConfig

This adds support for checkpoint configurations, and adds the
@ -217,20 +213,20 @@ for use in the InvokeAI web server. Its signature is:
```
def open(
    cls,
    config: InvokeAIAppConfig,
    conn: Optional[sqlite3.Connection] = None,
    lock: Optional[threading.Lock] = None
) -> Union[ModelRecordServiceSQL, ModelRecordServiceFile]:
```
The way it works is as follows:

1. Retrieve the value of the `model_config_db` option from the user's
   `invokeai.yaml` config file.
2. If `model_config_db` is `auto` (the default), then:
   * Use the values of `conn` and `lock` to return a `ModelRecordServiceSQL` object
     opened on the passed connection and lock.
   * Open up a new connection to `databases/invokeai.db` if `conn`
     and/or `lock` are missing (see note below).
3. If `model_config_db` is a Path, then use `from_db_file`
   to return the appropriate type of ModelRecordService.
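The three steps above amount to a small dispatch function. The sketch below is hypothetical (names and return values are illustrative labels, not the real `open()` implementation):

```
import sqlite3
import threading
from pathlib import Path
from typing import Optional, Tuple, Union

def choose_record_service(
    model_config_db: Union[str, Path],
    conn: Optional[sqlite3.Connection] = None,
    lock: Optional[threading.Lock] = None,
) -> Tuple[str, str]:
    """Return (service class, connection strategy) per the steps above."""
    if model_config_db == "auto":
        if conn is None or lock is None:
            # step 2b: a new connection to databases/invokeai.db would be opened
            return ("ModelRecordServiceSQL", "new connection")
        # step 2a: reuse the caller's connection and lock
        return ("ModelRecordServiceSQL", "caller's connection")
    # step 3: dispatch on the given file via from_db_file()
    return ("from_db_file", str(model_config_db))
```

Keeping the decision side-effect free until a backend is actually chosen makes the `auto` default cheap to evaluate.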
@ -255,7 +251,7 @@ store = ModelRecordServiceBase.open(config, db_conn, lock)
Configurations can be retrieved in several ways.

#### get_model(key) -> AnyModelConfig

The basic functionality is to call the record store object's
`get_model()` method with the desired model's unique key. It returns
@ -272,28 +268,28 @@ print(model_conf.path)
If the key is unrecognized, this call raises an
`UnknownModelException`.

#### exists(key) -> bool

Returns True if a model with the given key exists in the database.
#### search_by_path(path) -> AnyModelConfig

Returns the configuration of the model whose path is `path`. The path
is matched using a simple string comparison and won't correctly match
models referred to by different paths (e.g. using symbolic links).
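The caveat can be seen with two spellings of the same path (hypothetical example paths): a plain string comparison fails even though the paths name the same file, while a normalized comparison agrees:

```
import os

# Two spellings of the same file -- a search_by_path-style string
# comparison treats them as different models.
stored = "models/sd-1/main/model.safetensors"
queried = "models/sd-1/./main/model.safetensors"

assert stored != queried                                       # naive comparison fails
assert os.path.normpath(stored) == os.path.normpath(queried)   # normalized forms agree
```

Even `normpath` does not resolve symbolic links; that would require `os.path.realpath`, which hits the filesystem.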
#### search_by_name(name, base, type) -> List[AnyModelConfig]

This method searches for models that match some combination of `name`,
`BaseType` and `ModelType`. Calling without any arguments will return
all the models in the database.
#### all_models() -> List[AnyModelConfig]

Return all the model configs in the database. Exactly equivalent to
calling `search_by_name()` with no arguments.
#### search_by_tag(tags) -> List[AnyModelConfig]

`tags` is a list of strings. This method returns a list of model
configs that contain all of the given tags. Examples:
@ -312,11 +308,11 @@ commercializable_models = [x for x in store.all_models() \
    if x.license.contains('allowCommercialUse=Sell')]
```
#### version() -> str

Returns the version of the database, currently at `3.2`.

#### model_info_by_name(name, base_model, model_type) -> ModelConfigBase

This method exists to ease the transition from the previous version of
the model manager, in which `get_model()` took the three arguments
@ -337,7 +333,7 @@ model and pass its key to `get_model()`.
Several methods allow you to create and update stored model config
records.

#### add_model(key, config) -> AnyModelConfig

Given a key and a configuration, this will add the model's
configuration record to the database. `config` can either be a subclass of
@ -352,7 +348,7 @@ model with the same key is already in the database, or an
`InvalidModelConfigException` if a dict was passed and Pydantic
experienced a parse or validation error.

#### update_model(key, config) -> AnyModelConfig

Given a key and a configuration, this will update the model
configuration record in the database. `config` can be either a
@ -370,31 +366,31 @@ The `ModelInstallService` class implements the
shop for all your model install needs. It provides the following
functionality:

* Registering a model config record for a model already located on the
  local filesystem, without moving it or changing its path.
* Installing a model already located on the local filesystem, by
  moving it into the InvokeAI root directory under the
  `models` folder (or wherever config parameter `models_dir`
  specifies).
* Probing of models to determine their type, base type and other key
  information.
* Interface with the InvokeAI event bus to provide status updates on
  the download, installation and registration process.
* Downloading a model from an arbitrary URL and installing it in
  `models_dir`.
* Special handling for Civitai model URLs which allow the user to
  paste in a model page's URL or download link.
* Special handling for HuggingFace repo_ids to recursively download
  the contents of the repository, paying attention to alternative
  variants such as fp16.
* Saving tags and other metadata about the model into the invokeai database
  when fetching from a repo that provides that type of information
  (currently only Civitai and HuggingFace).
@ -427,8 +423,8 @@ queue.start()
installer = ModelInstallService(app_config=config,
                                record_store=record_store,
                                download_queue=queue
                                )
installer.start()
```
@ -443,7 +439,6 @@ required parameters:
| `metadata_store` | Optional[ModelMetadataStore] | Metadata storage object |
| `session` | Optional[requests.Session] | Swap in a different Session object (usually for debugging) |

Once initialized, the installer will provide the following methods:

#### install_job = installer.heuristic_import(source, [config], [access_token])
@ -457,12 +452,12 @@ The `source` is a string that can be any of these forms
1. A path on the local filesystem (`C:\\users\\fred\\model.safetensors`)
2. A Url pointing to a single downloadable model file (`https://civitai.com/models/58390/detail-tweaker-lora-lora`)
3. A HuggingFace repo_id with any of the following formats:
   * `model/name` -- entire model
   * `model/name:fp32` -- entire model, using the fp32 variant
   * `model/name:fp16:vae` -- vae submodel, using the fp16 variant
   * `model/name::vae` -- vae submodel, using default precision
   * `model/name:fp16:path/to/model.safetensors` -- an individual model file, fp16 variant
   * `model/name::path/to/model.safetensors` -- an individual model file, default variant

Note that by specifying a relative path to the top of the HuggingFace
repo, you can download and install arbitrary model files.
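The repo_id forms above follow a `repo_id[:variant[:subpath]]` convention. A tiny parser sketch (a hypothetical helper, not the actual `heuristic_import()` code) makes the empty-variant `::` case explicit:

```
from typing import Optional, Tuple

def parse_hf_source(source: str) -> Tuple[str, Optional[str], Optional[str]]:
    """Split 'repo_id[:variant[:subpath]]' into its three parts.

    An empty variant field ('model/name::vae') means default precision.
    """
    repo_id, _, rest = source.partition(":")
    variant, _, subpath = rest.partition(":")
    return repo_id, variant or None, subpath or None
```

For example, `parse_hf_source("model/name::vae")` yields the repo, no variant, and the `vae` submodel path.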
@ -491,9 +486,9 @@ following illustrates basic usage:
```
from invokeai.app.services.model_install import (
    LocalModelSource,
    HFModelSource,
    URLModelSource,
)

source1 = LocalModelSource(path='/opt/models/sushi.safetensors') # a local safetensors file
@ -513,12 +508,12 @@ for source in [source1, source2, source3, source4, source5, source6, source7]:
source2job = installer.wait_for_installs(timeout=120)
for source in sources:
    job = source2job[source]
    if job.complete:
        model_config = job.config_out
        model_key = model_config.key
        print(f"{source} installed as {model_key}")
    elif job.errored:
        print(f"{source}: {job.error_type}.\nStack trace:\n{job.error}")
```
@ -566,7 +561,6 @@ details.
This is used for a model that is located on a locally-accessible Posix
filesystem, such as a local disk or networked fileshare.

| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `path` | str \| Path | None | Path to the model file or directory |
@ -625,7 +619,6 @@ HuggingFace has the most complicated `ModelSource` structure:
| `subfolder` | Path | None | Look for the model in a subfolder of the repo. |
| `access_token` | str | None | An access token needed to gain access to a subscriber's-only model. |

The `repo_id` is the repository ID, such as `stabilityai/sdxl-turbo`.
The `variant` is one of the various diffusers formats that HuggingFace
@ -661,7 +654,6 @@ in. To download these files, you must provide an
`HfFolder.get_token()` will be called to fill it in with the cached
one.

#### Monitoring the install job process

When you create an install job with `import_model()`, it launches the
@ -682,7 +674,6 @@ The `ModelInstallJob` class has the following structure:
| `error_type` | `str` | Name of the exception that led to an error status |
| `error` | `str` | Traceback of the error |

If the `event_bus` argument was provided, events will also be
broadcast to the InvokeAI event bus. The events will appear on the bus
as an event of type `EventServiceBase.model_event`, a timestamp and
@ -702,14 +693,13 @@ following keys:
| `total_bytes` | int | Total size of all the files that make up the model |
| `parts` | List[Dict] | Information on the progress of the individual files that make up the model |

`parts` is a list of dictionaries that give information on each of
the component pieces of the download. The dictionary's keys are
`source`, `local_path`, `bytes` and `total_bytes`, and correspond to
the like-named keys in the main event.

Note that downloading events will not be issued for local models, and
that downloading events occur _before_ the running event.
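Given the documented keys, an overall progress fraction can be derived by summing across the parts. The helper name is hypothetical; only the dictionary keys come from the event description above:

```
def overall_progress(parts: list[dict]) -> float:
    """Aggregate per-part download progress into a single 0..1 fraction.

    Assumes each part carries the documented 'bytes' and 'total_bytes' keys.
    """
    done = sum(p["bytes"] for p in parts)
    total = sum(p["total_bytes"] for p in parts)
    return done / total if total else 0.0
```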
##### `model_install_running`
@ -752,7 +742,6 @@ properties: `waiting`, `downloading`, `running`, `complete`, `errored`
and `cancelled`, as well as `in_terminal_state`. The last will return
True if the job is in the complete, errored or cancelled states.
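The terminal-state check reduces to set membership. This is a minimal sketch assuming a status enum shaped like the properties listed above; the real `ModelInstallJob` class has more fields and different internals:

```
from enum import Enum

class InstallStatus(str, Enum):
    WAITING = "waiting"
    DOWNLOADING = "downloading"
    RUNNING = "running"
    COMPLETED = "completed"
    ERRORED = "errored"
    CANCELLED = "cancelled"

# The three final statuses named in the text above.
TERMINAL = {InstallStatus.COMPLETED, InstallStatus.ERRORED, InstallStatus.CANCELLED}

def in_terminal_state(status: InstallStatus) -> bool:
    return status in TERMINAL
```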
#### Model configuration and probing

The install service uses the `invokeai.backend.model_manager.probe`
@ -774,11 +763,11 @@ attributes. Here is an example of setting the
```
install_job = installer.import_model(
    source=HFModelSource(repo_id='stabilityai/stable-diffusion-2-1', variant='fp32'),
    config=dict(
        prediction_type=SchedulerPredictionType('v_prediction'),
        name='stable diffusion 2 base model',
    )
)
```
### Other installer methods
@ -862,7 +851,6 @@ This method is similar to `unregister()`, but also unconditionally
deletes the corresponding model weights file(s), regardless of whether
they are inside or outside the InvokeAI models hierarchy.

#### path = installer.download_and_cache(remote_source, [access_token], [timeout])

This utility routine will download the model file located at source,
@ -974,7 +962,7 @@ is in its lifecycle. Values are defined in the string enum
`DownloadJobStatus`, a symbol available from
`invokeai.app.services.download_manager`. Possible values are:

| **Value** | **String Value** | **Description** |
|--------------|---------------------|-------------------|
| `IDLE` | idle | Job created, but not submitted to the queue |
| `ENQUEUED` | enqueued | Job is patiently waiting on the queue |
@ -1040,11 +1028,11 @@ While a job is being downloaded, the queue will emit events at
periodic intervals. A typical series of events during a successful
download session will look like this:

* enqueued
* running
* running
* running
* completed

There will be a single enqueued event, followed by one or more running
events, and finally one `completed`, `error` or `cancelled`
@ -1053,12 +1041,12 @@ events.
It is possible for a caller to pause download temporarily, in which
case the events may look something like this:

* enqueued
* running
* running
* paused
* running
* completed
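The lifecycle above implies an invariant worth stating explicitly: exactly one leading `enqueued`, an interleaving of `running`/`paused`, and a single terminal event at the end. A hypothetical checker (not part of the download queue API) captures it:

```
def valid_sequence(events: list[str]) -> bool:
    """Check an event stream against the lifecycle described above."""
    if not events or events[0] != "enqueued":
        return False                      # must start with a single enqueued
    if events[-1] not in {"completed", "error", "cancelled"}:
        return False                      # must end in a terminal event
    # everything in between is running/paused interleaving
    return all(e in {"running", "paused"} for e in events[1:-1])
```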
The download queue logs when downloads start and end (unless `quiet`
is set to True at initialization time) but doesn't log any progress
@ -1120,11 +1108,11 @@ A typical initialization sequence will look like:
from invokeai.app.services.download_manager import DownloadQueueService

def log_download_event(job: DownloadJobBase):
    logger.info(f'job={job.id}: status={job.status}')

queue = DownloadQueueService(
    event_handlers=[log_download_event]
)
```
Event handlers can be provided to the queue at initialization time as
@ -1155,9 +1143,9 @@ To use the former method, follow this example:
```
job = DownloadJobRemoteSource(
    source='http://www.civitai.com/models/13456',
    destination='/tmp/models/',
    event_handlers=[my_handler1, my_handler2],  # if desired
)
queue.submit_download_job(job, start=True)
```
@ -1172,11 +1160,11 @@ To have the queue create the job for you, follow this example instead:
```
job = queue.create_download_job(
    source='http://www.civitai.com/models/13456',
    destdir='/tmp/models/',
    filename='my_model.safetensors',
    event_handlers=[my_handler1, my_handler2],  # if desired
    start=True,
)
```

The `filename` argument forces the downloader to use the specified
@ -1187,7 +1175,6 @@ and is equivalent to manually specifying a destination of
Here is the full list of arguments that can be provided to
`create_download_job()`:

| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `source` | Union[str, Path, AnyHttpUrl] | | Download remote or local source |
@ -1275,7 +1262,7 @@ for getting the model to run. For example "author" is metadata, while
"type", "base" and "format" are not. The latter fields are part of the "type", "base" and "format" are not. The latter fields are part of the
model's config, as defined in `invokeai.backend.model_manager.config`. model's config, as defined in `invokeai.backend.model_manager.config`.
### Example Usage: ### Example Usage
``` ```
from invokeai.backend.model_manager.metadata import ( from invokeai.backend.model_manager.metadata import (
@ -1328,7 +1315,6 @@ This is the common base class for metadata:
| `author` | str | Model's author |
| `tags` | Set[str] | Model tags |

Note that the model config record also has a `name` field. It is
intended that the config record version be locally customizable, while
the metadata version is read-only. However, enforcing this is expected
@ -1348,7 +1334,6 @@ This descends from `ModelMetadataBase` and adds the following fields:
| `last_modified`| datetime | Date of last commit of this model to the repo |
| `files` | List[Path] | List of the files in the model repo |

#### `CivitaiMetadata`

This descends from `ModelMetadataBase` and adds the following fields:
@ -1415,7 +1400,6 @@ testing suite to avoid hitting the internet.
The HuggingFace and Civitai fetcher subclasses add additional
repo-specific fetching methods:

#### HuggingFaceMetadataFetch

This overrides its base class `from_json()` method to return a
@ -1434,7 +1418,6 @@ retrieves its metadata. Functionally equivalent to `from_id()`, the
only difference is that it returns a `CivitaiMetadata` object rather
than an `AnyModelRepoMetadata`.

### Metadata Storage

The `ModelMetadataStore` provides a simple facility to store model
@ -1535,16 +1518,16 @@ from invokeai.app.services.model_load import ModelLoadService, ModelLoaderRegist
config = InvokeAIAppConfig.get_config()

ram_cache = ModelCache(
    max_cache_size=config.ram_cache_size, max_vram_cache_size=config.vram_cache_size, logger=logger
)
convert_cache = ModelConvertCache(
    cache_path=config.models_convert_cache_path, max_size=config.convert_cache_size
)
loader = ModelLoadService(
    app_config=config,
    ram_cache=ram_cache,
    convert_cache=convert_cache,
    registry=ModelLoaderRegistry
)
```
@ -1567,7 +1550,6 @@ The returned `LoadedModel` object contains a copy of the configuration
record returned by the model record `get_model()` method, as well as
the in-memory loaded model:

| **Attribute Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `config` | AnyModelConfig | A copy of the model's configuration record for retrieving base type, etc. |
@ -1581,7 +1563,6 @@ return `AnyModel`, a Union `ModelMixin`, `torch.nn.Module`,
models, `EmbeddingModelRaw` is used for LoRA and TextualInversion
models. The others are obvious.

`LoadedModel` acts as a context manager. The context loads the model
into the execution device (e.g. VRAM on CUDA systems), locks the model
in the execution device for the duration of the context, and returns
@ -1590,14 +1571,14 @@ the model. Use it like this:
```
model_info = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
with model_info as vae:
    image = vae.decode(latents)[0]
```
`get_model_by_key()` may raise any of the following exceptions:

* `UnknownModelException` -- key not in database
* `ModelNotFoundException` -- key in database but model not found at path
* `NotImplementedException` -- the loader doesn't know how to load this type of model

### Emitting model loading events
@ -1609,15 +1590,15 @@ following payload:
```
payload=dict(
    queue_id=queue_id,
    queue_item_id=queue_item_id,
    queue_batch_id=queue_batch_id,
    graph_execution_state_id=graph_execution_state_id,
    model_key=model_key,
    submodel_type=submodel,
    hash=model_info.hash,
    location=str(model_info.location),
    precision=str(model_info.precision),
)
```
@ -1724,6 +1705,7 @@ object, or in `context.services.model_manager` from within an
invocation.

In the examples below, we have retrieved the manager using:

```
mm = ApiDependencies.invoker.services.model_manager
```
@@ -71,7 +71,6 @@ example_model_config = {
    "config_path": "string",
    "key": "string",
    "original_hash": "string",
    "current_hash": "string",
    "description": "string",
    "source": "string",
    "converted_at": 0,
@@ -130,15 +130,12 @@ class ModelConfigBase(BaseModel):
    original_hash: Optional[str] = Field(
        description="original fasthash of model contents", default=None
    )  # this is assigned at install time and will not change
    current_hash: Optional[str] = Field(
        description="current fasthash of model contents", default=None
    )  # if model is converted or otherwise modified, this will hold updated hash
    description: Optional[str] = Field(description="human readable description of the model", default=None)
    source: Optional[str] = Field(description="model original source (path, URL or repo_id)", default=None)

    @staticmethod
    def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseModel]) -> None:
        schema["required"].extend(["key", "base", "type", "format", "original_hash", "current_hash", "source"])
        schema["required"].extend(["key", "base", "type", "format", "original_hash", "source"])

    model_config = ConfigDict(
        use_enum_values=False,
@@ -162,7 +162,6 @@ class ModelProbe(object):
        )
        fields["format"] = fields.get("format") or probe.get_format()
        fields["original_hash"] = fields.get("original_hash") or hash
        fields["current_hash"] = fields.get("current_hash") or hash
        if format_type == ModelFormat.Diffusers and hasattr(probe, "get_repo_variant"):
            fields["repo_variant"] = fields.get("repo_variant") or probe.get_repo_variant()
@@ -59,12 +59,11 @@ def test_registration_meta_override_fail(mm2_installer: ModelInstallServiceBase,
def test_registration_meta_override_succeed(mm2_installer: ModelInstallServiceBase, embedding_file: Path) -> None:
    store = mm2_installer.record_store
    key = mm2_installer.register_path(
        embedding_file, {"name": "banana_sushi", "source": "fake/repo_id", "current_hash": "New Hash", "key": "xyzzy"}
        embedding_file, {"name": "banana_sushi", "source": "fake/repo_id", "key": "xyzzy"}
    )
    model_record = store.get_model(key)
    assert model_record.name == "banana_sushi"
    assert model_record.source == "fake/repo_id"
    assert model_record.current_hash == "New Hash"
    assert model_record.key == "xyzzy"
@@ -77,7 +77,6 @@ def test_add(store: ModelRecordServiceBase):
    assert config1.base == BaseModelType.StableDiffusion1
    assert config1.name == "model1"
    assert config1.original_hash == "111222333444"
    assert config1.current_hash is None


def test_dup(store: ModelRecordServiceBase):