Commit Graph

5961 Commits

Author SHA1 Message Date
psychedelicious
13f410478a fix(mm): typing issues in vae loader 2024-04-01 12:34:49 +11:00
psychedelicious
25ff0bf80f fix(mm): return converted vae model instead of path
This was missed in #6072.
2024-04-01 12:34:49 +11:00
psychedelicious
f83edcf990 feat(nodes): simplify processor loop with an early continue
Prefer an early return/continue to reduce the indentation of the processor loop. Easier to read.

There are other ways to improve its structure, but at first glance they seem to involve changing the logic in scarier ways.
2024-04-01 08:39:25 +11:00
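As a hedged illustration of the early-continue pattern described in the commit above (all names below are hypothetical stand-ins, not the actual processor code):

```python
import time
from dataclasses import dataclass
from queue import Empty, Queue
from threading import Event


@dataclass
class QueueItem:
    id: int
    is_canceled: bool = False


def run_session(item: QueueItem) -> None:
    # Stand-in for executing the session graph for this queue item.
    print(f"processing item {item.id}")


def process_loop(queue: Queue, stop_event: Event) -> None:
    # Early continues keep the happy path at one indentation level instead of
    # nesting it inside a chain of if/else blocks.
    while not stop_event.is_set():
        try:
            item = queue.get(timeout=1)
        except Empty:
            continue  # nothing queued yet; loop again
        if item.is_canceled:
            continue  # skip canceled items without indenting the work below
        run_session(item)
```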
psychedelicious
a6dd50aeaf fix(nodes): 100% cpu usage when processor paused
Should be waiting on the resume event instead of checking it in a loop
2024-04-01 08:39:25 +11:00
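For the fix above, the difference is between busy-polling a `threading.Event` (which pegs a CPU core while paused) and blocking on `Event.wait()`. A minimal sketch with illustrative names, not the actual processor internals:

```python
from threading import Event

resume_event = Event()
stop_event = Event()


def wait_until_resumed(poll_seconds: float = 1.0) -> bool:
    # Bad (illustrative): `while not resume_event.is_set(): pass` spins at 100% CPU.
    # Better: Event.wait() blocks until the event is set; the short timeout lets
    # the loop still notice a stop request while paused.
    while not stop_event.is_set():
        if resume_event.wait(timeout=poll_seconds):
            return True  # resumed; continue processing
    return False  # stopping while paused
```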
Lincoln Stein
1badf0f32f refactor if/else logic slightly 2024-03-31 12:42:39 -04:00
Lincoln Stein
3c9c58e0fa fix 100% CPU load in session_processor_default._process() 2024-03-31 12:42:39 -04:00
psychedelicious
9a1b35fa37 fix(queue): pause & resume
This must not have been tested after the processors were unified. Needed to shift the logic around so the resume event is handled correctly. Clear and easy fix.
2024-03-30 08:25:33 -04:00
Lincoln Stein
5be69f191d remove debug statement 2024-03-29 17:37:04 -04:00
Lincoln Stein
3d6d89feb4 [mm] Do not write diffusers model to disk when convert_cache set to zero (#6072)
* pass model config to _load_model

* make conversion work again

* do not write diffusers to disk when convert_cache set to 0

* adding the same model to the cache twice is a no-op, not an assertion error

* fix issues identified by psychedelicious during pr review

* following conversion, avoid redundant read of cached submodels

* fix error introduced while merging

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-29 16:11:08 -04:00
Lincoln Stein
0ac1c0f339 use is_relative_to() rather than relying on string matching to determine relative directory positioning 2024-03-29 10:56:06 -04:00
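To illustrate why `Path.is_relative_to()` (Python 3.9+) is safer than string matching for this check, consider a sibling directory that merely shares a prefix with the models dir (the paths below are made up):

```python
from pathlib import Path

models_dir = Path("/data/invokeai/models")

# String matching gives a false positive for a sibling dir that shares a prefix:
sibling = Path("/data/invokeai/models-backup/sd-1/model.ckpt")
print(str(sibling).startswith(str(models_dir)))  # True, but wrong

# is_relative_to() compares whole path components, so the sibling is rejected:
print(sibling.is_relative_to(models_dir))  # False
print((models_dir / "sd-1/model.ckpt").is_relative_to(models_dir))  # True
```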
Lincoln Stein
c308654442 migrate legacy conf files that were incorrectly relative to root 2024-03-29 10:56:06 -04:00
psychedelicious
b0ffe36d21 feat(mm): update v3 models.yaml migration logic to handle relative paths for legacy config files 2024-03-29 10:56:06 -04:00
psychedelicious
6b3fdb8a93 fix(mm): handle relative model paths in _register_orphaned_models 2024-03-29 10:56:06 -04:00
psychedelicious
7639e05dd2 feat(mm): add migration for RC users to migrate their dbs 2024-03-29 10:56:06 -04:00
psychedelicious
6d261a5a13 fix(mm): handle relative conversion config paths
I have tested main, controlnet and vae checkpoint conversions.
2024-03-29 10:56:06 -04:00
psychedelicious
c5d1bd1360 feat(mm): use relative paths for invoke-managed models
We switched all model paths to be absolute in #5900. In hindsight, this is a mistake, because it makes the `models_dir` non-portable.

This change reverts to the previous model pathing:
- Invoke-managed models (in the `models_dir`) are stored with relative paths
- Non-invoke-managed models (outside the `models_dir`, i.e. in-place installed models) still have absolute paths.

## Why absolute paths make things non-portable

Let's say my `models_dir` is `/media/rhino/invokeai/models/`. In the DB, all model paths will be absolute children of this path, like this:

- `/media/rhino/invokeai/models/sd-1/main/model1.ckpt`

I want to change my `models_dir` to `/home/bat/invokeai/models/`. I update my `invokeai.yaml` file and physically move the files to that directory.

On startup, the app checks for missing models. Because all of my model paths were absolute, they now point to a nonexistent path. All models are broken.

There are a couple of options to recover from this situation, neither of which is reasonable:

1. The user must manually update every model's path. Unacceptable UX.
2. On startup, we check for missing models. For each missing model, we compare its path with the last-known models dir. If there is a match, we replace that portion of the path with the new models dir. Then we re-check to see if the path exists. If it does, we update the models DB entry. Brittle and requires a new DB entry for last-known models dir.

It's better to use relative paths for Invoke-managed models.
2024-03-29 10:56:06 -04:00
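A rough sketch of the pathing rule described above (the helper names are illustrative, not the actual model manager API): store a relative path for models under `models_dir`, an absolute path otherwise, and resolve relative paths against the current `models_dir` when loading.

```python
from pathlib import Path


def path_for_db(model_path: Path, models_dir: Path) -> str:
    # Invoke-managed models (inside models_dir) are stored relative; in-place
    # installs outside models_dir keep their absolute paths.
    if model_path.is_relative_to(models_dir):
        return model_path.relative_to(models_dir).as_posix()
    return model_path.as_posix()


def path_from_db(stored: str, models_dir: Path) -> Path:
    # Relative paths resolve against the *current* models_dir, so moving the
    # directory only requires updating invokeai.yaml.
    p = Path(stored)
    return p if p.is_absolute() else models_dir / p


models_dir = Path("/media/rhino/invokeai/models")
stored = path_for_db(models_dir / "sd-1/main/model1.ckpt", models_dir)
print(stored)  # sd-1/main/model1.ckpt

new_models_dir = Path("/home/bat/invokeai/models")
print(path_from_db(stored, new_models_dir))  # /home/bat/invokeai/models/sd-1/main/model1.ckpt
```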
Lincoln Stein
3409711ed3 close #6080 2024-03-28 22:51:45 -04:00
brandonrising
3681e34d5a Use defaults for db_dir and outdir since config no longer writes defaults to invokeai.yaml 2024-03-28 22:39:48 -04:00
psychedelicious
2526ef52c5 fix(nodes): workaround seamless multi gpu error #6010
The seamless logic errors when a second GPU is selected. I don't understand why, but a workaround is to skip the model patching when there are no seamless axes specified.

This is also just a good practice regardless - don't patch the model unless we need to. Probably a negligible perf impact.

Closes #6010
2024-03-29 08:56:38 +11:00
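A hedged sketch of the guard described in the commit above (heavily simplified; the function names below are invented, not the backend's actual patcher):

```python
from contextlib import contextmanager


def patch_conv_layers(model, axes) -> None:
    # Stand-in for switching conv padding to circular on the requested axes.
    print(f"patching for seamless axes: {axes}")


def unpatch_conv_layers(model) -> None:
    # Stand-in for restoring the original padding modes.
    print("restoring original padding")


@contextmanager
def maybe_apply_seamless(model, seamless_axes):
    # Don't patch the model unless we actually need to: with no axes selected
    # we yield it unchanged, which also sidesteps the multi-GPU error.
    if not seamless_axes:
        yield model
        return
    try:
        patch_conv_layers(model, seamless_axes)
        yield model
    finally:
        unpatch_conv_layers(model)
```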
brandonrising
43bcedee10 Run ruff 2024-03-29 08:45:34 +11:00
brandonrising
98cc9b963c Only cancel session processor if current generating queue item is cancelled 2024-03-29 08:45:34 +11:00
psychedelicious
a6188a1eda chore: v4.0.0rc6 2024-03-28 13:48:00 +11:00
psychedelicious
c545262e3b revert: unrelated docstring change 2024-03-28 12:35:41 +11:00
psychedelicious
48e1a0c217 fix(ui): remove sync models functionality
The backend functionality was removed in the previous couple of commits. Removing the frontend endpoints and components.
2024-03-28 12:35:41 +11:00
psychedelicious
ce2923533a chore(ui): typegen 2024-03-28 12:35:41 +11:00
psychedelicious
73c326680a feat(mm): remove autoimport; revise startup model scanning
These two changes are interrelated.

## Autoimport

The autoimport feature can be easily replicated using the scan folder tab in the model manager. Removing the implicit autoimport reduces surface area and unifies all model installation into the UI.

This functionality is removed, and the `autoimport_dir` config setting is removed.

## Startup model dir scanning

We scanned the invoke-managed models dir on startup and took certain actions:

- Register orphaned model files
- Remove model records from the db when the model path doesn't exist

### Orphaned model files

We should never have orphaned model files during normal use - we manage the models directory, and we only delete files when the user requests it.

During testing or development, when a fresh DB or memory DB is used, we could end up with orphaned models that should be registered.

Instead of always scanning for orphaned models and registering them, we now only do the scan if the new `scan_models_on_startup` config flag is set.

The description for this setting indicates it is intended for testing only.

### Remove records for missing model files

This functionality could unexpectedly wipe models from the db.

For example, if your models dir was on external media, and that media was inaccessible during startup, the scan would see all your models as missing and delete them from the db.

The "proactive" scan is removed. Instead, we will scan for missing models and log a warning if we find a model whose path doesn't exist. No possibility for data loss.
2024-03-28 12:35:41 +11:00
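A rough sketch of the revised startup behavior described above, assuming a hypothetical record store API and logger (not the actual install service):

```python
from pathlib import Path


def startup_model_check(records, models_dir: Path, scan_models_on_startup: bool, logger) -> None:
    # Missing files are only logged; records are never deleted, so an unmounted
    # external drive can no longer wipe models from the db.
    for record in records.all():  # hypothetical record store API
        path = Path(record.path)
        if not path.is_absolute():
            path = models_dir / path
        if not path.exists():
            logger.warning(f"Missing model file: {path}")

    # Orphan registration only runs when explicitly enabled (intended for testing).
    if scan_models_on_startup:
        known = {str(models_dir / r.path) for r in records.all() if not Path(r.path).is_absolute()}
        for file in models_dir.rglob("*.safetensors"):
            if str(file) not in known:
                records.register(file)  # hypothetical registration call
```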
psychedelicious
2f6cce48af docs(mm): update ModelSearch 2024-03-28 12:35:41 +11:00
psychedelicious
3cf196dbb0 tidy(api): remove commented routes 2024-03-28 12:35:41 +11:00
psychedelicious
c113a822d0 chore(ui): knip 2024-03-28 12:24:32 +11:00
psychedelicious
77038a8ef6 build(ui): harden knip config, support @knipignore jsdoc tag 2024-03-28 12:24:32 +11:00
psychedelicious
6e5b4b595a tidy(ui): remove unused files, exports 2024-03-28 12:24:32 +11:00
psychedelicious
33900d2419 fix(ui): ts issue from react-virtuoso 2024-03-28 12:24:32 +11:00
psychedelicious
9179a2f96b chore(ui): bump deps 2024-03-28 12:24:32 +11:00
psychedelicious
f6fec34311 chore(ui): upgrade storybook to v8 2024-03-28 12:24:32 +11:00
psychedelicious
627a98770f feat(ui): optimize size when using CA image size
When using a Control Adapter image's dimensions as the main output dimensions, fit the dimensions to the model's optimal dimensions.
2024-03-28 08:47:07 +11:00
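One plausible way to fit a control image's dimensions to a model's optimal size, shown here as an assumption rather than the actual UI logic: preserve the aspect ratio, scale to roughly the model's trained pixel area, and snap to a supported multiple.

```python
import math


def fit_to_optimal_size(width: int, height: int, optimal_dim: int = 512, multiple: int = 8) -> tuple[int, int]:
    # Scale so the output area is close to optimal_dim**2 (e.g. 512x512 for SD1.5),
    # keeping the control image's aspect ratio and rounding to a supported multiple.
    scale = math.sqrt((optimal_dim * optimal_dim) / (width * height))
    return (
        round(width * scale / multiple) * multiple,
        round(height * scale / multiple) * multiple,
    )


print(fit_to_optimal_size(3000, 2000))  # (624, 416): same aspect ratio, ~512x512 area
```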
Mark E. Shoulson
0bb0e226dc Flip default ordering of workflow library; #5477 2024-03-28 07:36:03 +11:00
Ryan Dick
86d536755d Check for cuDNN version compatibility issues on startup. Prior to this check, the app would silently run with ~50% performance degradation caused by a cuDNN version mismatch. 2024-03-28 07:32:06 +11:00
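A hedged sketch of surfacing this kind of problem at startup (the real check compares versions for compatibility; the code below only reports what torch sees):

```python
import torch


def report_cudnn(logger) -> None:
    # Make cuDNN status visible at startup so a missing or mismatched cuDNN
    # (a common cause of large, silent slowdowns) doesn't go unnoticed.
    if not torch.cuda.is_available():
        return
    if torch.backends.cudnn.is_available():
        logger.info(f"cuDNN version: {torch.backends.cudnn.version()}")
    else:
        logger.warning("cuDNN is not available; expect significantly degraded performance.")
```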
psychedelicious
3655a47200 feat(ui): remove references to HF token handling 2024-03-27 18:59:55 +05:30
psychedelicious
7167699015 chore(ui): typegen 2024-03-27 18:59:55 +05:30
psychedelicious
b8ac524712 feat(mm): remove hf token handling
I had added this because I mistakenly believed the HF token was required to download HF models.

Turns out this is not the case, and the vast majority of HF models do not need the API token to download.
2024-03-27 18:59:55 +05:30
psychedelicious
a397fdbd25 chore: ruff 2024-03-27 08:16:27 -04:00
psychedelicious
a291a42abc feat: display torch device on startup
This functionality disappeared at some point.
2024-03-27 08:16:27 -04:00
psychedelicious
5c8bbb926a fix(mm): add missing v2-midas-inference.yaml
This is needed for SD2 depth models.

Note: These models don't seem to work.
2024-03-27 07:48:54 -04:00
psychedelicious
21758e7b49 fix(mm): move depth variant config to sd2
Looks like a copy/paste got mixed up.
2024-03-27 07:48:54 -04:00
psychedelicious
eb33303e79 fix(mm): handle depth and inpainting models when converting to diffusers
"Normal" models have 4 in-channels, while "Depth" models have 5 and "Inpaint" models have 9.

We need to explicitly tell diffusers the channel count when converting models.

Closes #6058
2024-03-27 07:48:54 -04:00
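For illustration, the variant-to-channel mapping described above, to be passed to the diffusers conversion (e.g. via a `num_in_channels`-style argument; the exact plumbing depends on the diffusers version):

```python
# UNet in-channel counts by checkpoint variant: normal models use 4, depth models
# add a depth channel (5), inpainting models add mask + masked-image latents (9).
IN_CHANNELS_BY_VARIANT = {
    "normal": 4,
    "depth": 5,
    "inpaint": 9,
}


def in_channels_for(variant: str) -> int:
    try:
        return IN_CHANNELS_BY_VARIANT[variant]
    except KeyError:
        raise ValueError(f"Unknown model variant: {variant!r}")
```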
psychedelicious
536bb4f053 chore(ui): lint 2024-03-27 10:52:32 +11:00
psychedelicious
3beb9b764a feat(ui): <GalleryImage /> styling 2024-03-27 10:52:32 +11:00
Rohinish
171f2aa20d fix(ui): image not getting selected 2024-03-27 10:52:32 +11:00
psychedelicious
243de683d7 fix(mm): handle integer state dict keys in probe
It's possible for a model's state dict to have integer keys, though we do not actually support such models.

As part of probing, we call `key.startswith(...)` on the state dict keys. This raises an `AttributeError` for integer keys.

This logic is in `invokeai/backend/model_manager/probe.py:get_model_type_from_checkpoint`

To fix this, we can cast the keys to strings first. The models w/ integer keys will still fail to be probed, but we'll get an `InvalidModelConfigException` instead of an `AttributeError`.

Closes #6044
2024-03-27 09:30:25 +11:00
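A minimal sketch of the cast described above (simplified relative to the real probe; the prefix check is just one example of the probing heuristics):

```python
def looks_like_sd_checkpoint(state_dict: dict) -> bool:
    # Cast keys to str before prefix checks so integer keys can't raise
    # AttributeError; unsupported checkpoints still fail probing cleanly
    # (and are rejected with InvalidModelConfigException upstream).
    keys = {str(k) for k in state_dict}
    return any(k.startswith("model.diffusion_model.") for k in keys)


print(looks_like_sd_checkpoint({0: "w", "model.diffusion_model.input_blocks.0.0.weight": "w"}))  # True
print(looks_like_sd_checkpoint({0: "w", 1: "b"}))  # False
```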
psychedelicious
16dad07294 feat(mm): improved install error log message in terminal 2024-03-27 08:34:00 +11:00