Commit Graph

2135 Commits

812f10730f adjust free vram calculation for models that will be removed by lazy offloading (#6150)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-04 22:51:12 -04:00
d6ccd5bc81 feat(nodes): disable mosaic fill
Needs a bit of tweaking; leaving the code in, just disabled/commented out.
2024-04-05 08:49:13 +11:00
f0b1bb0327 feat(nodes): redo tile infill
The previous algorithm errored if the image wasn't divisible by the tile size. I've reimplemented it from scratch to mitigate this issue.

The new algorithm is simpler. We create a pool of tiles, then use them to create an image composed completely of tiles. If there is any awkwardly sized space on the edge of the image, the tiles are cropped to fit.

Finally, we paste the original image over the tile image.

I've added a Jupyter notebook to do a smoke test of infilling methods, and 10 test images.

The other infill algorithms can be easily tested with the notebook on the same images, though I didn't set that up yet.

Tested and confirmed this gives results just as good as the earlier infill, though of course they aren't the same due to the change in the algorithm.
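A minimal sketch of the tile-pool approach described above, assuming an RGBA input and Pillow; the function name, tile sampling, and opacity check are illustrative, not the actual node implementation:

```python
import random

from PIL import Image


def tile_infill(image: Image.Image, tile_size: int = 32, seed: int = 0) -> Image.Image:
    """Illustrative only: fill transparent areas by tiling fully opaque patches."""
    rng = random.Random(seed)
    width, height = image.size

    # Build a pool of fully opaque tiles sampled from the source image.
    pool: list[Image.Image] = []
    for y in range(0, height - tile_size + 1, tile_size):
        for x in range(0, width - tile_size + 1, tile_size):
            tile = image.crop((x, y, x + tile_size, y + tile_size))
            alpha_min, _alpha_max = tile.getextrema()[-1]
            if alpha_min == 255:  # tile contains no transparent pixels
                pool.append(tile)
    if not pool:
        return image

    # Compose a canvas entirely out of tiles; awkwardly sized space at the
    # edges is covered by cropping the tile to fit.
    canvas = Image.new("RGBA", (width, height))
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tile = rng.choice(pool)
            tile = tile.crop((0, 0, min(tile_size, width - x), min(tile_size, height - y)))
            canvas.paste(tile, (x, y))

    # Finally, paste the original image over the tile image so only the
    # transparent regions keep the tiled content.
    canvas.paste(image, (0, 0), mask=image.split()[-1])
    return canvas
```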
2024-04-05 08:49:13 +11:00
b061db414f tidy(nodes): abstractmethod is noop 2024-04-05 08:49:13 +11:00
32a6b758cd wip: Initial Infill Methods Refactor 2024-04-05 08:49:13 +11:00
3659219f46 Fix IdealSizeInvocation (#6145) 2024-04-05 08:38:40 +11:00
d284e0567a fix: ip adapter clip selection being broken 2024-04-05 07:49:04 +11:00
8c15d14099 fix: use locale encoding
We have had a few bugs with v4 related to file encodings, especially on Windows.

Windows uses its own character encodings instead of `utf-8`, often `cp1252`. Some characters cannot be decoded using `utf-8`, causing `UnicodeDecodeError`.

There are a couple places where this can cause problems:
- In the installer bootstrap, we install or upgrade `pip` and decode the result, using `subprocess`.

  The input to this includes the user's home dir. In #6105, the user had one of the problematic characters in their username. `subprocess` attempts and fails to decode the username, which crashes the installer.

  To fix this, we need to use `locale.getpreferredencoding()` when executing the command.
- Similarly, in the model install service and config class, we attempt to load a yaml config file. If a problematic character is in the path to the file (which often includes the user's home dir), we can get the same error.

  One example is #6129, in which the `models.yaml` migration fails.

  To fix this, we need to open the file with `locale.getpreferredencoding()`.
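A hedged sketch of both fixes using only stdlib calls; the exact command and file name are illustrative:

```python
import locale
import subprocess
import sys

encoding = locale.getpreferredencoding()

# 1) Decode subprocess output with the locale's encoding instead of assuming
#    utf-8, so usernames with cp1252-only characters don't raise UnicodeDecodeError.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
)
print(result.stdout.decode(encoding, errors="replace"))

# 2) Open yaml config files (whose contents often include paths under the
#    user's home dir) with the locale's encoding explicitly, rather than
#    hard-coding utf-8. "invokeai.yaml" here is just an example path.
try:
    with open("invokeai.yaml", "r", encoding=encoding) as f:
        config_text = f.read()
except FileNotFoundError:
    config_text = ""
```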
2024-04-04 15:30:47 +11:00
9cc1f20ad5 add simplified model manager install API to InvocationContext 2024-04-03 23:26:48 -04:00
9c51abb46e fix(config): get root from venv
This logic was a bit wonky. It only selected the `venv` parent if there was already an `invokeai.yaml` file in it. Removed this constraint.
2024-04-04 10:54:23 +11:00
3a10062b53 feat(mm): more reliable mm scan folder
Compare the installed paths to determine if the model is already installed. Fixes an issue where installed models showed up as uninstalled or vice-versa. Related to relative vs absolute path handling.
2024-04-04 07:58:11 +11:00
7ff2371c07 fix(mm): do not rename model file if model record is renamed
Renaming the model file to the model name introduces unnecessary constraints on model names.

For example, a model name can technically be any length, but a model _filename_ cannot be too long.

There are also constraints on valid characters for filenames which shouldn't be applied to model record names.

I believe the old behaviour is a holdover from the old system.
2024-04-04 07:17:38 +11:00
dc1681a0de fix: clip vision model auto param
Setting to 'auto' works only for InvokeAI configs: it auto-detects based on the SD model, but an explicit user setting overrides it. If 'auto' is used with checkpoint models, we raise an error; checkpoints always need to be set to a non-auto value.
2024-04-03 12:40:11 +05:30
a14ce0edab chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig 2024-04-03 12:40:10 +05:30
91a70c8d07 feat: Let users pick CLIP Vision model for Checkpoint IP Adapters 2024-04-03 12:40:05 +05:30
5829b87b8d ui: update the new ip adapter configs on the frontend 2024-04-03 12:40:01 +05:30
79f7b61dfe fix: cleanup across various ip adapter files 2024-04-03 12:39:52 +05:30
b1c8266e22 feat: add base model recognition for ip adapter safetensor files 2024-04-03 12:39:52 +05:30
67afb1763e wip: Initial implementation of safetensor support for IP Adapter 2024-04-03 12:39:52 +05:30
e655399324 fix(config): handle windows paths in invokeai.yaml migration for legacy_conf_dir
The logic incorrectly set the `legacy_conf_dir` on Windows, where the slashes go the other direction. Handle this case and update tests to catch it.
2024-04-02 08:06:59 -04:00
f75de8a35c feat(db): add migration 9 - empty session queue
Empties the session queue. This is done to prevent any lingering session queue items from causing pydantic errors due to changed schemas.
2024-04-02 13:25:14 +11:00
d4be945dde fix(nodes): gracefully handle custom nodes init error
Previously, exceptions raised while custom nodes were initialized were fatal errors, causing the app to exit.

With this change, any error on import is caught and the error message is printed. The app continues to start up without the node.

For example, a custom node that isn't updated for v4.0.0 may raise an error on import if it is attempting to import things that no longer exist.
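A hedged sketch of the catch-and-continue behavior; the discovery mechanism and names here are illustrative, not the actual loader:

```python
import importlib
import logging

logger = logging.getLogger(__name__)


def load_custom_nodes(module_names: list[str]) -> list:
    """Import each custom node module, skipping any that fail to import."""
    loaded = []
    for name in module_names:
        try:
            loaded.append(importlib.import_module(name))
        except Exception as exc:  # any error during import is non-fatal
            logger.error("Failed to load custom node %r: %s", name, exc)
    return loaded


# A node that imports symbols removed in v4.0.0 simply gets skipped;
# the app continues to start up without it.
nodes = load_custom_nodes(["some_outdated_custom_node"])
```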
2024-04-02 13:25:14 +11:00
9adb15f86c working but filled with debug statements 2024-04-01 18:44:24 -04:00
4049217728 feat(db): back up database before running migrations
Just in case.
2024-04-02 09:10:53 +11:00
3d69372785 implement session-level reservation of gpus 2024-04-01 16:01:43 -04:00
9df0980c46 parallel processing working on single-GPU, not tested on multi 2024-04-01 00:07:47 -04:00
f83edcf990 feat(nodes): simplify processor loop with an early continue
Prefer an early return/continue to reduce the indentation of the processor loop. Easier to read.

There are other ways to improve its structure but at first glance, they seem to involve changing the logic in scarier ways.
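A toy illustration of the shape of the change, not the real processor; the queue and helpers are stand-ins:

```python
import queue
import time

work_queue: "queue.Queue[str]" = queue.Queue()
work_queue.put("session-1")


def process_loop(iterations: int = 50, poll_interval: float = 0.01) -> None:
    for _ in range(iterations):
        try:
            item = work_queue.get_nowait()
        except queue.Empty:
            time.sleep(poll_interval)
            continue  # early continue: nothing queued, go back to polling

        # The actual work stays at a single indentation level instead of
        # being nested under an `if item is not None:` block.
        print(f"processing {item}")


process_loop()
```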
2024-04-01 08:39:25 +11:00
a6dd50aeaf fix(nodes): 100% cpu usage when processor paused
Should be waiting on the resume event instead of checking it in a loop
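A minimal illustration of the difference, using `threading.Event` with stand-in names only:

```python
import threading
import time

resume_event = threading.Event()


def resume_shortly() -> None:
    time.sleep(0.1)
    resume_event.set()  # e.g. the user clicks "resume"


threading.Thread(target=resume_shortly).start()

# Spinning like `while not resume_event.is_set(): pass` pegs a CPU core.
# Blocking on the event instead sleeps the thread until resume is signaled.
resume_event.wait()
print("resumed")
```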
2024-04-01 08:39:25 +11:00
cef51ad80d Merge branch 'psyche/fix/nodes/processor-cpu-usage' into lstein/feat/multi-gpu 2024-03-31 17:05:23 -04:00
83356ec74c fix merge conflicts 2024-03-31 17:04:57 -04:00
32d3e4dc5c feat(nodes): simplify processor loop with an early continue
Prefer an early return/continue to reduce the indentation of the processor loop. Easier to read.

There are other ways to improve its structure but at first glance, they seem to involve changing the logic in scarier ways.
2024-04-01 07:55:42 +11:00
a1dcab9c38 remove references to vram_cache in tests 2024-03-31 16:52:01 -04:00
bd9b00a6bf fix(nodes): 100% cpu usage when processor paused
Should be waiting on the resume event instead of checking it in a loop
2024-04-01 07:45:36 +11:00
eaa2c68693 remove vram_cache and don't move VRAM models back into CPU 2024-03-31 16:37:13 -04:00
1badf0f32f refactor if/else logic slightly 2024-03-31 12:42:39 -04:00
3c9c58e0fa fix 100% CPU load in session_processor_default._process() 2024-03-31 12:42:39 -04:00
9a1b35fa37 fix(queue): pause & resume
This must not have been tested after the processors were unified. Needed to shift the logic around so the resume event is handled correctly. Clear and easy fix.
2024-03-30 08:25:33 -04:00
5be69f191d remove debug statement 2024-03-29 17:37:04 -04:00
3d6d89feb4 [mm] Do not write diffuser model to disk when convert_cache set to zero (#6072)
* pass model config to _load_model

* make conversion work again

* do not write diffusers to disk when convert_cache set to 0

* adding same model to cache twice is a no-op, not an assertion error

* fix issues identified by psychedelicious during pr review

* following conversion, avoid redundant read of cached submodels

* fix error introduced while merging

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-29 16:11:08 -04:00
0ac1c0f339 use is_relative_to() rather than relying on string matching to determine relative directory positioning 2024-03-29 10:56:06 -04:00
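A quick illustration of why `Path.is_relative_to()` (Python 3.9+) beats prefix string matching; the paths are made up:

```python
from pathlib import Path

models_dir = Path("/data/invokeai/models")
inside = Path("/data/invokeai/models/sd-1/main/model1.ckpt")
lookalike = Path("/data/invokeai/models-backup/model2.ckpt")

# Naive string matching gives a false positive for the lookalike directory.
print(str(lookalike).startswith(str(models_dir)))  # True (wrong)

# is_relative_to() compares path components, not characters.
print(inside.is_relative_to(models_dir))     # True
print(lookalike.is_relative_to(models_dir))  # False
```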
c308654442 migrate legacy conf files that were incorrectly relative to root 2024-03-29 10:56:06 -04:00
b0ffe36d21 feat(mm): update v3 models.yaml migration logic to handle relative paths for legacy config files 2024-03-29 10:56:06 -04:00
6b3fdb8a93 fix(mm): handle relative model paths in _register_orphaned_models 2024-03-29 10:56:06 -04:00
7639e05dd2 feat(mm): add migration for RC users to migrate their dbs 2024-03-29 10:56:06 -04:00
6d261a5a13 fix(mm): handle relative conversion config paths
I have tested main, controlnet and vae checkpoint conversions.
2024-03-29 10:56:06 -04:00
c5d1bd1360 feat(mm): use relative paths for invoke-managed models
We switched all model paths to be absolute in #5900. In hindsight, this is a mistake, because it makes the `models_dir` non-portable.

This change reverts to the previous model pathing:
- Invoke-managed models (in the `models_dir`) are stored with relative paths
- Non-invoke-managed models (outside the `models_dir`, i.e. in-place installed models) still have absolute paths.

## Why absolute paths make things non-portable

Let's say my `models_dir` is `/media/rhino/invokeai/models/`. In the DB, all model paths will be absolute children of this path, like this:

- `/media/rhino/invokeai/models/sd-1/main/model1.ckpt`

I want to change my `models_dir` to `/home/bat/invokeai/models/`. I update my `invokeai.yaml` file and physically move the files to that directory.

On startup, the app checks for missing models. Because all of my model paths were absolute, they now point to a nonexistent path. All models are broken.

There are a couple of options to recover from this situation, neither of which is reasonable:

1. The user must manually update every model's path. Unacceptable UX.
2. On startup, we check for missing models. For each missing model, we compare its path with the last-known models dir. If there is a match, we replace that portion of the path with the new models dir. Then we re-check to see if the path exists. If it does, we update the models DB entry. Brittle and requires a new DB entry for last-known models dir.

It's better to use relative paths for Invoke-managed models.
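A hedged sketch of the rule, using the example paths above; the function names are illustrative, not the actual model manager API:

```python
from pathlib import Path


def path_for_db(model_path: Path, models_dir: Path) -> Path:
    """Store invoke-managed models relative to models_dir; others stay absolute."""
    if model_path.is_relative_to(models_dir):
        return model_path.relative_to(models_dir)
    return model_path  # in-place installed model outside models_dir


def path_for_runtime(stored_path: Path, models_dir: Path) -> Path:
    """Resolve a stored path against the *current* models_dir."""
    return stored_path if stored_path.is_absolute() else models_dir / stored_path


old_dir = Path("/media/rhino/invokeai/models")
new_dir = Path("/home/bat/invokeai/models")

stored = path_for_db(old_dir / "sd-1/main/model1.ckpt", old_dir)
print(stored)                             # sd-1/main/model1.ckpt
print(path_for_runtime(stored, new_dir))  # /home/bat/invokeai/models/sd-1/main/model1.ckpt
```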
2024-03-29 10:56:06 -04:00
3409711ed3 close #6080 2024-03-28 22:51:45 -04:00
43bcedee10 Run ruff 2024-03-29 08:45:34 +11:00
98cc9b963c Only cancel session processor if current generating queue item is cancelled 2024-03-29 08:45:34 +11:00
c545262e3b revert: unrelated docstring change 2024-03-28 12:35:41 +11:00