## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
[fix(nodes): fix missing generation modes](8615d53e65)
Lax typing on the metadata util functions allowed a typing issue to slip
through. Fixed the lax typing and updated the core metadata node.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #4959 (thanks @coder543)
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
fix(nodes): explicitly include custom nodes files
setuptools ignores markdown files - explicitly include all files in
`"invokeai.app.invocations"` to ensure every custom node file is
packaged
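For reference, a minimal sketch of the kind of setuptools configuration that forces inclusion, shown `setup()`-style; the project's actual packaging config may differ, and recursive `**` globs require a recent setuptools:
```py
# Sketch only: explicitly ship every file under invokeai.app.invocations,
# not just the .py files setuptools includes by default.
from setuptools import setup, find_packages

setup(
    packages=find_packages(),
    package_data={"invokeai.app.invocations": ["**"]},  # includes .md and other assets
)
```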
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [x] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
- updates the Docker image to an Ubuntu 23.04 base with Python 3.11
- uses the newer PyTorch wheel with CUDA 12.1 support
- corrects the `docker compose` CLI in shell script wrappers and docs
- updates / overhauls the Docker docs
- cleans up obsolete lines in `.gitignore`
## QA Instructions, Screenshots, Recordings
Follow the documentation changes, or simply:
```bash
cd docker
cp .env.sample .env
# Set your INVOKEAI_ROOT in .env
docker compose up
```
## Added/updated tests?
- [ ] Yes
- [x] No : N/A
Custom nodes may be placed in `$INVOKEAI_ROOT/nodes/` (configurable with the `custom_nodes_dir` option).
On app startup, an `__init__.py` is copied into the custom nodes dir; it recursively loads all Python files in the directory as modules (files starting with `_` are ignored). The custom nodes dir is now a Python module itself.
When we `from invocations import *` to load all invocations, the custom nodes dir is loaded too, registering all custom nodes.
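A minimal sketch of what that copied `__init__.py` might do (names and details here are illustrative, not the actual file):
```py
# Sketch: import every non-underscore .py file in this directory as a module,
# so that any decorated invocation classes register themselves on import.
import importlib.util
from pathlib import Path

for path in Path(__file__).parent.rglob("*.py"):
    if path.name.startswith("_"):
        continue  # skip private helpers (and this __init__.py itself)
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # importing runs the registration decorators
```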
Also added config options for metadata and workflow debounce times (`metadataFetchDebounce` & `workflowFetchDebounce`).
Falls back to 0 if not provided.
In OSS, where we have no major latency concerns, the debounce is 0. In other environments, it may be desirable to set this to something like 300ms.
- Refactor how metadata is handled to support a user-defined metadata in graphs
- Update workflow embed handling
- Update UI to work with these changes
- Update tests to support metadata/workflow changes
This fixes a weird issue where the list images method needed to handle `None` for its `limit` and `offset` arguments, in order to get a count of all intermediates.
On our local installs this will be a very minor change. For those running on remote servers, load times should be slightly improved.
It's a small change but I think correct.
This should prevent `index.html` from *ever* being cached, so UIs will never be out of date.
Minor organisation to accommodate this.
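The gist is to serve `index.html` with no-store headers, roughly like this FastAPI-style sketch (illustrative, not the exact implementation):
```py
# Sketch: never cache the UI entry point, so deployed UIs can't go stale.
from fastapi.responses import FileResponse

def serve_index() -> FileResponse:
    return FileResponse(
        "index.html",
        headers={"Cache-Control": "no-store, max-age=0"},
    )
```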
Deleting old unused files from the early days
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
This PR adds the ability to pass multiple images to a single IP-Adapter
(note the difference from using _multiple IP-Adapters at once_, which is
already supported). The image embeddings are combined in the IP-Adapter
attention layers. This is the same strategy for combining multiple
images as used in Insta-LoRA workflows
(https://civitai.com/articles/2345).
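Conceptually, the combination amounts to concatenating the per-image embeddings before attention; a rough sketch (tensor shapes and names are illustrative, not the actual implementation):
```py
import torch

def combine_image_embeds(image_embeds: list[torch.Tensor]) -> torch.Tensor:
    # each tensor: (batch, num_tokens, embed_dim); stack all images' tokens
    # along the token axis so a single IP-Adapter attends over all of them
    return torch.cat(image_embeds, dim=1)
```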
This PR only adds multi-image support in the backend and the node
editor. The Linear UI still needs to be updated.
## QA Instructions, Screenshots, Recordings
I have manually tested the following via the workflow editor:
- Multiple images with a single IP-Adapter
- Multiple images per IP-Adapter, and multiple IP-Adapters
- Both standard and sequential conditioning
- IP-Adapters still work in the Linear UI.
Please hammer at this feature some more with manual testing.
## Added/updated tests?
- [x] Yes
- [ ] No
I updated the existing IP-Adapter smoke test, but it provides pretty
limited coverage of this feature. This feature would probably be best
tested by an end-to-end workflow test, which is not currently supported.
(I'm hoping to put some effort into workflow-level testing soon.)
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
fix(ui): use pidi processor for sketch control adapters
Also, the PREVIOUS commit (@8d3885d, which was already pushed to the GitHub repo) had the wrong commit message, but it's too late to fix that without a force push or other mucking about that I'm reluctant to do. That commit is actually the one with all the changes to diffusers_pipeline.py that use the additional arg down_intrablock_additional_residuals (introduced in diffusers PR https://github.com/huggingface/diffusers/pull/5362) to untangle T2I-Adapter inputs from ControlNet inputs to the main UNet.
Upgrade pydantic and fastapi to latest.
- pydantic~=2.4.2
- fastapi~=0.103.2
- fastapi-events~=0.9.1
**Big Changes**
There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.
**Invocations**
The biggest change relates to invocation creation, instantiation and validation.
Because pydantic v2 moves all validation logic into the rust pydantic-core, we may no longer directly stick our fingers into the validation pie.
Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.
With pydantic v2, this is much more involved. Changes to the python wrapper do not propagate down to the rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.
This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.
In the end, this implementation is cleaner.
**Invocation Fields**
In pydantic v2, you can no longer directly add or remove fields from a model.
Previously, we did this to add the `type` field to invocations.
**Invocation Decorators**
With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.
A similar technique is used for `invocation_output()`.
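A rough sketch of the technique (simplified; the real wrapper does more than this):
```py
from typing import Literal
from pydantic import BaseModel, create_model

def add_type_field(cls: type[BaseModel], invocation_type: str) -> type[BaseModel]:
    # rebuild the class with a literal `type` field baked in
    return create_model(
        cls.__name__,
        __base__=cls,
        type=(Literal[invocation_type], invocation_type),  # (annotation, default)
    )
```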
**Minor Changes**
There are a number of minor changes around the pydantic v2 models API.
**Protected `model_` Namespace**
All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".
Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.
```py
from pydantic import BaseModel, ConfigDict, Field

class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```
**Model Serialization**
Pydantic models no longer have `Model.dict()` or `Model.json()`.
Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.
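For example, with a trivial model:
```py
from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int

p = Point(x=1, y=2)
data = p.model_dump()           # replaces p.dict()
json_str = p.model_dump_json()  # replaces p.json()
```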
**Model Deserialization**
Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.
Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.
```py
from pydantic import TypeAdapter

adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```
**Field Customisation**
Pydantic `Field`s no longer accept arbitrary args.
Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.
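For example (the `ui_hint` key is purely illustrative):
```py
from pydantic import BaseModel, Field

class ExampleModel(BaseModel):
    # arbitrary extras now go through json_schema_extra instead of plain kwargs
    image_name: str = Field(
        description="The image name",
        json_schema_extra={"ui_hint": "image"},
    )
```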
**Schema Customisation**
FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.
This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised
The specifics aren't important, but this does present additional surface area for bugs.
**Performance Improvements**
Pydantic v2 is a full rewrite with a rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very very often - a couple times per node.
I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like those with massive iterators - are much, much faster.
There's a bug in Chrome that screws with headers on fetch requests and 307 responses. This causes images to fail to copy in the commercial environment.
This change attempts to get around this by copying images in a different way (similar to how the canvas works). When the user requests a copy we:
- create an `<img />` element
- set `crossOrigin` if needed
- add an onload handler that:
  - creates a canvas element
  - draws the image onto it
  - exports the canvas to a blob
This is wrapped in a promise which resolves to the blob, which can then be copied to clipboard.
---
A customized version of Konva's `useImage` hook is also included, which returns the image blob in addition to the `<img />` element. Unfortunately, this hook is not suitable for use across the app, because it does all the image fetching up front, regardless of whether we actually want to copy the image.
In other words, we'd have to fetch the whole image file even if the user is just skipping through image metadata, in order to have the blob to copy. The callback approach means we only fetch the image when the user clicks copy. The hook is thus currently unused.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
Fix for breaking change in `python-socketio` 5.10.0 in which
`enter_room` and `leave_room` were made coroutines.
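The fix amounts to awaiting these calls on the async server; a sketch (handler and room names are illustrative):
```py
# python-socketio >= 5.10.0: enter_room/leave_room are coroutines on AsyncServer.
import socketio

sio = socketio.AsyncServer()

@sio.on("subscribe")
async def on_subscribe(sid, data):
    await sio.enter_room(sid, data["room"])  # previously a plain (sync) call
```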
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Closes #4899
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
fix(ui): fix control adapter translation string
Missed this during a previous change.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
Reported by @Harvester62 :
https://discord.com/channels/1020123559063990373/1054129386447716433/1162018775437148160
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This PR strips leading and trailing whitespace from URLs that are
entered into either the Web Model Manager import field, or using the
TUI.
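The core of the change is just normalizing the input before use; something like this sketch (the helper name is illustrative):
```py
def normalize_url(raw: str) -> str:
    # drop leading/trailing whitespace (spaces, tabs, stray newlines from pasting)
    return raw.strip()
```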
## Related Tickets & Documents
Closes #4536
## QA Instructions, Screenshots, Recordings
Try to import a URL with leading or trailing whitespace. Should not work
in current main. This PR should fix it.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Mac users have a recurring issue in which a `.DS_Store` directory is
created in their `models` hierarchy, causing the new model scanner to
freak out. This PR skips over any paths that begin with a dot. I haven't
tested it on a Macintosh, so I'm not 100% certain it will do the trick.
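The skip logic amounts to something like this sketch (the helper name is illustrative):
```py
from pathlib import Path

def is_scannable(path: Path) -> bool:
    # ignore .DS_Store and anything else under a dot-prefixed file or directory
    return not any(part.startswith(".") for part in path.parts)
```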
## Related Tickets & Documents
- Related Issue #4815
## QA Instructions, Screenshots, Recordings
Someone with a Mac please try to reproduce the `.DS_Store` crash and
then see if applying this PR addresses the issue.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
This was in the original fix in #4829 but I must have removed it
accidentally.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #4889
## QA Instructions, Screenshots, Recordings
- Start from a fresh canvas session (may need to let a generation finish
or reset web UI if yours is locked)
- Invoke/add to queue
- Immediately cancel current, clear queue, or clear batch (can do this
from the queue tab)
- Canvas should return to normal state
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
Facetools nodes were cutting off faces that extended beyond chunk boundaries in some cases. All faces found are now considered and coalesced rather than pruned, meaning you should not see half a face any more.
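For illustration, a loose sketch of the coalescing idea (not the node's actual algorithm):
```py
Box = tuple[int, int, int, int]  # (x1, y1, x2, y2)

def coalesce_boxes(boxes: list[Box]) -> list[Box]:
    merged: list[Box] = []
    for x1, y1, x2, y2 in sorted(boxes):
        for i, (mx1, my1, mx2, my2) in enumerate(merged):
            if x1 <= mx2 and mx1 <= x2 and y1 <= my2 and my1 <= y2:  # overlap
                merged[i] = (min(x1, mx1), min(y1, my1), max(x2, mx2), max(y2, my2))
                break
        else:
            merged.append((x1, y1, x2, y2))
    return merged
```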
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
[fix(nodes,ui): optional metadata](78b8cfede3)
- Make all metadata items optional. This will reduce errors related to
metadata not being provided when we update the backend but old queue
items still exist
- Fix a bug in t2i adapter metadata handling where it checked for ip
adapter metadata instead of t2i adapter metadata
- Fix some metadata fields that were not using `InputField`
Currently translated at 91.4% (1112 of 1216 strings)
translationBot(ui): update translation (Italian)
Currently translated at 90.4% (1100 of 1216 strings)
translationBot(ui): update translation (Italian)
Currently translated at 90.4% (1100 of 1216 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
`mallinfo2` is not available on `glibc` < 2.33.
On these systems, we successfully load the library but get an `AttributeError` on attempting to access `mallinfo2`.
I'm not sure if the old `mallinfo` will work, and not sure how to install it safely to test, so for now we just handle the `AttributeError`.
This means the enhanced memory snapshot logic will be skipped for these systems, which isn't a big deal.
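The guard is straightforward; a sketch:
```py
import ctypes

libc = ctypes.CDLL("libc.so.6")
try:
    mallinfo2 = libc.mallinfo2  # raises AttributeError on glibc < 2.33
except AttributeError:
    mallinfo2 = None  # skip the enhanced memory snapshot on older systems
```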
## What type of PR is this? (check all applicable)
- [X] Optimization
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This PR changes the pypi-release workflow so that it will upload to PyPI
whenever a release is initiated from the `main` branch or another branch
beginning with `release/`. Previous support for v2.3 branches has been
removed.
* added HrfScale type with initial value
* working
* working
* working
* working
* working
* added addHrfToGraph
* continuing to implement this
* working on this
* comments
* working
* made hrf into its own collapse
* working on adding strength slider
* working
* working
* refactoring
* working
* change of this working: 0
* removed onnx support since apparently it's not used
* working
* made scale integer
* trying out psycicpebbles idea
* working
* working on this
* working
* added toggle
* comments
* self review
* fixing things
* remove 'any' type
* fixing typing
* changed initial strength value to 3 (large values cause issues)
* set denoising start to be 1 - strength to resemble image to image
* set initial value
* added image to image
* pr1
* pr2
* updating to resolution finding
* working
* working
* working
* working
* working
* working
* working
* working
* working
* use memo
* connect rescale hw to noise
* working
* fixed min bug
* nit
* hides elements conditionally
* style
* feat(ui): add config for HRF, disable if feature disabled or ONNX model in use
* fix(ui): use `useCallback` for HRF toggle
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
* #4665 hides value of the corresponding metadata item by click on arrow
* #4787 return recall button back:)
* #4787 optional hide of metadata item, truncation and scrolling
* remove unused import
* #4787 recall parameters as separate tab in panel
* #4787 remove debug code
* fix(ui): undo changes to dist/locales/en.json
This file is autogenerated by our translation system and shouldn't be modified directly
* feat(ui): use scrollbar-enabled component for parameter recall tab
* fix(ui): revert unnecessary changes to DataViewer component
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
v3.3.0 was accidentally released with more changes than intended. This workflow change will allow us to release to PyPI from a separate branch rather than main.
## What type of PR is this? (check all applicable)
v3.3.0 release
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
The `invokeai-configure` TUI's slider for the RAM cache was not picking
up the current settings in `invokeai.yaml`, leading users to think their
change hadn't taken effect. This is fixed in this PR.
## Related Tickets & Documents
First described here:
https://discord.com/channels/1020123559063990373/1161919551441735711/1162058518417907743
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
A regression in 3.2.0 causes a seemingly nonsensical multiple choice
menu to appear when importing an SD-1 checkpoint model from the
autoimport directory. The menu asks the user to identify which type of
SD-2 model they are trying to import, which makes no sense.
In fact, the menu is popping up because there are now both "epsilon" and
"vprediction" SchedulerPredictionTypes for SD-1 as well as SD-2 models,
and the prober can't determine which prediction type to use. This PR
does two things:
1) rewords the menu as shown below
2) defaults to the most likely choice -- epsilon for v1 models and
vprediction for v2s
Here is the revised multiple-choice menu:
```
Please select the scheduler prediction type of the checkpoint named v1-5-pruned-emaonly.safetensors:
[1] "epsilon" - most v1.5 models and v2 models trained on 512 pixel images
[2] "vprediction" - v2 models trained on 768 pixel images and a few v1.5 models
[3] Accept the best guess; you can fix it in the Web UI later
select [3]>
```
Note that one can also put the appropriate config file into the same
directory as the checkpoint you wish to import. Give it the same name as
the model file, but with the extension `.yaml`. For example
`v1-5-pruned-emaonly.yaml`. The system will notice the yaml file and use
that, suppressing the quiz entirely.
## Related Tickets & Documents
- Closes#4768
- Closes#4827
Refactor services folder/module structure.
**Motivation**
While working on our services I've repeatedly encountered circular imports and a general lack of clarity regarding where to put things. The structure introduced goes a long way towards resolving those issues, setting us up for a clean structure going forward.
**Services**
Services are now in their own folder with a few files:
- `services/{service_name}/__init__.py`: init as needed, mostly empty now
- `services/{service_name}/{service_name}_base.py`: the base class for the service
- `services/{service_name}/{service_name}_{impl_type}.py`: the default concrete implementation of the service - typically one of `sqlite`, `default`, or `memory`
- `services/{service_name}/{service_name}_common.py`: any common items - models, exceptions, utilities, etc
Though it's a bit verbose to have the service name both as the folder name and the prefix for files, I found it is _extremely_ confusing to have all of the base classes just be named `base.py`. So, at the cost of some verbosity when importing things, I've included the service name in the filename.
There are some minor logic changes. For example, in `InvocationProcessor`, instead of assigning the model manager service to a variable to be used later in the file, the service is used directly via the `Invoker`.
**Shared**
Things that are used across disparate services are in `services/shared/`:
- `default_graphs.py`: previously in `services/`
- `graphs.py`: previously in `services/`
- `pagination`: generic pagination models used in a few services
- `sqlite`: the `SqliteDatabase` class, other sqlite-specific things
**Service Dependencies**
Services that depend on other services now access those services via the `Invoker` object. This object is provided to the service as a kwarg to its `start()` method.
Until now, most services did not utilize this feature, and several services required their dependencies to be initialized and passed in on init.
Additionally, _all_ services are now registered as invocation services - including the low-level services. This obviates issues with inter-dependent services we would otherwise experience as we add workflow storage.
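The pattern looks roughly like this (service and attribute names are illustrative):
```py
class ExampleService:
    def start(self, invoker) -> None:
        # dependencies are reached through the invoker, not passed to __init__
        self._invoker = invoker

    def do_work(self) -> None:
        self._invoker.services.logger.info("reached a sibling service via the invoker")
```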
**Database Access**
Previously, we were passing in a separate sqlite connection and corresponding lock as args to services in their init. A good amount of posturing was done in each service that uses the db.
These objects, along with the sqlite startup and cleanup logic, are now abstracted into a simple `SqliteDatabase` class. This creates the shared connection and lock objects, enables foreign keys, and provides a `clean()` method to do startup db maintenance.
This is not a service as it's only used by sqlite services.
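A simplified sketch of such a wrapper (the real class does more):
```py
import sqlite3
import threading

class SqliteDatabase:
    def __init__(self, db_path: str) -> None:
        self.lock = threading.RLock()  # shared by every service that touches the db
        self.conn = sqlite3.connect(db_path, check_same_thread=False)
        self.conn.execute("PRAGMA foreign_keys = ON;")

    def clean(self) -> None:
        with self.lock:
            self.conn.execute("VACUUM;")  # startup maintenance
```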
Currently translated at 98.0% (1186 of 1210 strings)
translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 98.0% (1179 of 1203 strings)
translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 97.9% (1175 of 1199 strings)
Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
Currently translated at 92.0% (1104 of 1199 strings)
translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 92.1% (1105 of 1199 strings)
translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 83.2% (998 of 1199 strings)
translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 83.0% (996 of 1199 strings)
translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 67.5% (810 of 1199 strings)
Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
Currently translated at 87.1% (1054 of 1210 strings)
translationBot(ui): update translation (Italian)
Currently translated at 85.5% (1026 of 1199 strings)
translationBot(ui): update translation (Italian)
Currently translated at 84.7% (1016 of 1199 strings)
translationBot(ui): update translation (Italian)
Currently translated at 84.7% (1016 of 1199 strings)
translationBot(ui): update translation (Italian)
Currently translated at 84.4% (1012 of 1199 strings)
translationBot(ui): update translation (Italian)
Currently translated at 84.3% (1011 of 1199 strings)
translationBot(ui): update translation (Italian)
Currently translated at 83.5% (1002 of 1199 strings)
translationBot(ui): update translation (Italian)
Currently translated at 81.5% (978 of 1199 strings)
translationBot(ui): update translation (Italian)
Currently translated at 80.8% (969 of 1199 strings)
translationBot(ui): update translation (Italian)
Currently translated at 80.7% (968 of 1199 strings)
translationBot(ui): update translation (Italian)
Currently translated at 81.3% (959 of 1179 strings)
translationBot(ui): update translation (Italian)
Currently translated at 81.3% (959 of 1179 strings)
translationBot(ui): update translation (Italian)
Currently translated at 81.3% (959 of 1179 strings)
translationBot(ui): update translation (Italian)
Currently translated at 81.3% (959 of 1179 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (607 of 607 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (605 of 605 strings)
Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
Currently translated at 65.5% (643 of 981 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (605 of 605 strings)
Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
Currently translated at 81.2% (958 of 1179 strings)
translationBot(ui): update translation (Italian)
Currently translated at 81.2% (958 of 1179 strings)
translationBot(ui): update translation (Italian)
Currently translated at 76.6% (904 of 1179 strings)
translationBot(ui): update translation (Italian)
Currently translated at 76.5% (903 of 1179 strings)
translationBot(ui): update translation (Italian)
Currently translated at 71.9% (848 of 1179 strings)
translationBot(ui): update translation (Italian)
Currently translated at 71.7% (845 of 1177 strings)
translationBot(ui): update translation (Italian)
Currently translated at 71.7% (845 of 1177 strings)
translationBot(ui): update translation (Italian)
Currently translated at 67.8% (799 of 1177 strings)
translationBot(ui): update translation (Italian)
Currently translated at 58.5% (689 of 1177 strings)
translationBot(ui): update translation (Italian)
Currently translated at 59.8% (640 of 1069 strings)
translationBot(ui): update translation (Italian)
Currently translated at 57.2% (612 of 1069 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (607 of 607 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (605 of 605 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (605 of 605 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (602 of 602 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Currently translated at 97.8% (589 of 602 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (603 of 603 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (599 of 599 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (596 of 596 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (595 of 595 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (595 of 595 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (593 of 593 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (592 of 592 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Currently translated at 99.6% (601 of 603 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 99.5% (600 of 603 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (599 of 599 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (596 of 596 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 99.8% (594 of 595 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (593 of 593 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (592 of 592 strings)
Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (591 of 591 strings)
translationBot(ui): update translation (Italian)
Currently translated at 99.3% (587 of 591 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (586 of 586 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (578 of 578 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (563 of 563 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (559 of 559 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (559 of 559 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (551 of 551 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Currently translated at 99.5% (602 of 605 strings)
translationBot(ui): update translation (Russian)
Currently translated at 99.8% (605 of 606 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (596 of 596 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (595 of 595 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (593 of 593 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (592 of 592 strings)
translationBot(ui): update translation (Russian)
Currently translated at 90.2% (534 of 592 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (543 of 543 strings)
Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (550 of 550 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (548 of 548 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (546 of 546 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (541 of 541 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (544 of 544 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (543 of 543 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (542 of 542 strings)
translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 88.0% (477 of 542 strings)
Co-authored-by: Song, Pengcheng <17528592@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (542 of 542 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (542 of 542 strings)
translationBot(ui): update translation (Russian)
Currently translated at 98.8% (536 of 542 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (536 of 536 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (533 of 533 strings)
Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (542 of 542 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (542 of 542 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (540 of 540 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (538 of 538 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (536 of 536 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (536 of 536 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (536 of 536 strings)
translationBot(ui): update translation (Italian)
Currently translated at 99.8% (535 of 536 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (533 of 533 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (533 of 533 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (591 of 591 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (586 of 586 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (578 of 578 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (563 of 563 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (550 of 550 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (550 of 550 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (548 of 548 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (546 of 546 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (544 of 544 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (543 of 543 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (542 of 542 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (542 of 542 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (540 of 540 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (536 of 536 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (536 of 536 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (533 of 533 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 99.8% (532 of 533 strings)
Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (526 of 526 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (519 of 519 strings)
Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (526 of 526 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (523 of 523 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (519 of 519 strings)
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (515 of 515 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (526 of 526 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (523 of 523 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (519 of 519 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (515 of 515 strings)
Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
feat(ui): add translation strings for clear intermediates
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #4851
## [optional] Are there any post deployment tasks we need to perform?
@Millu this can go into 3.3.0
* UI for bulk downloading boards or groups of images
* placeholder route for bulk downloads that does nothing
* lint
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
@Millu pointed out this safetensors PR a few weeks ago, which claimed to
offer a performance benefit:
https://github.com/huggingface/safetensors/pull/362 . It was superseded
by https://github.com/huggingface/safetensors/pull/363 and included in
the latest [safetensors 0.4.0
release](https://github.com/huggingface/safetensors/releases/tag/v0.4.0).
Here are the results from my local performance comparison:
```
Before(0.3.1) / After(0.4.0)
sdxl:main:tokenizer from disk to cpu in 0.46s / 0.46s
sdxl:main:text_encoder from disk to cpu in 2.12s / 2.32s
embroidered_style_v1_sdxl.safetensors:sdxl:lora' from disk to cpu in 0.67s / 0.36s
VoxelXL_v1.safetensors:sdxl:lora' from disk to cpu in 1.64s / 0.60s
ryan_db_sdxl_epoch640.safetensors:sdxl:lora' from disk to cpu in 2.46s / 1.40s
sdxl:main:tokenizer_2 from disk to cpu in 0.37s / 0.39s
sdxl:main:text_encoder_2 from disk to cpu in 3.78s / 4.70s
sdxl:main:unet from disk to cpu in 4.66s / 3.08s
sdxl:main:scheduler from disk to cpu in 0.34s / 0.33s
sdxl:main:vae from disk to cpu in 0.66s / 0.51s
TOTAL GRAPH EXECUTION TIME: 56.489s / 53.416s
```
The benefit was marginal on my system (maybe even within measurement
error), but I figured we might as well pull it.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
This PR optimizes the time to load models from disk.
In my local testing, SDXL text_encoder_2 models saw the greatest
improvement:
- Before change, load time (disk to cpu): 14 secs
- After change, load time (disk to cpu): 4 secs
See the in-code documentation for an explanation of how this speedup is
achieved.
## Related Tickets & Documents
This change was previously proposed on the HF transformers repo, but did
not get any traction:
https://github.com/huggingface/transformers/issues/18505#issue-1330728188
## QA Instructions, Screenshots, Recordings
I don't expect any adverse effects, but the new context manager is
applied while loading **all** models, so it would make sense to exercise
everything.
## Added/updated tests?
- [x] Yes
- [ ] No
The canvas needs to be set to staging mode as soon as a canvas-destined batch is enqueued. If the batch is is fully canceled before an image is generated, we need to remove that batch from the canvas `batchIds` watchlist, else canvas gets stuck in staging mode with no way to exit.
The changes here allow the batch status to be tracked, and if a batch has all its items completed, we can remove it from the `batchIds` watchlist. The `batchIds` watchlist now accurately represents *incomplete* canvas batches, fixing this cause of soft lock.
The UI will always re-fetch queue and batch status on receiving this event, so we may as well just include that data in the event and save the extra network roundtrips.
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] No, because: Non-controversial
## Have you updated all relevant documentation?
- [ ] Yes
- [X] N/A
## Description
This adds a list of T2I adapters to the “starter models” offered by the
TUI installer. None of the models is selected by default; this can be
done easily if requested. The models offered to the user are:
```
TencentARC/t2iadapter_canny_sd15v2
TencentARC/t2iadapter_sketch_sd15v2
TencentARC/t2iadapter_depth_sd15v2
TencentARC/t2iadapter_zoedepth_sd15v1
TencentARC/t2i-adapter-canny-sdxl-1.0
TencentARC/t2i-adapter-depth-zoe-sdxl-1.0
TencentARC/t2i-adapter-lineart-sdxl-1.0
TencentARC/t2i-adapter-sketch-sdxl-1.0
```
## Related Tickets & Documents
PR #4612
## QA Instructions, Screenshots, Recordings
The revised installer has a new T2I-ADAPTERS tab that looks like this:

## Added/updated tests?
- [ ] Yes
- [X] No : It would be good to have a suite of model download tests, but
that is not set up yet.
- Update backend metadata for t2i adapter
- Fix typo in `T2IAdapterInvocation`: `ip_adapter_model` -> `t2i_adapter_model`
- Update linear graphs to use t2i adapter
- Add client metadata recall for t2i adapter
- Fix bug with controlnet metadata recall - processor should be set to 'none' when recalling a control adapter
Control adapters logic/state/ui is now generalized to hold controlnet, ip_adapter and t2i_adapter. In the future, other control adapter types can be added.
TODO:
- Limit IP adapter to 1
- Add T2I adapter to linear graphs
- Fix autoprocess
- T2I metadata saving & recall
- Improve on control adapters UI
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
This PR adds support for slow unit tests that depend on models. It
includes:
- Documentation explaining the handling of fast vs. slow unit tests.
- Utilities to assist with writing tests that depend on models.
- A sample test that loads and runs an IP-Adapter model. This is far
from complete test coverage of IP-Adapter - it's just intended as a
first example of how to write tests with models.
**Suggestion for reviewers**: Start with docs/contributing/TESTS.md
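For readers who just want the flavour of a fast/slow split, one common pytest pattern looks like the sketch below; the repo's actual mechanism is whatever docs/contributing/TESTS.md describes and may differ.
```python
import pytest

# Generic pattern: tag model-dependent tests, then deselect them by default.
@pytest.mark.slow
def test_ip_adapter_model(ip_adapter_model):  # `ip_adapter_model` is a hypothetical fixture
    ...

# Fast suite only:     pytest -m "not slow"
# Slow tests as well:  pytest -m slow
```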
## QA Instructions, Screenshots, Recordings
I've tested it all, but it would make sense for others to try running
both the fast tests and the slow tests.
## Added/updated tests?
- [x] Yes
- [ ] No
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
This PR adds detailed debug logging to the model cache in order to give
more visibility into the model cache's memory utilization. **This PR
does not make any functional changes to the model cache.**
Every time a model is moved from disk to CPU, or between CPU/CUDA, a log
like this is emitted:
```bash
[2023-10-03 15:17:20,599]::[InvokeAI]::DEBUG --> Moved model '/home/ryan/invokeai/models/.cache/63742ed45b499e55620c402d6df26a20:sdxl:main:unet' from cpu to cuda in 1.23s.
Estimated model size: 4.782 GB.
Process RAM (-4.722): 6.987GB -> 2.265GB
libc mmap allocated (-4.722): 6.030GB -> 1.308GB
libc arena used (-0.061): 0.402GB -> 0.341GB
libc arena free (+0.061): 0.006GB -> 0.067GB
libc total allocated (-4.722): 6.439GB -> 1.717GB
libc total used (-4.783): 6.433GB -> 1.649GB
VRAM (+4.881): 1.538GB -> 6.418GB
```
## Related Tickets & Documents
https://github.com/invoke-ai/InvokeAI/pull/4694 contains related fixes
to some known memory issues.
## QA Instructions, Screenshots, Recordings
Make sure debug logs are enabled and you should see the new logs.
We should test each of the following environments:
- [x] Linux
- [x] Mac OS + MPS
- [x] Windows
## Added/updated tests?
- [x] Yes
- [ ] No
Added unit tests for the new utilities. Test coverage is still low for
the ModelCache, but not worse than before.
* Bump diffusers to 0.21.2.
* Add T2IAdapterInvocation boilerplate.
* Add T2I-Adapter model to model-management.
* (minor) Tidy prepare_control_image(...).
* Add logic to run the T2I-Adapter models at the start of the DenoiseLatentsInvocation.
* Add logic for applying T2I-Adapter weights and accumulating.
* Add T2IAdapter to MODEL_CLASSES map.
* yarn typegen
* Add model probes for T2I-Adapter models.
* Add all of the frontend boilerplate required to use T2I-Adapter in the nodes editor.
* Add T2IAdapterModel.convert_if_required(...).
* Fix errors in T2I-Adapter input image sizing logic.
* Fix bug with handling of multiple T2I-Adapters.
* black / flake8
* Fix typo
* yarn build
* Add num_channels param to prepare_control_image(...).
* Link to upstream diffusers bugfix PR that currently requires a workaround.
* feat: Add Color Map Preprocessor
Needed for the color T2I Adapter
* feat: Add Color Map Preprocessor to Linear UI
* Revert "feat: Add Color Map Preprocessor"
This reverts commit a1119a00bf.
* Revert "feat: Add Color Map Preprocessor to Linear UI"
This reverts commit bd8a9b82d8.
* Fix T2I-Adapter field rendering in workflow editor.
* yarn build, yarn typegen
---------
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
When the processor has an error and it has a queue item, mark that item failed.
This addresses processor errors resulting in `in_progress` queue items, which create a soft lock of the processor, requiring the user to cancel the `in_progress` item before anything else processes.
Makes graph validation logic more rigorous, validating graphs when they are created as part of a session or batch.
`validate_self()` method added to `Graph` model. It does all the validation that `is_valid()` did, plus a few extras:
- unique `node.id` values across graph
- node ids match their key in `Graph.nodes`
- recursively validate subgraphs
- validate all edges
- validate graph is acyclical
The new method is required because `is_valid()` just returned a boolean. That behaviour is retained, but `validate_self()` now raises appropriate exceptions for validation errors. These are then surfaced to the client.
The function is named `validate_self()` because pydantic reserves `validate()`.
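A loose sketch of the resulting shape (simplified model and a hypothetical error type, not the actual class):
```python
from pydantic import BaseModel

class GraphValidationError(ValueError):  # hypothetical error type for this sketch
    pass

class Graph(BaseModel):
    nodes: dict = {}
    edges: list = []

    def validate_self(self) -> None:
        # Raise a descriptive exception per failure so it can be surfaced to the client.
        for key, node in self.nodes.items():
            if node.id != key:
                raise GraphValidationError(f"Node id '{node.id}' does not match its key '{key}'")
        # ...also: unique ids, per-edge validation, acyclicity, recursive subgraph validation...

    def is_valid(self) -> bool:
        # Boolean behaviour retained, now built on top of validate_self().
        try:
            self.validate_self()
            return True
        except Exception:
            return False
```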
There are two main places where graphs are created - in batches and in sessions.
Field validators are added to each of these for their `graph` fields, which call the new validation logic.
**Closes #4744**
In this issue, a batch is enqueued with an invalid graph. The output field is typed as optional while the input field is required. The field types themselves are not relevant - this change addresses the case where an invalid graph was created.
The mismatched types problem is not noticed until we attempt to invoke the graph, because the graph was never *fully* validated. An error is raised during the call to `graph_execution_state.next()` in `invoker.py`. This function prepares the edges and validates them, raising an exception due to the mismatched types.
This exception is caught by the session processor, but it doesn't handle this situation well - the graph is not marked as having an error and the queue item status is never changed. The queue item is therefore forever `in_progress`, so no new queue items are popped - the app won't do anything until the queue item is canceled manually.
This commit addresses this by preventing invalid graphs from being created in the first place, addressing a substantial number of fail cases.
The `compress_level` setting of `PIL.Image.save()`, used for PNG encoding. All levels are lossless; 0 = fastest with the largest filesize, 9 = slowest with the smallest filesize.
Closes #4786
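For reference, this maps directly onto Pillow's PNG encoder option, roughly:
```python
from PIL import Image

img = Image.open("input.png")
# compress_level trades encode time for file size; every level decodes identically.
img.save("fast.png", compress_level=0)   # fastest, largest file
img.save("small.png", compress_level=9)  # slowest, smallest file
```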
This is fired when the dnd image is moved over the 'none' board. We weren't defaulting to 'none' for the image's board_id, which made it possible to drag a 'none' image onto 'none'.
Selections were not being `uniqBy()`'d, or were `uniqBy()`'d without a proper iteratee. This results in duplicate images in selections in certain situations.
Add correct `uniqBy()` to the reducer to prevent this in the future.
This caused a crapload of network requests any time an image was generated.
The counts are necessary to handle the logic for inserting images into existing image list caches, so we have to keep track of them.
Replace tag invalidation with manual cache updates in all cases, except the initial request (which is necessary to get the initial image counts).
One subtle change is to make the counts an object instead of a number. This is required for `immer` to handle draft states. This should be raised as a bug with RTK Query, as no error is thrown when attempting to update a primitive immer draft.
The helper function `generate_face_box_mask()` had a bug that prevented larger faces from being detected in some situations. This is resolved, and its dependent nodes (all the FaceTools nodes) have a patch version bump.
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
This PR causes the font "Inter-Regular.ttf", which is needed by the
facetools Face Identifier node, to be installed along with other assets
in the virtual environment. It also fixes the font path resolution logic
in the invocation to work with both package and editable installs.
## Related Tickets & Documents
Closes #4771
## What type of PR is this? (check all applicable)
Release v3.2.0
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
Need to update prompting docs
## Description
3.2.0 release version
## [optional] Are there any post deployment tasks we need to perform?
* feat(ui): max upscale pixels config
Add `maxUpscalePixels: number` to the app config. The number should be the *total* number of pixels, e.g. `maxUpscalePixels: 4096 * 4096`.
If not provided, any size image may be upscaled.
If the config is provided, users will be advised if their image is too large for either model, or told to switch to an x2 model if it's only too large for x4.
The message is via tooltip in the popover and via toast if the user uses the hotkey to upscale.
* feat(ui): "mayUpscale" -> "isAllowedToUpscale"
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [x] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
Grid to Gif is two custom nodes, one that divides a grid image into an
image collection, the other converts an image collection into an animated
gif
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
cv2 infill node was missing a version in its decorator, resulting in a
red exclamation mark on the node
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it is tiny
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
Very tall IP adapter images didn't fit the panel. Now they do.
* Initial commit of edge drag feature.
* Fixed build warnings
* code cleanup and drag to existing node
* improved isValidConnection check
* fixed build issues, removed cyclic dependency
* edge created nodes now spawn at cursor
* Add Node popover will no longer show when using drag to delete an edge.
* Fixed collection handling, added priority for handles matching name of source handle, removed current image/notes nodes from filtered list
* Fixed not properly clearing startParams when closing the Add Node popover
* fix(ui): do not allow Collect -> Iterate connection
This can be removed when #3956 is resolved
* feat(ui): use existing node validation logic in add-node-on-drop
This logic handles a number of special cases
---------
Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
* node-FaceTools
* Added more documentation for facetools
* invert FaceMask masking
- FaceMask previously protected the face and changed the surroundings by default (face white, else black)
- Changed to match how FaceOff/others work: the opposite, where the surroundings are protected and the face changes by default (face black, else white)
* reflect changed facemask behaviour in docs
* add FaceOff+FaceMask workflows
- Add FaceOff and FaceMask example workflows to docs/workflows
* add FaceMask+FaceOff workflows to exampleworkflows.md
- used invokeai URL paths mimicking other workflow URLs, hopefully they translate when/if merged
* inheriting, typehints, black/isort/flake8
- modified FaceMask and FaceOff output classes to inherit base image, height, width from ImageOutput
- Added type annotations to helper functions, which required some reworking of the code's stored data
* remove credit header
- Was in my personal/repo copy, don't think it's necessary if merged.
* Optionals & image declaration duplication
- Added Optional[] to optional outputs and types
- removed duplication of image = context.services.images.get_pil_images(self.image.image_name) declaration
- Still need to find a way to deal with mask_pil None typing errors
* face(facetools): fix typing issues, add validation, clean up structure
* feat(facetools): update field descriptions
* Update FaceOff_FaceScale2x.json
- update FaceOff workflow after Bounded Image field removed in place of inheriting Image out field from ImageOutput
* feat(facetools): pass through original image on facemask if invalid face ids requested
* feat(facetools): tidy variable names & fn calls
* feat(facetools): bundle inter font, draw ids with it
Inter is licensed under the SIL Open Font License. The license is included and is fully permissive. Inter is the same font the UI and commercial application already use.
Only the "regular" version is bundled.
* chore(facetools): isort & fix mypy issues
* docs(facetools): update and format docs
---------
Co-authored-by: Millun Atluri <millun.atluri@gmail.com>
Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
* add control net to useRecallParams
* got recall controlnets working
* fix metadata viewer controlnet
* fix type errors
* fix controlnet metadata viewer
* add ip adapter to metadata
* added ip adapter to recall parameters
* got ip adapter recall working, still need to fix type errors
* fix type issues
* clean up logs
* python formatting
* cleanup
* fix(ui): only store `image_name` as ip adapter image
* fix(ui): use nullish coalescing operator for numbers
Need to use the nullish coalescing operator `??` instead of the falsy coalescing operator `||` when the value being checked is a number. This prevents unintended coalescing when the value is zero and therefore falsy.
* feat(ui): fall back on default values for ip adapter metadata
* fix(ui): remove unused schema
* feat(ui): re-use existing schemas in metadata schema
* fix(ui): do not disable invocationCache
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [X] Yes
## Description
Very rarely a model lives in the subfolder of a non-pipeline HuggingFace
repo_id. The example I've been working with is
https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster/tree/main,
where the improved monster QR code controlnet model lives in the `v2`
subdirectory.
In order to accommodate installing such files, I have made two changes
to the model installer.
1. At installation/configuration time, if a stanza in
`INITIAL_MODELS.yaml` contains the field `subfolder`, then the model
will be installed from the indicated subfolder. The syntax in this case
is:
```
sd-1/controlnet/qrcode_monster:
repo_id: monster-labs/control_v1p_sd15_qrcode_monster
subfolder: v2
```
2. From within the Web GUI or the installer TUI, if you wish to indicate
that the model resides in a subfolder, you can tack ":_subfoldername_"
to the end of the repo_id. The resulting repo_id will look like:
```
monster-labs/control_v1p_sd15_qrcode_monster:v2
```
The code for introducing these changes is obscure and somewhat hacky.
However, the whole installer code base has been rewritten for the model
manager refactor (#4252 ) and I will reimplement this feature in a more
elegant way in that PR.
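For illustration, the `repo_id:subfolder` suffix can be split apart with a few lines (hypothetical helper, not the actual installer code):
```python
def split_repo_id(repo_id: str) -> tuple[str, str | None]:
    # "monster-labs/control_v1p_sd15_qrcode_monster:v2" -> (repo_id, "v2")
    base, sep, subfolder = repo_id.partition(":")
    return base, (subfolder if sep else None)
```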
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
This hook was rerendering any time anything changed. Moved it to a logical component, put its useEffects inside the component. This reduces the effect of the rerenders to just that tiny always-null component.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
The IP-Adapter memory footprint was not being calculated correctly.
I think we could put checks in place to catch this type of error in the
future, but for now I'm just fixing the bug.
## QA Instructions, Screenshots, Recordings
I tested manually in a debugger. There are 3 pathways for calculating
the model size. All were tested:
- From file
- From state_dict
- From model weights
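As a reference point, the state_dict and model-weights pathways boil down to summing tensor sizes (the file pathway is just the on-disk size). Something like this, illustrative rather than the exact code:
```python
import torch

def size_of_state_dict(state_dict: dict) -> int:
    # Total bytes across all tensors in a state_dict.
    return sum(t.numel() * t.element_size() for t in state_dict.values())

def size_of_module(model: torch.nn.Module) -> int:
    # Total bytes across parameters and buffers of an instantiated model.
    params = sum(p.numel() * p.element_size() for p in model.parameters())
    buffers = sum(b.numel() * b.element_size() for b in model.buffers())
    return params + buffers
```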
## Added/updated tests?
- [ ] Yes
- [x] No : This would require the ability to run tests that depend on
models. I'm working on this in another branch, but it's not quite ready yet.
* add control net to useRecallParams
* got recall controlnets working
* fix metadata viewer controlnet
* fix type errors
* fix controlnet metadata viewer
* set control image and use correct processor type and node
* clean up logs
* recall processor using substring
* feat(ui): enable controlNet when recalling one
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
- Current image number & total are displayed
- Left/right wrap around instead of stopping on first/last image
- Disable the left/right/number buttons when showing base layer
- improved translations
- Drag the end of an edge away from its handle to disconnect it
- Drop in empty space to delete the edge
- Drop on valid handle to reconnect it
- Update connection logic slightly to allow edge updates
* feat(ui): add error handling for enqueueBatch route, remove sessions
This re-implements the handling for the session create/invoke errors, but for batches.
Also remove all references to the old sessions routes in the UI.
* feat(ui): improve canvas image error UI
* make canvas error state gray instead of red
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] No - this should go into release notes.
## Description
During installation, the installer will now ask the user whether they
wish to perform a manual or automatic configuration of invokeai. If they
choose automatic (the default), then the install is performed without
running the TUI of the `invokeai-configure` script. Otherwise the
console-based interface is activated as usual.
This script also bumps up the default model RAM cache size to 7.5, which
improves performance on SDXL models.
* Add 'Random Float' node <3
does what it says on the tin :) (a rough sketch follows this commit list)
* Add random float + random seeded float nodes
altered my random float node as requested by Millu, kept the seeded version as an alternate variant for those that would like to control the randomization seed :)
* Update math.py
* Update math.py
* feat(nodes): standardize fields to match other nodes
---------
Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
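A rough sketch of what such a node looks like in the v3 nodes API (decorator arguments and field details here are approximate, not the merged code):
```python
import random

from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import FloatOutput

@invocation("rand_float", title="Random Float", tags=["math", "random"], category="math", version="1.0.0")
class RandomFloatInvocation(BaseInvocation):
    """Outputs a single random float."""

    low: float = InputField(default=0.0, description="The inclusive low value")
    high: float = InputField(default=1.0, description="The exclusive high value")

    def invoke(self, context: InvocationContext) -> FloatOutput:
        return FloatOutput(value=random.uniform(self.low, self.high))
```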
* fix(nodes): do not disable invocation cache delete methods
When the runtime disabled flag is on, do not skip the delete methods; skipping them could lead to a cache hit on a missing resource.
Do skip them when the cache size is 0, because the user cannot change this (must restart app to change it).
* fix(nodes): do not use double-underscores in cache service
* Thread lock for cache
* Making cache LRU
* Bug fixes
* bugfix
* Switching to one Lock and OrderedDict cache (see the sketch after this commit list)
* Removing unused imports
* Move lock cache instance
* Addressing PR comments
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Martin Kristiansen <martin@modyfi.io>
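The Lock + OrderedDict approach, in miniature (illustrative sketch, not the actual class):
```python
from collections import OrderedDict
from threading import Lock

class LRUCache:
    def __init__(self, max_size: int) -> None:
        self._max_size = max_size
        self._lock = Lock()
        self._cache: OrderedDict = OrderedDict()

    def get(self, key):
        with self._lock:
            if key not in self._cache:
                return None
            self._cache.move_to_end(key)  # mark as most recently used
            return self._cache[key]

    def set(self, key, value) -> None:
        with self._lock:
            self._cache[key] = value
            self._cache.move_to_end(key)
            while len(self._cache) > self._max_size:
                self._cache.popitem(last=False)  # evict least recently used
```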
* add skeleton loading state for queue list
* hide use cache checkbox if cache is disabled
* undo accidental add
* feat(ui): hide node footer entirely if nothing to show there
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Skeletons are for when we know the number of specific content items that are loading. When the queue is loading, we don't know how many items there are, or how many will load, so the whole list should be replaced with loading state.
The previous behaviour rendered a static number of skeletons. That number would rarely be the right number - the app shouldn't say "I'm loading 7 queue items", then load none, or load 50.
A future enhancement could use the queue item skeleton component and go by the total number of queue items, as reported by the queue status. I tried this but had some layout jankiness, not worth the effort right now.
The queue item skeleton component's styling was updated to support this future enhancement, making it exactly the same size as a queue item (it was a bit smaller before).
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Description
I left a dangling debug statement in a recently merged PR (#4674). This
removes it.
Updates my Image & Mask Composition Pack from 4 to 14 nodes, and moves
the Enhance Image node into it.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [X] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:
This is an update of my existing community nodes entries.
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Adds 9 more nodes to my Image & Mask Composition pack including Clipseg,
Image Layer Blend, Masked Latent/Noise Blend, Image Dilate/Erode,
Shadows/Highlights/Midtones masks from image, and more.
## Related Tickets & Documents
n/a
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [X] No : out of scope; I tested the nodes, and will integrate tests into my
own repo in time as is helpful
Adds 9 more of my nodes to the Image & Mask Composition Pack in the community nodes page, and integrates the Enhance Image node into that pack as well (formerly it was its own entry).
Add some instructions about installing the frontend toolchain when doing
a git-based install.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Description
[Update
020_INSTALL_MANUAL.md](73ca8ccdb3)
Add some instructions about installing the frontend toolchain when doing
a git-based install.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
This is actually a platform-specific issue. `madge` is complaining about
a circular dependency on a single file -
`invokeai/frontend/web/src/features/queue/store/nanoStores.ts`. In that
file, we import from the `nanostores` package. Very similar name to the
file itself.
The error only appears on Windows and macOS, I imagine because those
systems both resolve `nanostores` to itself before resolving to the
package.
The solution is simple - rename `nanoStores.ts`. It's now
`queueNanoStore.ts`.
## Related Tickets & Documents
https://discord.com/channels/1020123559063990373/1155434451979993140
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [X] Yes
## Description
This PR adds support for selecting and installing IP-Adapters at
configure time. The user is offered the four existing InvokeAI IP
Adapters in the UI as shown below. The matching image encoders are
selected and installed behind the scenes. That is, if the user selects
one of the three sd15 adapters, then the SD encoder will be installed.
If they select the sdxl adapter, then the SDXL encoder will be
installed.

Note that the automatic selection of the encoder does not work when the
installer is run in headless mode. I may be able to fix that soon, but
I'm out of time today.
## What type of PR is this? (check all applicable)
- [X] Bug Fix
- [ ] Optimization
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
ip_adapter models live in a folder containing the file
`image_encoder.txt` and a safetensors file. The load-time probe for new
models was detecting the files contained within the folder rather than
the folder itself, and so models.yaml was not getting correctly updated.
This fixes the issue.
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [X] Yes
## Description
It turns out that there are a few SD-1 models that use the
`v_prediction` SchedulerPredictionType. Examples here:
https://huggingface.co/zatochu/EasyFluff/tree/main . Previously we only
allowed the user to set the prediction type for sd-2 models. This PR
does three things:
1. Add a new checkpoint configuration file `v1-inference-v.yaml`. This
will install automatically on new installs, but for existing installs
users will need to update and then run `invokeai-configure` to get it.
2. Change the prompt on the web model install page to indicate that some
SD-1 models use the "v_prediction" method
3. Provide backend support for sd-1 models that use the v_prediction
method.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #4277
## QA Instructions, Screenshots, Recordings
Update, run `invokeai-configure --yes --skip-sd --skip-support`, and
then use the web interface to install
https://huggingface.co/zatochu/EasyFluff/resolve/main/EasyFluffV11.2.safetensors
with the prediction type set to "v_prediction." Check that the installed
model uses configuration `v1-inference-v.yaml`.
If "None" is selected from the install menu, check that SD-1 models
default to `v1-inference.yaml` and SD-2 default to
`v2-inference-v.yaml`.
Also try installing a checkpoint at a local path if a like-named config
.yaml file is located next to it in the same directory. This should
override everything else and use the local path .yaml.
## Added/updated tests?
- [ ] Yes
- [X] No
## What type of PR is this? (check all applicable)
- [X] Refactor
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: trivial fix
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
It annoyed me that the class method to get the invokeai logger was
`InvokeAILogger.getLogger()`. We do not use camelCase anywhere else. So
this PR renames the method to `get_logger()`.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
Pydantic handles the casting so this is always safe.
Also de-duplicated some validation logic that was needlessly
repeated.
- Change translations to use arrays of paragraphs instead of a single paragraph.
- Change component to accept a `feature` prop to identify the feature which the popover describes.
- Add optional `wrapperProps`: passed to the wrapper element, allowing more flexibility when using the popover
- Add optional `popoverProps`: passed to the `<Popover />` component, allowing for overriding individual instances of the popover's props
- Move definitions of features and popover settings to `invokeai/frontend/web/src/common/components/IAIInformationalPopover/constants.ts`
- Add some type safety to the `feature` prop
- Edit `POPOVER_DATA` to provide `image`, `href`, `buttonLabel`, and any popover props. The popover props are applied to all instances of the popover for the given feature. Note that the component prop `popoverProps` will override settings here.
- Remove the popover's arrow. Because the popover is wrapping groups of components, sometimes the arrow ends up pointing to nothing, which looks kinda janky. I've just removed the arrow entirely, but feel free to add it back if you think it looks better.
- Use a `link` variant button with external link icon to better communicate that clicking the button will open a new tab.
- Default the link button label to "Learn More" (if a label is provided, that will be used instead)
- Make the default position `top`, but manually set some to `right` - namely, anything with a dropdown. This prevents the popovers from obscuring or being obscured by the dropdowns.
- Do a bit more restructuring of the Popover component itself, and how it is integrated with other components
- More ref forwarding
- Make the open delay 1s
- Set the popovers to use lazy mounting (e.g. do not mount until the user opens it)
- Update the verbiage for many popover items and add missing dynamic prompts stuff
- No longer need to make network request to add image to board after it's finished - removed
- Update linear graphs & upscale graph to save image to the board
- Update autoSwitch logic so when image is generated we still switch to the right board
- Remove the add-to-board node
- Create `BoardField` field type & add it to `save_image` node (sketched below)
- Add UI for `BoardField`
- Tighten up some loose types
- Make `save_image` node, in workflow editor, default to not intermediate
- Patch bump `save_image`
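Loosely, the new field looks something like this on the node (import paths and details here are approximate):
```python
from typing import Optional

from pydantic import BaseModel

from invokeai.app.invocations.baseinvocation import BaseInvocation, InputField
from invokeai.app.invocations.primitives import ImageField

class BoardField(BaseModel):
    board_id: str  # the board to save the image to

class SaveImageInvocation(BaseInvocation):
    image: ImageField = InputField(description="The image to save")
    board: Optional[BoardField] = InputField(default=None, description="The board to save the image to")
```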
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [ ] Yes
- [X] N/A
## Description
Pydantic was misconfigured and was not picking up the INVOKEAI_ prefix
on environment variables. Therefore, if the system had an unrelated
environment variable such as `version`, this caused pydantic validation
errors.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #4098
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [X] Yes — regression tests run; new regression test added.
* break out separate functions for preselected images, remove recallAllParameters dep as it causes circular logic with model being set
* lint
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
…nd move on
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
- New routes to clear, enable, disable and get the status of the cache
- Status includes hits, misses, size, max size, enabled
- Add client cache queries and mutations, abstracted into hooks
- Add invocation cache status area (next to queue status) w/ buttons
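The rough shape of those routes, FastAPI-style (paths and names here are a sketch and may not match the merged API exactly):
```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter(prefix="/api/v1/app/invocation_cache", tags=["app"])

class InvocationCacheStatus(BaseModel):
    hits: int
    misses: int
    size: int
    max_size: int
    enabled: bool

@router.get("/status")
async def get_invocation_cache_status() -> InvocationCacheStatus: ...

@router.delete("")
async def clear_invocation_cache() -> None: ...

@router.put("/enable")
async def enable_invocation_cache() -> None: ...

@router.put("/disable")
async def disable_invocation_cache() -> None: ...
```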
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
Fixes failure on SDXL metadata node, introduced by me in #4625
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
[feat(ui): enable control adapters on image
drop](aa4b56baf2)
- Dropping/uploading an image on control adapter enables it (controlnet
& ip adapter)
- The image components are always enabled to allow this
Hide it until #4624 is ready
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
feat(ui): hide clipskip on sdxl; do not add to metadata
Hide it until #4624 is ready
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Closes #4618
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
fix(ui): add control adapters to canvas coherence pass
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Closes #4619
- Closes #4589
## QA Instructions, Screenshots, Recordings
I cannot figure out how to get the CLIP Vision model installed, but I can
confirm that the graph is correct, because I get a Model Not Found error
that references this model when invoking with IP adapter enabled.
* Initial commit. Feature works, but code might need some cleanup
* Cleaned up diff
* Made mousePosition an XYPosition again so it's nicely typed
* Fixed yarn issues
* Paste now properly takes node width/height into account when pasting
* feat(ui): use react's types in the `onMouseMove` `reactflow` handler
* feat(ui): use refs to access `reactflow`'s DOM elements
* feat(ui): use a ref to store cursor position in nodes
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Polymorphic fields now render the appropriate input component for their base type.
For example, float polymorphics will render the number input box.
You no longer need to specify ui_type to force it to display.
TODO: The UI *may* break if a list is provided as the default value for a polymorphic field.
* Remove fastapi-socketio dependency, doesn't really do much for us and isn't well maintained
* Run python black
* Remove fastapi_socketio import
* Add __app as class variable in case we ever need it later
* Run isort
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
[TAESD - Tiny Autoencoder for Stable
Diffusion](https://github.com/madebyollin/taesd) is a tiny VAE that
provides significantly better results than my single-multiplication hack
but is still very fast.
The entire TAESD model weights are under 10 MB!
This PR requires diffusers 0.20:
- [x] #4311
## To Do
Test with
- [x] SD 1.x
- [ ] SD 2.x: #4415
- [x] SDXL
## Have you discussed this change with the InvokeAI team?
- See [TAESD Invocation
API](https://discord.com/channels/1020123559063990373/1137857402453119166)
## Have you updated all relevant documentation?
- [ ] No
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
Should be able to import these models:
- [madebyollin/taesd](https://huggingface.co/madebyollin/taesd)
- [madebyollin/taesdxl](https://huggingface.co/madebyollin/taesdxl)
and use them as VAE.
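Outside InvokeAI, the equivalent diffusers usage is roughly:
```python
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")  # drop-in tiny VAE
image = pipe("a photo of an astronaut riding a horse").images[0]
```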
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [x] Some. There are new tests for VaeFolderProbe based on VAE
configurations, but no tests that require the full model weights.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
This is a doc file that was missing from PR #4587. Since that PR was
already merged, I'm pushing it in now.
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] No, because it is trivial
## Have you updated all relevant documentation?
- [X] Yes -- added a new page listing all the command-line scripts and
their most useful options.
## Description
InvokeAI version 2.3 had a script called `invokeai-metadata` that
accepted a list of png images and printed out JSON-formatted embedded
metadata. I used to use the script for sorting and tagging images
outside of the InvokeAI Web UI framework, and I think people might still
find it useful.
This script stopped working in 3.0 and I didn't notice that until just
now. This PR restores it to a functional state.
## Related Tickets & Documents
None
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
Adds a new common component `IAIInformationPopover` that composes JSX to
be rendered within a popover as a tooltip. We were not able to use the
`Tooltip` component provided by chakra because you cannot interact with
elements within those (at least not that I could get working).
This is just a sample over the positive prompt. We need content from
@hipsterusername and @Millu before we can roll this out.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
This change enhances the invocation cache logic to delete cache entries
when the resources to which they refer are deleted.
For example, a cached output may refer to "some_image.png". If that
image is deleted, and this particular cache entry is later retrieved by
a node, that node's successors will receive references to the now
non-existent "some_image.png". When they attempt to use that image, they
will fail.
To resolve this, we need to invalidate the cache when the resources to
which it refers are deleted. Two options:
- Invalidate the whole cache on every image/latents/etc delete
- Selectively invalidate cache entries when their resources are deleted
Node outputs can be any shape, with any number of resource references in
arbitrarily nested pydantic models. Traversing that structure to
identify resources is not trivial.
But invalidating the whole cache is a bit heavy-handed. It would be nice
to be more selective.
Simple solution:
- Invocation outputs' resource references are always string identifiers
- like the image's or latents' name
- Invocation outputs can be stringified, which includes said identifiers
- When the invocation is cached, we store the stringified output
alongside the "live" output classes
- When a resource is deleted, pass its identifier to the cache service,
which can then invalidate any cache entries that refer to it
The images and latents storage services have been outfitted with
`on_deleted()` callbacks, and the cache service registers itself to
handle those events. This logic was copied from `ItemStorageABC`.
`on_changed()` callbacks are also added to the images and latents
services, though these are not currently used. Just following the
existing pattern.
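The registration pattern in question, simplified (illustrative sketch):
```python
from typing import Callable

class ImageFileStorage:
    def __init__(self) -> None:
        self._on_deleted_callbacks: list[Callable[[str], None]] = []

    def on_deleted(self, callback: Callable[[str], None]) -> None:
        self._on_deleted_callbacks.append(callback)

    def delete(self, image_name: str) -> None:
        # ...remove the file and database record...
        for callback in self._on_deleted_callbacks:
            callback(image_name)  # e.g. the cache invalidates entries referencing this name
```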
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
Reproduce the issue on main:
- Create a graph in workflow editor with two connected resize nodes
- Add an image to the first
- Enable cache on both
- Run the graph
- Clear Intermediates (in settings)
- Disable cache on the *second* node
- Run the graph, it should fail
Switch to the PR branch and start over, doing the exact same steps. You
shouldn't get any errors.
Example graph to start with:

## Added/updated tests?
- [~] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
* feat(ui): tweak queue UI components
* fix(ui): manually dispatch queue status query on queue item status change
RTK Query occasionally aborts the query that occurs when the tag is invalidated, especially if multiples of them fire in rapid succession.
This resulted in the queue status and progress bar sometimes not resetting when the queue finishes its last item.
Manually dispatch the query now to get around this. Eventually we should probably move this to a socket event so we don't need to keep responding to socket messages with HTTP requests. Just send it directly via socket.
* chore(ui): remove errant console.logs
* fix(ui): do not accumulate node outputs in outputs area
* fix(ui): fix merge issue
---------
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Add `batch_id` to outbound events. This necessitates adding it to both `InvocationContext` and `InvocationQueueItem`. This allows the canvas to receive images.
When the user enqueues a batch on the canvas, it is expected that all images from that batch are directed to the canvas.
The simplest, most flexible solution is to add the `batch_id` to the invocation context-y stuff. Then everything knows what batch it came from, and we can have the canvas pick up images associated with its list of canvas `batch_id`s.
* fix(config): fix typing issues in `config/`
`config/invokeai_config.py`:
- use `Optional` for things that are optional
- fix typing of `ram_cache_size()` and `vram_cache_size()`
- remove unused and incorrectly typed method `autoconvert_path`
- fix types and logic for `parse_args()`, in which `InvokeAIAppConfig.initconf` *must* be a `DictConfig`, but function would allow it to be set as a `ListConfig`, which presumably would cause issues elsewhere
`config/base.py`:
- use `cls` for first arg of class methods
- use `Optional` for things that are optional
- fix minor type issue related to setting of `env_prefix`
- remove unused `add_subparser()` method, which calls `add_parser()` on an `ArgumentParser` (method only available on the `_SubParsersAction` object, which is returned from `ArgumentParser.add_subparsers()`)
* feat: queued generation and batches
Due to a very messy branch with broad addition of `isort` on `main` alongside it, some git surgery was needed to get an agreeable git history. This commit represents all of the work on queued generation. See PR for notes.
* chore: flake8, isort, black
* fix(nodes): fix incorrect service stop() method
* fix(nodes): improve names of a few variables
* fix(tests): fix up tests after changes to batches/queue
* feat(tests): add unit tests for session queue helper functions
* feat(ui): dynamic prompts is always enabled
* feat(queue): add queue_status_changed event
* feat(ui): wip queue graphs
* feat(nodes): move cleanup til after invoker startup
* feat(nodes): add cancel_by_batch_ids
* feat(ui): wip batch graphs & UI
* fix(nodes): remove `Batch.batch_id` from required
* fix(ui): cleanup and use fixedCacheKey for all mutations
* fix(ui): remove orphaned nodes from canvas graphs
* fix(nodes): fix cancel_by_batch_ids result count
* fix(ui): only show cancel batch tooltip when batches were canceled
* chore: isort
* fix(api): return `[""]` when dynamic prompts generates no prompts
Just a simple fallback so we always have a prompt.
* feat(ui): dynamicPrompts.combinatorial is always on
There seems to be little purpose in turning off combinatorial generation for dynamic prompts, so the toggle is hidden from the UI and combinatorial defaults to true. If we want to expose the option again in the future it's straightforward to do so.
* feat: add queue_id & support logic
* feat(ui): fix upscale button
It prepends the upscale operation to the queue
* feat(nodes): return queue item when enqueuing a single graph
This facilitates one-off graph async workflows in the client.
* feat(ui): move controlnet autoprocess to queue
* fix(ui): fix non-serializable DOMRect in redux state
* feat(ui): QueueTable performance tweaks
* feat(ui): update queue list
Queue items expand to show the full queue item. Just as JSON for now.
* wip threaded session_processor
* feat(nodes,ui): fully migrate queue to session_processor
* feat(nodes,ui): add processor events
* feat(ui): ui tweaks
* feat(nodes,ui): consolidate events, reduce network requests
* feat(ui): cleanup & abstract queue hooks
* feat(nodes): optimize batch permutation
Use a generator to do only as much work as is needed.
Previously, though we only ended up creating exactly as many queue items as was needed, there was still some intermediary work that calculated *all* permutations. When that number was very high, the system had a very hard time and used a lot of memory.
The logic has been refactored to use a generator. Additionally, the batch validators are optimized to return early and use less memory.
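As an illustration of the generator approach (not the actual batch code), `itertools.product` yields one permutation at a time, so only as much work as is needed is ever done:

```py
from itertools import islice, product

def lazy_permutations(axes, needed):
    # product() is lazy; islice() stops after `needed` items, so the full
    # cartesian product is never materialized in memory.
    yield from islice(product(*axes), needed)

# Three axes of 1000 values each is 10^9 combinations, but only 5 are computed:
first_five = list(lazy_permutations([range(1000)] * 3, 5))
```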
* feat(ui): add seed behaviour parameter
This dynamic prompts parameter allows the seed to be randomized per prompt or per iteration:
- Per iteration: Use the same seed for all prompts in a single dynamic prompt expansion
- Per prompt: Use a different seed for every single prompt
"Per iteration" is appropriate for exploring a the latents space with a stable starting noise, while "Per prompt" provides more variation.
* fix(ui): remove extraneous random seed nodes from linear graphs
* fix(ui): fix controlnet autoprocess not working when queue is running
* feat(queue): add timestamps to queue status updates
Also show execution time in queue list
* feat(queue): change all execution-related events to use the `queue_id` as the room, also include `queue_item_id` in InvocationQueueItem
This allows for much simpler handling of queue items.
* feat(api): deprecate sessions router
* chore(backend): tidy logging in `dependencies.py`
* fix(backend): respect `use_memory_db`
* feat(backend): add `config.log_sql` (enables sql trace logging)
* feat: add invocation cache
Supersedes #4574
The invocation cache provides simple node memoization functionality. Nodes that use the cache are memoized and not re-executed if their inputs haven't changed. Instead, the stored output is returned.
## Results
This feature provides anywhere from a significant to a massive performance improvement.
The improvement is most marked on large batches of generations where you only change a couple things (e.g. different seed or prompt for each iteration) and low-VRAM systems, where skipping an extraneous model load is a big deal.
## Overview
A new `invocation_cache` service is added to handle the caching. There's not much to it.
All nodes now inherit a boolean `use_cache` field from `BaseInvocation`. This is a node field and not a class attribute, because specific instances of nodes may want to opt in or out of caching.
The recently-added `invoke_internal()` method on `BaseInvocation` is used as an entrypoint for the cache logic.
To create a cache key, the invocation is first serialized using pydantic's provided `json()` method, skipping the unique `id` field. Then python's very fast builtin `hash()` is used to create an integer key. All implementations of `InvocationCacheBase` must provide a class method `create_key()` which accepts an invocation and outputs a string or integer key.
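For illustration, a conforming implementation could be as small as this (assuming pydantic v1's `.json()`):

```py
def create_key(invocation) -> int:
    # Serialize without the unique `id`, then hash the resulting JSON string -
    # invocations with identical inputs map to identical keys.
    return hash(invocation.json(exclude={"id"}))
```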
## In-Memory Implementation
An in-memory implementation is provided. In this implementation, the node outputs are stored in memory as python classes. The in-memory cache does not persist across application restarts.
Max node cache size is added as `node_cache_size` under the `Generation` config category.
It defaults to 512 - this number is up for discussion, but given that these are relatively lightweight pydantic models, I think it's safe to up this even higher.
Note that the cache isn't storing the big stuff - tensors and images are stored on disk, and outputs include only references to them.
## Node Definition
The default for all nodes is to use the cache. The `@invocation` decorator now accepts an optional `use_cache: bool` argument to override the default of `True`.
Non-deterministic nodes, however, should set this to `False`. Currently, all random-stuff nodes, including `dynamic_prompt`, are set to `False`.
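For example, a non-deterministic node would opt out like this (node metadata illustrative):

```py
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation

@invocation("random_float", title="Random Float", tags=["math"], category="math", use_cache=False)
class RandomFloatInvocation(BaseInvocation):
    # Output is random; a cached result would be wrong, so never memoize.
    ...
```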
The field name `use_cache` is now effectively a reserved field name and possibly a breaking change if any community nodes use this as a field name. In hindsight, all our reserved field names should have been prefixed with underscores or something.
## One Gotcha
Leaf nodes probably want to opt out of the cache, because when their output is served from the cache, their side effects - like saving an image - do not run again.
If you run the same graph multiple times, you only end up with a single image output, because the image storage side-effects are in the `invoke()` method, which is bypassed if we have a cache hit.
## Linear UI
The linear graphs _almost_ just work, but due to the gotcha, we need to be careful about the final image-outputting node. To resolve this, a `SaveImageInvocation` node is added and used in the linear graphs.
This node is similar to `ImagePrimitive`, except it saves a copy of its input image, and has `use_cache` set to `False` by default.
This is now the leaf node in all linear graphs, and is the only node in those graphs with `use_cache == False` _and_ the only node with `is_intermediate == False`.
## Workflow Editor
All nodes now have a footer with a new `Use Cache [ ]` checkbox. It defaults to the value set by the invocation in its python definition, but can be changed by the user.
The workflow/node validation logic has been updated to migrate old workflows to use the new default values for `use_cache`. Users may still want to review the settings that have been chosen. In the event of catastrophic failure when running this migration, the default value of `True` is applied, as this is correct for most nodes.
Users should consider saving their workflows after loading them in and having them updated.
## Future Enhancements - Callback
A future enhancement would be to provide a callback to the `use_cache` flag that would be run as the node is executed to determine, based on its own internal state, if the cache should be used or not.
This would be useful for `DynamicPromptInvocation`, where the deterministic behaviour is determined by the `combinatorial: bool` field.
## Future Enhancements - Persisted Cache
Similar to how the latents storage is backed by disk, the invocation cache could be persisted to the database or disk. We'd need to be very careful about deserializing outputs, but it's perhaps worth exploring in the future.
* fix(ui): fix queue list item width
* feat(nodes): do not send the whole node on every generator progress
* feat(ui): strip out old logic related to sessions
Things like `isProcessing` are no longer relevant with the queue. Removed them all & updated everything to be appropriate for the queue. There may be a few little quirks I've missed...
* feat(ui): fix up param collapse labels
* feat(ui): click queue count to go to queue tab
* tidy(queue): update comment, query format
* feat(ui): fix progress bar when canceling
* fix(ui): fix circular dependency
* feat(nodes): bail on node caching logic if `node_cache_size == 0`
* feat(nodes): handle KeyError on node cache pop
* feat(nodes): bypass cache codepath if the cache is disabled
more better no do thing
* fix(ui): reset api cache on connect/disconnect
* feat(ui): prevent enqueue when no prompts generated
* feat(ui): add queue controls to workflow editor
* feat(ui): update floating buttons & other incidental UI tweaks
* fix(ui): fix missing/incorrect translation keys
* fix(tests): add config service to mock invocation services
invoking needs access to `node_cache_size` in order to run
* optionally remove pause/resume buttons from queue UI
* option to disable prepending
* chore(ui): remove unused file
* feat(queue): remove `order_id` entirely, `item_id` is now an autoinc pk
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description (edit by @blessedcoolant , @RyanJDick )
This PR adds support for IP-Adapters (a technique for image-based
prompts) in Invoke AI. Currently only available in the Node UI.
IP-Adapter Paper: [IP-Adapter: Text Compatible Image Prompt Adapter for
Text-to-Image Diffusion Models](https://arxiv.org/abs/2308.06721)
IP-Adapter reference code: https://github.com/tencent-ailab/IP-Adapter
In order to test, install the following models via the InvokeAI UI:
Image Encoders:
[InvokeAI/ip_adapter_sd_image_encoder](https://huggingface.co/InvokeAI/ip_adapter_sd_image_encoder)
[InvokeAI/ip_adapter_sdxl_image_encoder](https://huggingface.co/InvokeAI/ip_adapter_sdxl_image_encoder)
IP-Adapters:
[InvokeAI/ip_adapter_sd15](https://huggingface.co/InvokeAI/ip_adapter_sd15)
[InvokeAI/ip_adapter_plus_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_sd15)
[InvokeAI/ip_adapter_plus_face_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_face_sd15)
[InvokeAI/ip_adapter_sdxl](https://huggingface.co/InvokeAI/ip_adapter_sdxl)
Old instructions (for reference only):
> In order to test, you need to download and place the following models
in your InvokeAI models directory.
>
> - SD 1.5 - https://huggingface.co/h94/IP-Adapter/tree/main/models -->
Download the models and the `image_encoder` folder to
`models/core/ip_adapters/sd-1`
> - SDXL - https://huggingface.co/h94/IP-Adapter/tree/main/sdxl_models -->
Download the models and the `image_encoder` folder to
`models/core/ip_adapters/sdxl`
>
> This is only temporary. This needs to be handled differently. I
outlined them here.
https://github.com/invoke-ai/InvokeAI/pull/4429#issuecomment-1705776570
## Examples using this PR
### Image variations, no text prompt
The leftmost image in each row is the original image used as input to
IP-Adapter. The remaining images are example outputs with different seeds,
other parameters identical.

## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
A few Missed Translations From the Translation Update
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No
## Description
Mask Edge was set as the default and was producing poor results. I've updated
the default back to Unmasked.
Note: The target branch is `feat/ip-adapter`, not `main`. After a
cursory review here, I'll merge for an in-depth review as part of
https://github.com/invoke-ai/InvokeAI/pull/4429.
## Description
This branch adds model management support for IP-Adapter models. There
are a few notable/unusual aspects to how it is implemented:
- We have defined a model format that works better with our model
manager than the 'official' IP-Adapter repo, and will be hosting the
IP-Adapter models ourselves (See `invokeai/backend/ip_adapter/README.md`
for a description of the expected model formats.)
- The CLIP Vision models and IP-Adapter models are handled independently
in the model manager. The IP-Adapter model info has a reference to the
CLIP model that it is intended to be run with.
- The `BaseModelType.Any` field was added for CLIP Vision models, as
they don't have a clear 1-to-1 association with a particular base model.
## QA Instructions, Screenshots, Recordings
Install the following models via the InvokeAI UI:
Image Encoders:
-
[InvokeAI/ip_adapter_sd_image_encoder](https://huggingface.co/InvokeAI/ip_adapter_sd_image_encoder)
-
[InvokeAI/ip_adapter_sdxl_image_encoder](https://huggingface.co/InvokeAI/ip_adapter_sdxl_image_encoder)
IP-Adapters:
-
[InvokeAI/ip_adapter_sd15](https://huggingface.co/InvokeAI/ip_adapter_sd15)
-
[InvokeAI/ip_adapter_plus_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_sd15)
-
[InvokeAI/ip_adapter_plus_face_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_face_sd15)
-
[InvokeAI/ip_adapter_sdxl](https://huggingface.co/InvokeAI/ip_adapter_sdxl)
The immutable and serializable checks for redux can cause substantial performance issues. The immutable check in particular is pretty heavy. It's only run in dev mode, but it can really slow down the already-slower performance of dev mode.
The most important one for us is serializable, which has far less of a performance impact.
The immutable check is largely redundant because we use immer-backed RTK for everything and immer gives us confidence there.
Disable the immutable check, leaving serializable in.
A few weeks back, we changed how the canvas scales in response to changes in window/panel size.
This introduced a bug: if the user hadn't already clicked the canvas tab once to initialize the stage elements, the stage's dimensions were zero; the calculation of the stage's scale then ends up zero, something is divided by that zero, and Konva dies.
This is only a problem on Chromium browsers - somehow Firefox handles it gracefully.
Now, when calculating the stage scale, never return a 0 - if it's a zero, return 1 instead. This is enough to fix the crash, but the image ends up centered on the top-left corner of the stage (the origin of the canvas).
Because the canvas elements are not initialized at this point (we haven't switched tabs yet), the stage dimensions fall back to (0,0). This means the center of the stage is also (0,0) - so the image is centered on (0,0), the top-left corner of the stage.
To fix this, we need to ensure we:
- Change to the canvas tab before actually setting the image, so the stage elements are able to initialize
- Use `flushSync` to flush DOM updates for this tab change so we actually have DOM elements to work with
- Update the stage dimensions once on first load of it (so in the effect that sets up the resize observer, we update the stage dimensions)
The result now is the expected behaviour - images sent to canvas do not crash and end up in the center of the canvas.
JSX is not serializable, so it cannot be in redux. Non-serializable global state may be put into `nanostores`.
- Use `nanostores` for `customStarUI`
- Use `nanostores` for `headerComponent`
- Re-enable the serializable & immutable check redux middlewares
* Update collections.py
RangeOfSizeInvocation was not taking step into account when generating the end point of the range
* - updated the node description to reflect this change
- added a gt=0 constraint to ensure the range has a positive size
- moved the +1 onto the size, to ensure the range is the requested size in cases where the step is negative
- formatted with Black
* Removed +1 from the range calculation
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
* New classes to support the PromptsFromFileInvocation Class
- PromptPosNegOutput
- PromptSplitNegInvocation
- PromptJoinInvocation
- PromptReplaceInvocation
* - Added PromptsToFileInvocation,
- PromptSplitNegInvocation
- now counts the bracket depth, ensuring the numbers of open and close brackets match
- checks for escaped [ and ] and ignores them, e.g. \[
- PromptReplaceInvocation - now supports a user-supplied regex, and non-regex search is made case-insensitive
* Update prompt.py
created class PromptsToFileInvocationOutput and use it in PromptsToFileInvocation instead of BaseInvocationOutput
* Update prompt.py
* Added schema_extra title and tags for PromptReplaceInvocation, PromptJoinInvocation, PromptSplitNegInvocation and PromptsToFileInvocation
* Added PTFields Collect and Expand
* update to nodes v1
* added ui_type to file_path for PromptToFile
* update params for the primitive types used, remove the ui_type filepath, promptsToFile now only accepts collections until a fix is available
* updated the parameters for the StringOutput primitive
* moved the prompt tools nodes out of the prompt.py into prompt_tools.py
* more rework for v1
* added github link
* updated to use "@invocation"
* updated tags
* Added new nodes PromptStrength and PromptStrengthsCombine
* chore: black
* feat(nodes): add version to prompt nodes
* renamed the prompt-related nodes to string-related names and moved them into a strings.py file. Also moved and renamed the PromptsFromFileInvocation from prompt.py to strings.py. The PTFields nodes still remain in prompt_tools.py for now.
* added , version="1.0.0" to the invocations
* removed the PTField-related nodes and the prompt_tools.py file; all new nodes now live in strings.py
* formatted prompt.py and strings.py with Black and fixed a silly mistake in the new StringSplitInvocation
* - Revert prompt.py back to original
- Update strings.py to contain only StringJoin, StringJoinThree, StringReplace, StringSplitNeg, StringSplit
* applied isort to imports
* fix(nodes): typos in `strings.py`
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
This maps values to labels for multiple-choice fields.
This allows "enum" fields (i.e. `Literal["val1", "val2", ...]` fields) to use code-friendly string values for choices, but present this to the UI as human-friendly labels.
* Added crop option to ImagePasteInvocation
ImagePasteInvocation extended the image with transparency when pasting outside of the base image's bounds. This introduces a new option to crop the resulting image back to the original base image.
* Updated version for ImagePasteInvocation as 3.1.1 was released.
## What type of PR is this? (check all applicable)
- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
**NOTE!!!** This PR is against `feat/ip-adapter`, not `main`. I created
a PR because I made some pretty significant changes that I thought might
spark discussion.
I don't think it makes sense to do a full in-depth review here. If
possible, let's try to agree on the high-level approach and then merge
this and do an in-depth review on the original PR.
High-level changes:
- Split `IPAdapterField` from the `ControlField` and make them separate
inputs on the `DenoiseLatentsInvocation`
- Create context manager that handles patching/un-patching the UNet with
IP-Adapter attention blocks (`IPAdapter.apply_ip_adapter_attention()`)
- Pass IP-Adapter conditioning via `cross_attention_kwargs` rather than
concatenating it to the text embedding. This helps avoid breaking other
features (like long prompts).
- Remove unused blocks of the IP-Adapter implementation and do some
general tidying.
Out of scope:
- I haven't looked at model management yet. I'd like to get this merged
into `feat/ip-adapter` and then look at model management separately.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
There was an issue with the responsiveness of the quick links buttons in
the documentation.
## Related Tickets & Documents
- Related Issue #4455
- Closes#4455
## QA Instructions, Screenshots, Recordings
- On the documentation website, go to the Home page, scroll down to the
quick-links section.
[Home - InvokeAI Stable Diffusion Toolkit
Docs.webm](https://github.com/invoke-ai/InvokeAI/assets/92071471/0a7095c1-9d78-47f2-8da7-9c1e796bea3d)
## Added/updated tests?
- [ ] Yes
- [x] No : _It is a minor change in the documentation website._
## [optional] Are there any post deployment tasks we need to perform? No
We need to parse the config before doing anything related to invocations to ensure that the invocations union picks up on denied nodes.
- Move that to the top of api_app and cli_app
- Wrap subsequent imports in `if True:`, as a hack to satisfy flake8 and not have to noqa every line or the whole file
- Add tests to ensure graph validation fails when using a denied node, and that the invocations union does not have denied nodes (this indirectly provides confidence that the generated OpenAPI schema will not include denied nodes)
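The resulting startup shape is roughly this (module paths abbreviated, per InvokeAI 3.x):

```py
# Parse the config FIRST, so denied nodes are known before the invocations
# union is constructed.
from invokeai.app.services.config import InvokeAIAppConfig

app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()

if True:  # hack: satisfies flake8 E402 without a per-line noqa
    from invokeai.app import invocations  # noqa: F401  # registers all nodes
```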
This simply hides nodes from the workflow editor. The nodes will still work if an API request is made with them. For example, you could hide `iterate` nodes from the workflow editor, but if the Linear UI makes use of those nodes, they will still function.
- Update `AppConfig` with optional property `nodesDenylist: string[]`
- If provided, nodes are filtered out by `type` in the workflow editor
Allow denying and explicitly allowing nodes. When a not-allowed node is used, a pydantic `ValidationError` will be raised.
- When collecting all invocations, check against the allowlist and denylist first. When pydantic constructs any unions related to nodes, the denied nodes will be omitted
- Add `allow_nodes` and `deny_nodes` to `InvokeAIAppConfig`. These are `Union[list[str], None]`, and may be populated with the `type` of invocations.
- When `allow_nodes` is `None`, allow all nodes, else if it is `list[str]`, only allow nodes in the list
- When `deny_nodes` is `None`, deny no nodes, else if it is `list[str]`, deny nodes in the list
- `deny_nodes` overrides `allow_nodes`
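In code, that looks something like this (values illustrative):

```py
from invokeai.app.services.config import InvokeAIAppConfig

# allow_nodes=None allows everything; deny_nodes then removes these two node
# types from the invocations union. Deny always wins over allow.
config = InvokeAIAppConfig(allow_nodes=None, deny_nodes=["iterate", "collect"])
```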
## What type of PR is this? (check all applicable)
3.1.1 Release build & updates
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
Adds a configuration option to fetch metadata and workflows from the API
instead of the image file. Needed for commercial.
Minor corrections to spelling and grammar in the feature request template.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:
This PR should be self-explanatory.
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
Minor corrections to spelling and grammar in the feature request template.
No code or behavioural changes.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
N/A
## Added/updated tests?
- [ ] Yes
- [x] No : _please replace this line with details on why tests
have not been included_
There are no tests for the issue template.
## [optional] Are there any post deployment tasks we need to perform?
I added extra steps to update the cuDNN DLL found in the Torch package,
because it wasn't optimised or didn't use the latest version. Manually
updating it can speed up iteration, though results may differ from card to
card. For example, I went from 3 it/s to a steady 20 it/s.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [x] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
fix(nodes): add version to iterate and collect
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [x] Feature
## Have you discussed this change with the InvokeAI team?
- [x] Yes
## Description
Scale Before Processing Dimensions now respect the Aspect Ratio that is
locked in. This makes it way easier to control the setting when using it
with locked ratios on the canvas.
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [X] Yes
## Description
Running the config script on Macs triggered an error due to the absence of
VRAM on these machines. The VRAM setting is now skipped.
## Added/updated tests?
- [ ] Yes
- [X] No : Will add this test in the near future.
@blessedcoolant Per discussion, have updated codeowners so that we're
not force merging things.
This will, however, necessitate a much more disciplined approval.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [X] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
Add textfontimage node to communityNodes.md
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
fix(ui): fix non-nodes validation logic being applied to nodes invoke
button
For example, if you had an invalid controlnet setup, it would prevent
you from invoking on nodes, when node validation was disabled.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Closes
https://discord.com/channels/1020123559063990373/1028661664519831552/1148431783289966603
## What type of PR is this? (check all applicable)
- [x] Feature
- [x] Optimization
## Have you discussed this change with the InvokeAI team?
- [x] Yes
## Description
# Coherence Mode
A new parameter called Coherence Mode has been added to Coherence Pass
settings. This parameter controls what kind of Coherence Pass is done
after Inpainting and Outpainting.
- Unmasked: This performs a complete unmasked image to image pass on the
entire generation.
- Mask: This performs a masked image to image pass using your input mask
as the coherence mask.
- Mask Edge [DEFAULT] - This performs a masked image to image pass on
the edges of your mask to try and clear out the seams.
# Why The Coherence Masked Modes?
One of the issues with the unmasked coherence pass arises when the diffusion
process is trying to align detailed or organic objects. Because Image to
Image tends to change the image a little bit even at lower strengths, this
results in the paste-back process being slightly misaligned. By
providing the mask to the Coherence Pass, we can try to eliminate this
in those cases. While it will be impossible to address this for every
image out there, having these options will allow the user to automate a
lot of this. For everything else there's manual paint over with inpaint.
# Graph Improvements
The graphs have now been refined quite a bit. We no longer do manual
blurring of the masks for outpainting. This is no longer needed
because we now dilate the mask depending on the blur size while pasting
back. As a result we got rid of quite a few nodes that were handling
this in the older graph.
The graphs are also a lot cleaner now because we now tackle Scaled
Dimensions & Coherence Mode completely independently.
Inpainting results seem very promising, especially with the Mask Edge
mode.
---
# New Infill Methods [Experimental]
We are currently trying out various new infill methods to see which ones
might perform the best in outpainting. We may keep all of them or keep
none. This will be decided as we test more.
## LaMa Infill
- Re-enabled LaMA infill in the UI.
- We are trying to get this to work without a memory overhead.
In order to use LaMa, you need to manually download and place the LaMa
JIT model in `models/core/misc/lama/lama.pt`. You can download the JIT
model from Sanster
[here](https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt)
and rename it to `lama.pt` or you can use the script in the original
LaMA repo to convert the base model to a JIT model yourself.
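For example, the manual download step could be scripted like this (destination path from the notes above, relative to your InvokeAI root):

```py
import urllib.request
from pathlib import Path

dest = Path("models/core/misc/lama/lama.pt")
dest.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(
    "https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
    dest,
)
```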
## CV2 Infill
- Added a new infilling method using CV2's Inpaint.
## Patchmatch Rescaling
The Patchmatch infill input image is now downscaled before being infilled. Patchmatch
can be really slow at large resolutions and this is a pretty decent way
to get around that. Additionally, downscaling might also provide a
better patch match by avoiding larger areas being infilled with
repeating patches. But that's just the theory. Still testing it out.
## [optional] Are there any post deployment tasks we need to perform?
- If we decide to keep LaMA infill, then we will need to host the model
and update the installer to download it as a core model.
Adds my (@dwringer's) released nodes to the community nodes page.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Adds my released nodes -
Depth Map from Wavefront OBJ
Enhance Image
Generative Grammar-Based Prompt Nodes
Ideal Size Stepper
Image Compositor
Final Size & Orientation / Random Switch (Integers)
Text Mask (Simple 2D)
* Consolidated saturation/luminosity adjust.
Now allows increasing and inverting.
Accepts any color PIL format and channel designation.
* Updated docs/nodes/defaultNodes.md
* shortened tags list to channel types only
* fix typo in mode list
* split features into offset and multiply nodes
* Updated documentation
* Change invert to discrete boolean.
Previous math was unclear and had issues with 0 values.
* chore: black
* chore(ui): typegen
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Revised links to my node py files, replacing them with links to independent repos. Additionally I consolidated some nodes together (Image and Mask Composition Pack, Size Stepper nodes).
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
This PR is based on #4423 and should not be merged until it is merged.
[feat(nodes): add version to node
schemas](c179d4ccb7)
The `@invocation` decorator is extended with an optional `version` arg.
On execution of the decorator, the version string is parsed using the
`semver` package (this was an indirect dependency and has been added to
`pyproject.toml`).
All built-in nodes are set with `version="1.0.0"`.
The version is added to the OpenAPI Schema for consumption by the
client.
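Usage on a node looks like this (node metadata illustrative):

```py
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation

@invocation(
    "resize_image",      # node type
    title="Resize Image",
    tags=["image"],
    category="image",
    version="1.0.0",     # parsed with `semver` when the decorator runs
)
class ResizeImageInvocation(BaseInvocation):
    ...
```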
[feat(ui): handle node
versions](03de3e4f78)
- Node versions are now added to node templates
- Node data (including in workflows) include the version of the node
- On loading a workflow, we check to see if the node and template
versions match exactly. If not, a warning is logged to console.
- The node info icon (top-right corner of node, which you may click to
open the notes editor) now shows the version and mentions any issues.
- Some workflow validation logic has been shifted around and is now
executed in a redux listener.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Closes#4393
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
Loading old workflows should prompt a warning, and the node status icon
should indicate some action is needed.
## [optional] Are there any post deployment tasks we need to perform?
I've updated the default workflows:
- Bump workflow versions from 1.0 to 1.0.1
- Add versions for all nodes in the workflows
- Test workflows
[Default
Workflows.zip](https://github.com/invoke-ai/InvokeAI/files/12511911/Default.Workflows.zip)
I'm not sure where these are being stored right now @Millu
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
### Polymorphic Fields
Initial support for polymorphic field types. Polymorphic types are a
single instance of, or a list of, a specific type. For example, `Union[str,
list[str]]`.
Polymorphics do not yet have support for direct input in the UI (will
come in the future). They will be forcibly set as Connection-only
fields, in which case users will not be able to provide direct input to
the field.
If a polymorphic should present as a singleton type - which would allow
direct input - the node must provide an explicit type hint.
For example, `DenoiseLatents`' `CFG Scale` is polymorphic, but in the
node editor, we want to present this as a number input. In the node
definition, the field is given `ui_type=UIType.Float`, which tells the
UI to treat this as a `float` field.
The connection validation logic will prevent connecting a collection to
`CFG Scale` in this situation, because it is typed as `float`. The
workaround is to disable validation from the settings to make this
specific connection. A future improvement will resolve this.
### Collection Fields
This also introduces better support for collection field types. Like
polymorphics, collection types are parsed automatically by the client
and do not need any specific type hints.
Also like polymorphics, there is no support yet for direct input of
collection types in the UI.
### Other Changes
- Disabling validation in workflow editor now displays the visual hints
for valid connections, but lets you connect to anything.
- Added `ui_order: int` to `InputField` and `OutputField`. The UI will
use this, if present, to order fields in a node UI. See usage in
`DenoiseLatents` for an example.
- Updated the field colors - duplicate colors have just been lightened a
bit. It's not perfect but it was a quick fix.
- Field handles for collections are the same color as their single
counterparts, but have a dark dot in the center of them.
- Field handles for polymorphics are a rounded square with dot in the
middle.
- Removed all fields that just render `null` from `InputFieldRenderer`,
replaced with a single fallback
- Removed logic in `zValidatedWorkflow`, which checked for existence of
node templates for each node in a workflow. This logic introduced a
circular dependency, due to importing the global redux `store` in order
to get the node templates within a zod schema. It's actually fine to
just leave this out entirely; The case of a missing node template is
handled by the UI. Fixing it otherwise would introduce a substantial
headache.
- Fixed the `ControlNetInvocation.control_model` field default, which
was a string when the field shouldn't have had a default at all.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Closes#4266
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
Add this polymorphic float node to the end of your
`invokeai/app/invocations/primitives.py`:
```py
@invocation("float_poly", title="Float Poly Test", tags=["primitives", "float"], category="primitives")
class FloatPolyInvocation(BaseInvocation):
    """A float polymorphic primitive value"""

    value: Union[float, list[float]] = InputField(default_factory=list, description="The float value")

    def invoke(self, context: InvocationContext) -> FloatOutput:
        return FloatOutput(value=self.value[0] if isinstance(self.value, list) else self.value)
```
Head over to nodes and try connecting up some collection and polymorphic inputs.
## What type of PR is this? (check all applicable)
- [x] Feature
## Have you discussed this change with the InvokeAI team?
- [x] No
## Description
Automatically infer the name of the model from the supplied path if the
model name slot is empty. If the model name is not empty, we presume
that the user has entered a model name or made changes to it, and we do
not touch it, so as not to override user changes.
## Related Tickets & Documents
- Addresses: #4443
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
fix(ui): clicking node collapse button does not bring node to front
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue
https://discord.com/channels/1020123559063990373/1130288930319761428/1147333454632071249
- Closes#4438
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [X] Yes
## Description
There is a call in `baseinvocation.invocation_output()` to
`cls.__annotations__`. However, in Python 3.9 not all objects have this
attribute. I have worked around the limitation in the way described in
https://docs.python.org/3/howto/annotations.html , which supposedly will
produce the same results in 3.9, 3.10 and 3.11.
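The pattern recommended by that HOWTO is roughly:

```py
def get_own_annotations(cls) -> dict:
    # Works the same on 3.9, 3.10 and 3.11: read the class's own
    # __annotations__ (if any) from its __dict__, avoiding both missing
    # attributes and inherited annotations.
    return cls.__dict__.get("__annotations__", {})
```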
## Related Tickets & Documents
See
https://discord.com/channels/1020123559063990373/1146897072394608660/1146939182300799017
for first bug report.
## What type of PR is this? (check all applicable)
- [x] Cleanup
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
Used https://github.com/albertas/deadcode to get a rough overview of what
is not used, but checked everything manually. App still runs.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Closes#4424
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
Ensure it doesn't explode when you run it.
* add StableDiffusionXLInpaintPipeline to probe list
* add StableDiffusionXLInpaintPipeline to probe list
* Blackified (?)
---------
Authored-by: Lincoln Stein <lstein@gmail.com>
Mucked about with to get it merged by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Add a click handler to the node wrapper component that exclusively selects that node, if no other modifier keys are held.
Technically I believe this means we are doubling up on the selection logic, as reactflow handles this internally also. But this is by far the most reliable way to fix the UX.
## What type of PR is this? (check all applicable)
- [x] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
The logic that introduced a circular import was actually extraneous. I
have entirely removed it.
This fixes the frontend lint test.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
Copied into InvokeAI since IP-Adapter repo is not a package. Is there a better way to do this for non-packaged Python code while still keeping InvokeAI install easy?
```diff
-description: Commit a idea or Request a new feature
+description: Contribute a idea or request a new feature
 title: '[enhancement]:'
 labels: ['enhancement']
 # assignees:
@@ -9,14 +9,14 @@ body:
   - type: markdown
     attributes:
       value: |
-        Thanks for taking the time to fill out this Feature request!
+        Thanks for taking the time to fill out this feature request!
   - type: checkboxes
     attributes:
       label: Is there an existing issue for this?
       description: |
         Please make use of the [search function](https://github.com/invoke-ai/InvokeAI/labels/enhancement)
-        to see if a simmilar issue already exists for the feature you want to request
+        to see if a similar issue already exists for the feature you want to request
       options:
         - label: I have searched the existing issues
           required: true
@@ -34,12 +34,9 @@ body:
     id: whatisexpected
     attributes:
       label: What should this feature add?
-      description: Please try to explain the functionality this feature should add
+      description: Explain the functionality this feature should add. Feature requests should be for single features. Please create multiple requests if you want to request multiple features.
       placeholder: |
-        Instead of one huge textfield, it would be nice to have forms for bug-reports, feature-requests, ...
-        Great benefits with automatic labeling, assigning and other functionalitys not available in that form
-        via old-fashioned markdown-templates. I would also love to see the use of a moderator bot 🤖 like
-        https://github.com/marketplace/actions/issue-moderator-with-commands to auto close old issues and other things
+        I'd like a button that creates an image of banana sushi every time I press it. Each image should be different. There should be a toggle next to the button that enables strawberry mode, in which the images are of strawberry sushi instead.
     validations:
       required: true
@@ -51,6 +48,6 @@ body:
   - type: textarea
     attributes:
-      label: Aditional Content
+      label: Additional Content
       description: Add any other context or screenshots about the feature request here.
-      placeholder: This is a Mockup of the design how I imagine it <screenshot>
+      placeholder: This is a mockup of the design how I imagine it <screenshot>
```
@ -123,7 +123,7 @@ and go to http://localhost:9090.
### Command-Line Installation (for developers and users familiar with Terminals)
You must have Python 3.9 through 3.11 installed on your machine. Earlier or
You must have Python 3.10 through 3.11 installed on your machine. Earlier or
later versions are not supported.
Node.js also needs to be installed along with yarn (can be installed with
the command `npm install -g yarn` if needed)
@ -368,9 +368,9 @@ InvokeAI offers a locally hosted Web Server & React Frontend, with an industry l
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Node Architecture & Editor (Beta)*
### *Workflows & Nodes*
InvokeAI's backend is built on a graph-based execution architecture. This allows for customizable generation pipelines to be developed by professional users looking to create specific workflows to support their production use-cases, and will be extended in the future with additional capabilities.
InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of nodes based workflows with the easy of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
### *Board & Gallery Management*
@ -383,8 +383,9 @@ Invoke AI provides an organized gallery system for easily storing, accessing, an
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Workflow creation & management*
- *Node-Based Architecture*
- *Node-Based Plug-&-Play UI (Beta)*
### Latest Changes
@ -395,20 +396,18 @@ Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
### Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
problems and other issues. For more help, please join our [Discord][discord link]
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
If you are unfamiliar with how
to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing:
@ -5,7 +5,7 @@ All commands are to be run from the `docker` directory: `cd docker`
#### Linux
1. Ensure BuildKit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
- The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
3. Ensure docker daemon is able to access the GPU.
- You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
@ -20,7 +20,6 @@ This is done via Docker Desktop preferences
## Quickstart
1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy env.sample .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
@ -42,20 +41,22 @@ The Docker daemon on the system must be already set up to use the GPU. In case o
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
Example (values are optional, but setting `INVOKEAI_ROOT` is highly recommended):
```bash
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
```
Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
## Even Moar Customizing!
See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.
### Reconfigure the runtime directory
@ -63,7 +64,7 @@ Can be used to download additional models from the supported model list
In conjunction with `INVOKEAI_ROOT`, it can also be used to initialize a runtime directory.
Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
# Methods of Contributing to Invoke AI
Anyone who wishes to contribute to InvokeAI, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
**New Contributors:** If you’re unfamiliar with contributing to open source projects, take a look at our [new contributor guide](contribution_guides/newContributorChecklist.md).
#### Development
If you’d like to help with development, please see our [development guide](contribution_guides/development.md). If you’re unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
#### Nodes
If you’d like to help with nodes, please see our [nodes contribution guide](/nodes/contributingNodes). If you’re unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
#### Support and Triaging
Helping support other users in [Discord](https://discord.gg/ZmtBAhwWhy) and on GitHub is a valuable form of contribution that we greatly appreciate.
We receive many issues and requests for help from users. We're limited in bandwidth relative to our user base, so providing answers to questions or helping identify the causes of issues is very helpful. By doing this, you enable us to spend time on the highest priority work.
#### Documentation
If you’d like to help with documentation, please see our [documentation guide](contribution_guides/documentation.md).
#### Translation
If you'd like to help with translation, please see our [translation guide](contribution_guides/translation.md).
#### Tutorials
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community.
# Contributors
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
# Code of Conduct
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
@ -47,8 +49,7 @@ By making a contribution to this project, you certify that:
This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project.
This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
# Support
For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy).
See the [tests documentation](./TESTS.md) for information about running and writing tests.
### Reloading Changes
Experimenting with changes to the Python source code is a drag if you have to re-start the server —
@ -167,6 +142,23 @@ and so you'll have access to the same python environment as the InvokeAI app.
This is _super_ handy.
#### Enabling Type-Checking with Pylance
We use python's typing system in InvokeAI. PR reviews will include checking that types are present and correct. We don't enforce types with `mypy` at this time, but that is on the horizon.
Using a code analysis tool to automatically type check your code is very important when writing with types. These tools provide immediate feedback in your editor when types are incorrect, and following their suggestions leads to fewer runtime bugs.
Pylance, installed at the beginning of this guide, is the de-facto python LSP (language server protocol). It provides type checking in the editor (among many other features). Once installed, you do need to enable type checking manually:
- Open a python file
- Look along the status bar in VSCode for `{ } Python`
- Click the `{ }`
- Turn type checking on - basic is fine
You'll now see red squiggly lines where type issues are detected. Hover your cursor over the indicated symbols to see what's wrong.
In 99% of cases when the type checker says there is a problem, there really is a problem, and you should take some time to understand and resolve what it is pointing out.
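For example, a contrived snippet like this (not from the InvokeAI codebase) gets flagged immediately:

```python
def snap_to_multiple_of_8(width: int, height: int) -> tuple[int, int]:
    """Round both dimensions down to the nearest multiple of 8."""
    return (width // 8 * 8, height // 8 * 8)

# Pylance flags this call: a `str` is passed where an `int` is expected.
snap_to_multiple_of_8("512", 512)
```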
#### Debugging configs with `launch.json`
Debugging configs are managed in a `launch.json` file. Like most VSCode configs,
We use `pytest` to run the backend python tests. (See [pyproject.toml](/pyproject.toml) for the default `pytest` options.)
## Fast vs. Slow
All tests are categorized as either 'fast' (no test annotation) or 'slow' (annotated with the `@pytest.mark.slow` decorator).
'Fast' tests are run to validate every PR, and are fast enough that they can be run routinely during development.
'Slow' tests are currently only run manually on an ad-hoc basis. In the future, they may be automated to run nightly. Most developers are only expected to run the 'slow' tests that directly relate to the feature(s) that they are working on.
As a rule of thumb, tests should be marked as 'slow' if there is a chance that they take >1s (e.g. on a CPU-only machine with slow internet connection). Common examples of slow tests are tests that depend on downloading a model, or running model inference.
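For illustration, here is a hypothetical pair of tests (not from the actual suite) showing how the marker separates the two groups:

```python
import pytest

# A fast test: no annotation needed; it runs in the default `-m "not slow"` selection.
def test_prompt_is_preserved():
    assert "banana sushi".strip() == "banana sushi"

# A slow test: excluded from the default run, selected with `-m "slow"`.
@pytest.mark.slow
def test_model_inference():
    ...  # would download a model and run inference, well over the ~1s budget
```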
## Running Tests
Below are some common test commands:
```bash
# Run the fast tests. (This implicitly uses the configured default option: `-m "not slow"`.)
pytest tests/
# Equivalent command to run the fast tests.
pytest tests/ -m "not slow"
# Run the slow tests.
pytest tests/ -m "slow"
# Run the slow tests from a specific file.
pytest tests/path/to/slow_test.py -m "slow"
# Run all tests (fast and slow).
pytest tests -m ""
```
## Test Organization
All backend tests are in the [`tests/`](/tests/) directory. This directory mirrors the organization of the `invokeai/` directory. For example, tests for `invokeai/model_management/model_manager.py` would be found in `tests/model_management/test_model_manager.py`.
TODO: The above statement is aspirational. A re-organization of legacy tests is required to make it true.
## Tests that depend on models
There are a few things to keep in mind when adding tests that depend on models.
1. If a required model is not already present, it should automatically be downloaded as part of the test setup.
2. If a model is already downloaded, it should not be re-downloaded unnecessarily.
3. Take reasonable care to keep the total number of models required for the tests low. Whenever possible, re-use models that are already required for other tests. If you are adding a new model, consider including a comment to explain why it is required/unique.
There are several utilities to help with model setup for tests. Here is a sample test that depends on a model:
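The sketch below illustrates the shape such a test takes; `install_test_model_if_missing` is a hypothetical helper standing in for the suite's real model-setup utilities:

```python
from pathlib import Path

import pytest

# Hypothetical helper, for illustration only: the real utilities live in the
# InvokeAI test suite and handle the download-once/reuse semantics.
def install_test_model_if_missing(model_id: str, cache_dir: Path) -> Path:
    target = cache_dir / model_id.replace("/", "__")
    if not target.exists():          # point 2: never re-download a cached model
        target.mkdir(parents=True)   # point 1: fetch on first use (fetch elided here)
    return target

@pytest.mark.slow  # model setup makes this a 'slow' test
def test_model_is_available(tmp_path):
    model_dir = install_test_model_if_missing("author/model-name", tmp_path)
    assert model_dir.exists()
```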
To review test coverage, append `--cov` to your pytest command:
```bash
pytest tests/ --cov
```
Test outcomes and coverage will be reported in the terminal. In addition, a more detailed report is created in both XML and HTML format in the `./coverage` folder. The HTML output is particularly helpful in identifying untested statements where coverage should be improved. The HTML report can be viewed by opening `./coverage/html/index.html`.
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with Stable Diffusion and image generation concepts is helpful, but not essential.
## **Get Started**
To get started, take a look at our [new contributors checklist](newContributorChecklist.md).
Once you're set up, for more information, you can review the documentation specific to your area of interest:
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md), [translation](translation.md) or helping support other users and triage issues as they're reported in GitHub.
There are two paths to making a development contribution:
@ -23,69 +30,20 @@ There are two paths to making a development contribution:
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python and Typescript’s typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.
## **What does the Code of Conduct mean for me?**
Our [Code of Conduct](../../CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do the best to ensure that the abuser is reprimanded appropriately, per our code.
@ -10,4 +10,4 @@ When updating or creating documentation, please keep in mind InvokeAI is a tool
## Help & Questions
Please ping @imic or @hipsterusername in the [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.
If you're a new contributor to InvokeAI or Open Source Projects, this is the guide for you.
## New Contributor Checklist
- [x] Set up your local development environment & fork of InvokeAI by following [the steps outlined here](../../installation/020_INSTALL_MANUAL.md#developer-install)
- [x] Set up your local tooling with [this guide](InvokeAI/contributing/LOCAL_DEVELOPMENT/#developing-invokeai-in-vscode). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Familiarize yourself with [Git](https://www.atlassian.com/git) & our project structure by reading through the [development documentation](development.md)
- [x] Join the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord
- [x] Choose an issue to work on! This can be achieved by asking in the #dev-chat channel, tackling a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) or finding an item on the [roadmap](https://github.com/orgs/invoke-ai/projects/7). If nothing in any of those places catches your eye, feel free to work on something of interest to you!
- [x] Make your first Pull Request with the guide below
- [x] Happy development! Don't be afraid to ask for help - we're happy to help you contribute!
## How do I make a contribution?
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are setup for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add -A
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using
```bash
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository. If you're not sure how to, [follow this guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title an issue like so "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you’d like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python and Typescript’s typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.
@ -211,8 +211,8 @@ Here are the invoke> command that apply to txt2img:
| `--facetool <name>` | `-ft <name>` | `-ft gfpgan` | Select face restoration algorithm to use: gfpgan, codeformer |
| `--codeformer_fidelity` | `-cf <float>` | `0.75` | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
| `--save_original` | `-save_orig` | `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series of riffs on a starting image. See [Variations](VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
| `--h_symmetry_time_pct <float>` | | `None` | Create symmetry along the X axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
| `--v_symmetry_time_pct <float>` | | `None` | Create symmetry along the Y axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
You can fix a broken `invokeai.yaml` by deleting it and running the
configuration script again -- option [6] in the launcher, "Re-run the
configure script".
#### Reading Environment Variables
@ -159,7 +159,7 @@ groups in `invokeai.yaml`:
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
Command-line users can launch the model installer using the command
`invokeai-model-install`.
_Be aware that some ControlNet models require additional code
functionality in order to work properly, so just installing a
@ -65,6 +46,17 @@ third-party ControlNet model may not have the desired effect._ Please
read and follow the documentation for installing a third party model
not currently included among InvokeAI's default list.
Currently InvokeAI **only** supports 🤗 Diffusers-format ControlNet models. These are
folders that contain a `config.json` file and one or both of
`diffusion_pytorch_model.safetensors` and
`diffusion_pytorch_model.fp16.safetensors`. The name of the folder is
the name of the model.
🤗 Diffusers-format ControlNet models are available at HuggingFace
(http://huggingface.co) and accessed via their repo IDs (identifiers
in the format "author/modelname").
#### ControlNet Models
The models currently supported include:
**Canny**:
@ -96,6 +88,8 @@ A model that generates normal maps from input images, allowing for more realisti
**Image Segmentation**:
A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)
**QR Code Monster**:
A model that helps generate creative QR codes that still scan. Can also be used to create images with text, logos or shapes within them.
**Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
@ -104,7 +98,7 @@ The OpenPose control model allows for the identification of the general pose of
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.
**Tile**:
The Tile model fills in details to match the existing image rather than the prompt. The Tile Model is a versatile tool that offers a range of functionalities. Its primary capabilities can be boiled down to two main behaviors:
@ -117,12 +111,10 @@ The Tile Model can be a powerful tool in your arsenal for enhancing image qualit
With Pix2Pix, you can input an image into the controlnet, and then "instruct" the model to change it using your prompt. For example, you can say "Make it winter" to add more wintry elements to a scene.
**Inpaint**: Coming Soon - Currently this model is available but not functional on the Canvas. An upcoming release will provide additional capabilities for using this model when inpainting.
Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
### Using ControlNet
To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
@ -134,3 +126,55 @@ Weight - Strength of the Controlnet model applied to the generation for the sect
Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the ControlNet applied.
Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it when you Invoke.
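One way to picture the Start/End behaviour is as a simple gate on the denoising steps. This is a minimal sketch of the idea, not InvokeAI's implementation:

```python
def control_is_active(step: int, total_steps: int, begin: float, end: float) -> bool:
    """Apply control only while the current step falls inside [begin, end],
    where 0.0 is the first step and 1.0 is the last."""
    progress = step / max(total_steps - 1, 1)
    return begin <= progress <= end

# With Start=0.0 and End=0.5, only the first half of a 30-step generation is controlled.
assert control_is_active(step=10, total_steps=30, begin=0.0, end=0.5)
assert not control_is_active(step=25, total_steps=30, begin=0.0, end=0.5)
```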
## T2I-Adapter
[T2I-Adapter](https://github.com/TencentARC/T2I-Adapter) is a tool similar to ControlNet that allows for control over the generation process by providing control information during the generation process. T2I-Adapter models tend to be smaller and more efficient than ControlNets.
#### Installation
To install T2I-Adapter Models:
1. The easiest way to install models is
to use the InvokeAI model installer application. Use the
`invoke.sh`/`invoke.bat` launcher to select item [5] and then navigate
to the T2I-Adapters section. Select the models you wish to install and
press "APPLY CHANGES". You may also enter additional HuggingFace
repo_ids in the "Additional models" textbox.
2. Using the "Add Model" function of the model manager, enter the HuggingFace Repo ID of the T2I-Adapter. The ID is in the format "author/repoName"
#### Usage
Each T2I-Adapter has two settings that are applied:
Weight - Strength of the model applied to the generation for the section, defined by start/end.
Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the T2I-Adapter applied.
Additionally, each section can be expanded with the "Show Advanced" button in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it during the generation process.
**Note:** T2I-Adapter models and ControlNet models cannot currently be used together.
## IP-Adapter
[IP-Adapter](https://ip-adapter.github.io) is a tool that allows for image prompt capabilities with text-to-image diffusion models. IP-Adapter works by analyzing the given image prompt to extract features, then passing those features to the UNet along with any other conditioning provided.
There are several ways to install IP-Adapter models with an existing InvokeAI installation:
1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, option [4] to download models.
2. Through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](https://www.models.invoke.ai). To do this, copy the repo ID from the desired model page, and paste it in the Add Model field of the model manager. **Note:** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
3. **Advanced, not recommended:** Manually download the IP-Adapter and Image Encoder files. Image Encoder folders should be placed in the `models/any/clip_vision` folder. IP-Adapter model folders should be placed in the `ip_adapter` folder of the relevant base model folder of the InvokeAI root directory. For example, for the SDXL IP-Adapter, files should be added to the `models/sdxl/ip_adapter/` folder.
#### Using IP-Adapter
IP-Adapter can be used by navigating to the *Control Adapters* options and enabling IP-Adapter.
IP-Adapter requires an image to be used as the Image Prompt. It can also be used in conjunction with text prompts, Image-to-Image, Inpainting, Outpainting, ControlNets and LoRAs.
Each IP-Adapter has two settings:
* Weight - Strength of the IP-Adapter model applied to the generation for the section, defined by start/end
* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the IP-Adapter applied.
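To give a feel for what Weight does, here is a toy sketch of the scaling idea. This is an illustration only, not InvokeAI's or IP-Adapter's actual math:

```python
def apply_image_prompt(base_conditioning: list[float],
                       adapter_features: list[float],
                       weight: float) -> list[float]:
    """Scale the adapter's contribution before it joins the other conditioning."""
    return [b + weight * a for b, a in zip(base_conditioning, adapter_features)]

# weight=0.0 ignores the image prompt entirely; higher values push the
# generation further toward the image prompt's features.
print(apply_image_prompt([0.1, 0.2], [1.0, -1.0], weight=0.5))  # [0.6, -0.3]
```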
InvokeAI provides the ability to merge two or three diffusers-type models into a new merged model. The
resulting model will combine characteristics of the originals, and can
be used to teach an old model new tricks.
## How to Merge Models
Model Merging can be done by navigating to the Model Manager and clicking the "Merge Models" tab. From there, you can select the models and settings you want to use to merge the models.
## Settings
* Model Selection: there are three multiple choice fields that
display all the diffusers-style models that InvokeAI knows about.
If you do not see the model you are looking for, then it is probably
a legacy checkpoint model and needs to be converted using the
"Convert" option in the Web-based Model Manager tab.
You must select at least two models to merge. The third can be left
at "None" if you desire.
* Alpha: This is the ratio to use when combining models. It ranges
from 0 to 1. The higher the value, the more weight is given to the
second and (optionally) third models. So if you have two models named "A"
and "B", an alpha value of 0.25 will give you a merged model that is
75% A and 25% B (see the sketch after this list).
* Interpolation Method: This is the method used to combine
weights. The options are "weighted_sum" (the default), "sigmoid",
"inv_sigmoid" and "add_difference". Each produces slightly different
results. When three models are in use, only "add_difference" is
available.
* Save Location: The location you want the merged model to be saved in. Default is in the InvokeAI root folder
* Name for merged model: This is the name for the new model. Please
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".
* Ignore Mismatches / Force: Not all models are compatible with each other. The merge
script will check for compatibility and refuse to merge ones that
are incompatible. Set this checkbox to try merging anyway.
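To make the Alpha and Interpolation settings concrete, here is a toy sketch of the default "weighted_sum" method. It is an illustration only, not the merge code InvokeAI actually uses:

```python
def weighted_sum(model_a: dict, model_b: dict, alpha: float) -> dict:
    """Blend two state dicts per-weight: alpha=0.0 returns A, alpha=1.0 returns B."""
    return {key: (1.0 - alpha) * model_a[key] + alpha * model_b[key] for key in model_a}

# alpha=0.25 keeps 75% of A and takes 25% from B.
merged = weighted_sum({"w": 1.0}, {"w": 2.0}, alpha=0.25)
assert merged["w"] == 1.25
```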
You may run the merge script by starting the invoke launcher
(`invoke.sh` or `invoke.bat`) and choosing the option (4) for _merge
models_. This will launch a text-based interactive user interface that
prompts you to select the models to merge, how to merge them, and the
merged model name.
@ -40,34 +74,4 @@ this to get back.
If the merge runs successfully, it will create a new diffusers model
under the selected name and register it with InvokeAI.
If the model you are using has parentheses () or speech marks "" as part of its
syntax, you will need to "escape" these using a backslash, so that `(my_keyword)`
@ -246,7 +246,7 @@ To create a Dynamic Prompt, follow these steps:
Within the braces, separate each option using a vertical bar |.
If you want to include multiple options from a single group, prefix with the desired number and $$.
For instance: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {style1|style2|style3}.
### How Dynamic Prompts Work
Once a Dynamic Prompt is configured, the system generates an array of combinations using the options provided. Each group of options in curly braces is treated independently, with the system selecting one option from each group. For a prefixed set (e.g., 2$$), the system will select two distinct options.
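As a rough illustration of that expansion, here is a toy re-implementation. It handles only the basic `{a|b|c}` syntax and ignores the `N$$` prefix and InvokeAI's other features:

```python
import itertools
import re

def expand_dynamic_prompt(prompt: str) -> list[str]:
    # Split into alternating literal text and {a|b|c} option groups.
    parts = re.split(r"\{([^{}]*)\}", prompt)
    pools = [[p] if i % 2 == 0 else p.split("|") for i, p in enumerate(parts)]
    # One option is drawn from each group; the product is every combination.
    return ["".join(combo) for combo in itertools.product(*pools)]

print(expand_dynamic_prompt("A {house|lodge} in {summer|winter}"))
# ['A house in summer', 'A house in winter', 'A lodge in summer', 'A lodge in winter']
```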
@ -273,3 +273,36 @@ Below are some useful strategies for creating Dynamic Prompts:
Experiment with different quantities for the prefix. For example, 3$$ will select three distinct options.
Be aware of coherence in your prompts. Although the system can generate all possible combinations, not all may semantically make sense. Therefore, carefully choose the options for each group.
Always review and fine-tune the generated prompts as needed. While Dynamic Prompts can help you generate a multitude of combinations, the final polishing and refining remain in your hands.
## SDXL Prompting
Prompting with SDXL is slightly different than prompting with SD1.5 or SD2.1 models - SDXL expects a prompt _and_ a style.
### Prompting
<figure markdown>

</figure>
In the prompt box, enter a positive or negative prompt as you normally would.
For the style box you can enter a style that you want the image to be generated in. You can use styles from this example list, or any other style you wish: anime, photographic, digital art, comic book, fantasy art, analog film, neon punk, isometric, low poly, origami, line art, cinematic, 3d model, pixel art, etc.
### Concatenated Prompts
InvokeAI also has the option to concatenate the prompt and style inputs, by pressing the "link" button in the Positive Prompt box.
This concatenates the prompt & style inputs, and passes the joined prompt and style to the SDXL model.

@ -57,7 +57,9 @@ Prompts provide the models directions on what to generate. As a general rule of
Models are the magic that power InvokeAI. These files represent the output of training a machine on understanding massive amounts of images - providing them with the capability to generate new images using just a text description of what you’d like to see. (Like Stable Diffusion!)
Invoke offers a simple way to download several different models upon installation, but many more can be discovered online, including at https://models.invoke.ai
Each model can produce a unique style of output, based on the images it was trained on - Try out different models to see which best fits your creative vision!
- *Models that contain “inpainting” in the name are designed for use with the inpainting feature of the Unified Canvas*
To install InvokeAI with virtual environments and the PIP package
manager, please follow these steps:
1. Please make sure you are using Python 3.10 through 3.11. The rest of the install
procedure depends on this and will not work with other versions:
@ -256,6 +256,10 @@ manager, please follow these steps:
**highly recommended** if your virtual environment is located outside of
your runtime directory.
!!! tip
    On Linux, it is recommended to run invokeai with the following env var: `MALLOC_MMAP_THRESHOLD_=1048576`. For example: `MALLOC_MMAP_THRESHOLD_=1048576 invokeai --web`. This helps to prevent memory fragmentation that can lead to memory accumulation over time. This env var is set automatically when running via `invoke.sh`.
10. Render away!
Browse the [features](../features/index.md) section to learn about all the
@ -287,7 +291,7 @@ manager, please follow these steps:
Leave off the `--gui` option to run the script using command-line arguments. Pass the `--help` argument
to get usage instructions.
## Developer Install
If you have an interest in how InvokeAI works, or you would like to
add features or bugfixes, you are encouraged to install the source
@ -296,18 +300,29 @@ code for InvokeAI. For this to work, you will need to install the
on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md).
If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, else the two may interfere.
> **Why do I need the frontend toolchain**?
>
> The InvokeAI project uses trunk-based development. That means our `main` branch is the development branch, and releases are tags on that branch. Because development is very active, we don't keep an updated build of the UI in `main` - we only build it for production releases.
>
> That means that between releases, to have a functioning application when running directly from the repo, you will need to run the UI in dev mode or build it regularly (any time the UI code changes).
1. Create a fork of the InvokeAI repository through the GitHub UI or [this link](https://github.com/invoke-ai/InvokeAI/fork)
Be sure to pass `-e` (for an editable install) and don't forget the
dot ("."). It is part of the command.
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md) and do a production build of the UI as described.
6. You can now run `invokeai` and its related commands. The code will be
read from the repository, so that you can edit the .py source files
and watch the code's behavior change.
When you pull in new changes to the repo, be sure to re-build the UI.
7. If you wish to contribute to the InvokeAI project, you are
encouraged to establish a GitHub account and "fork"
https://github.com/invoke-ai/InvokeAI into your own copy of the
repository. You can then use GitHub functions to create and submit
### cuDNN Installation for 40/30 Series Optimization (Optional)
1. Find the InvokeAI folder
2. Click on .venv folder - e.g., YourInvokeFolderHere\\.venv
3. Click on Lib folder - e.g., YourInvokeFolderHere\\.venv\Lib
4. Click on site-packages folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages
5. Click on Torch directory - e.g., YourInvokeFolderHere\InvokeAI\\.venv\Lib\site-packages\torch
6. Click on the lib folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib
7. Copy everything inside the folder and save it elsewhere as a backup.
8. Go to __https://developer.nvidia.com/cudnn__
9. Login or create an Account.
10. Choose the newer version of cuDNN. **Note:**
There are two versions, 11.x or 12.x, for the different architectures (Turing, Maxwell, etc.) of GPUs.
You can find which version you should download from [this link](https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html).
11. Download the latest version and extract it from the download location
12. Find the bin folder E:\cudnn-windows-x86_64-__Whatever Version__\bin
13. Copy and paste the .dll files into YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib **Make sure to copy, and not move the files**
14. If prompted, replace any existing files
**Notes:**
* If no change is seen or any issues are encountered, follow the same steps as above and paste the torch/lib backup folder you made earlier and replace it. If you didn't make a backup, you can also uninstall and reinstall torch through the command line to repair this folder.
* This optimization is intended for newer graphics cards (40/30 series), but improvements have been seen with older graphics cards as well.
### Torch Installation
When installing torch and torchvision manually with `pip`, remember to provide
We highly recommend installing InvokeAI locally using [these instructions](INSTALLATION.md),
because Docker containers can not access the GPU on macOS.
!!! tip "For developers"
!!! warning "AMD GPU Users"
For container-related development tasks or for enabling easy
deployment to other environments (on-premises or cloud), follow these
instructions.
Container support for AMD GPUs has been reported to work by the community, but has not received
extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
use the `build.sh` script to build the image for this to take effect at build time.
For general use, install locally to leverage your machine's GPU.
!!! tip "Linux and Windows Users"
For optimal performance, configure your Docker daemon to access your machine's GPU.
Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
## Why containers?
They provide a flexible, reliable way to build and deploy InvokeAI.
See [Processes](https://12factor.net/processes) under the Twelve-Factor App
methodology for details on why running applications in such a stateless fashion is important.
The container is configured for CUDA by default, but can be built to support AMD GPUs
by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
Developers on Apple silicon (M1/M2): You
[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
@ -36,6 +37,16 @@ development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.
## TL;DR
This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
```bash
# docker compose commands should be run from the `docker` directory
cd docker
docker compose up
```
## Installation in a Linux container (desktop)
### Prerequisites
@ -58,222 +69,33 @@ a token and copy it, since you will need in for the next step.
### Setup
Set up your environment variables. In the `docker` directory, make a copy of `env.sample` and name it `.env`. Make changes as necessary.
!!! tip
    Any environment variables supported by InvokeAI can be set here - please see [CONFIGURATION](../features/CONFIGURATION.md) for further detail.
At a minimum, you might want to set the `INVOKEAI_ROOT` environment variable
to point to the location where you wish to store your InvokeAI models, configuration, and outputs.
| Variable | Default | Description |
| --- | --- | --- |
| `INVOKEAI_ROOT` | `~/invokeai` | **Required** - the location of your InvokeAI root directory. It will be created if it does not exist. |
| `HUGGING_FACE_HUB_TOKEN` | | InvokeAI will work without it, but some of the integrations with HuggingFace (like downloading models from private repositories) may not work |
| `GPU_DRIVER` | `cuda` | Optionally change this to `rocm` to build the image for AMD GPUs. NOTE: Use the `build.sh` script to build the image for this to take effect. |
#### Build the Image
Use the standard `docker compose build` command from within the `docker` directory.
If using an AMD GPU:

- Set the `GPU_DRIVER=rocm` environment variable in `docker-compose.yml` and continue using `docker compose build` as usual, or
- Set `GPU_DRIVER=rocm` in the `.env` file and use the `build.sh` script, provided for convenience.
#### Run the Container
Use the standard `docker compose up` command, and generally the `docker compose` [CLI](https://docs.docker.com/compose/reference/) as usual.
You can also do text-guided image-to-image translation. For example, turning a
sketch into a detailed drawing.
`strength` is a value between 0.0 and 1.0 that controls the amount of noise that
is added to the input image. Values that approach 1.0 allow for lots of
variations but will also produce images that are not semantically consistent
with the input. 0.0 preserves image exactly, 1.0 replaces it completely.
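One common way to think about `strength` is as the fraction of the denoising schedule that actually runs. This is a simplified sketch of typical img2img behavior, not a guarantee about InvokeAI's exact scheduler math:

```python
def img2img_denoising_steps(total_steps: int, strength: float) -> int:
    """strength=0.0 leaves the input untouched; strength=1.0 regenerates it
    from (almost) pure noise by running the full schedule."""
    return round(total_steps * strength)

assert img2img_denoising_steps(30, 0.0) == 0    # no noise added, image preserved
assert img2img_denoising_steps(30, 0.75) == 22  # heavy variation (round(22.5) == 22)
assert img2img_denoising_steps(30, 1.0) == 30   # input fully replaced
```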
Make sure your input image size dimensions are multiples of 64, e.g. 512x512.
Otherwise you'll get `Error: product of dimension sizes > 2**31`. If you still
get the error
[try a different size](https://support.apple.com/guide/preview/resize-rotate-or-flip-an-image-prvw2015/mac#:~:text=image's%20file%20size-,In%20the%20Preview%20app%20on%20your%20Mac%2C%20open%20the%20file,is%20shown%20at%20the%20bottom.)
like 512x256.
If you're on a Docker container, copy your input image into the Docker volume
Once the container starts up (and configures the InvokeAI root directory if this is a new installation), you can access InvokeAI at [http://localhost:9090](http://localhost:9090)
We offer a method for creating Docker containers containing InvokeAI and its dependencies. This method is recommended for individuals who have experience with Docker containers and understand the pluses and minuses of a container-based install.
## Other Installation Guides
- [PyPatchMatch](060_INSTALL_PATCHMATCH.md)
- [XFormers](070_INSTALL_XFORMERS.md)
- [CUDA and ROCm Drivers](030_INSTALL_CUDA_AND_ROCM.md)
The workflow editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. Nodes take in inputs on the left side of the node, and return an output on the right side of the node. A node graph is composed of multiple nodes that are connected together to create a workflow. Nodes' inputs and outputs are connected by dragging connectors from node to node. Inputs and outputs are color coded for ease of use.
To better understand how nodes are used, think of how an electric power bar works. It takes in one input (electricity from a wall outlet) and passes it to multiple devices through multiple outputs. Similarly, a node could have multiple inputs and outputs functioning at the same (or different) time, but all node outputs pass information onward like a power bar passes electricity. Not all outputs are compatible with all inputs, however - Each node has different constraints on how it is expecting to input/output information. In general, node outputs are colour-coded to match compatible inputs of other nodes.
If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Workflow Editor and build workflows to suit your needs.
## Features
### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
To add an input to the Linear UI, right click on the input label and select "Add to Linear View".
The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enabling others to use them, regardless of complexity.

### Renaming Fields and Nodes
Any node or input field can be renamed in the workflow editor. If the input field you have renamed has been added to the Linear View, the changed name will be reflected in the Linear View and the node.
### Managing Nodes
* Ctrl+C to copy a node
* Ctrl+V to paste a node
* Backspace/Delete to delete a node
* Shift+Click to drag and select multiple nodes
### Node Caching
Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing.
## Important Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
### ControlNet
The ControlNet node outputs a Control, which can be provided as input to a Denoise Latents node. Depending on the type of ControlNet desired, ControlNet nodes usually require an image processor node, such as a Canny Processor or Depth Processor, which prepares an input image for use with ControlNet.
### Batch / Multiple Image Generation + Random Seeds
Batch or multiple image generation in the workflow editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate, meaning this example will generate 4 images. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection. This noise can then be fed to the Denoise Latents node for it to iterate through the denoising process with the different seeds provided.
To control seeds across generations takes some care. The first row in the screenshot will generate multiple images with different seeds, but using the same RandomRange parameters across invocations will result in the same group of random seeds being used across the images, producing repeatable results. In the second row, adding the RandomInt node as input to RandomRange's 'Seed' edge point will ensure that seeds are varied across all images across invocations, producing varied results.
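The seed behavior can be pictured with a small sketch (purely illustrative, not the nodes' actual code): a fixed seed reproduces the same batch of seeds on every invocation, while feeding in a fresh random integer varies them:

```python
import random

def random_range(low: int, high: int, size: int, seed: int) -> list[int]:
    """Mimics RandomRange: a collection of `size` seeds from a seeded RNG."""
    rng = random.Random(seed)
    return [rng.randint(low, high) for _ in range(size)]

# Fixed seed: the same 4 seeds on every invocation (repeatable results)
print(random_range(0, 2**31 - 1, size=4, seed=123))

# A RandomInt feeding the seed: a different batch each time (varied results)
print(random_range(0, 2**31 - 1, size=4, seed=random.randint(0, 2**31 - 1)))
```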
These are nodes that have been developed by the community, for the community.
If you'd like to submit a node for the community, please refer to the [node creation overview](contributingNodes.md).
To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations` folder in your Invoke AI install location. If you used the automated installation, this can be found inside the `.venv` folder. Along with the node, an example node graph should be provided to help you get started with the node.
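For orientation, a minimal custom node looks roughly like the sketch below. The decorator, field helper, and import paths here are assumptions that vary between InvokeAI versions, so check a bundled node file in `invokeai/app/invocations` for the exact API before copying this:

```python
# Hypothetical minimal node - verify import paths and helper names against
# an existing invocation file in your InvokeAI version.
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import StringOutput

@invocation("uppercase_prompt", title="Uppercase Prompt", version="1.0.0")
class UppercasePromptInvocation(BaseInvocation):
    """Outputs the input prompt in uppercase."""

    prompt: str = InputField(default="", description="The prompt to transform")

    def invoke(self, context: InvocationContext) -> StringOutput:
        return StringOutput(value=self.prompt.upper())
```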
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
## Community Nodes
### FaceTools
**Description:** FaceTools is a collection of nodes created to manipulate faces as you would in Unified Canvas. It includes FaceMask, FaceOff, and FacePlace. FaceMask autodetects a face in the image using MediaPipe and creates a mask from it. FaceOff similarly detects a face, then takes the face off of the image by adding a square bounding box around it and cropping/scaling it. FacePlace puts the bounded face image from FaceOff back onto the original image. Using these nodes with other inpainting node(s), you can put new faces on existing things, put new things around existing faces, and work closer with a face as a bounded image. Additionally, you can supply X and Y offset values to scale/change the shape of the mask for finer control on FaceMask and FaceOff. See GitHub repository below for usage examples.
- [Image and Mask Composition Pack](#image-and-mask-composition-pack)
- [Image to Character Art Image Nodes](#image-to-character-art-image-nodes)
- [Image Picker](#image-picker)
- [Load Video Frame](#load-video-frame)
- [Make 3D](#make-3d)
- [Oobabooga](#oobabooga)
- [Prompt Tools](#prompt-tools)
- [Retroize](#retroize)
- [Size Stepper Nodes](#size-stepper-nodes)
- [Text font to Image](#text-font-to-image)
- [Thresholding](#thresholding)
- [XY Image to Grid and Images to Grids nodes](#xy-image-to-grid-and-images-to-grids-nodes)
- [Example Node Template](#example-node-template)
- [Disclaimer](#disclaimer)
- [Help](#help)
--------------------------------
### Depth Map from Wavefront OBJ
**Description:** Render depth maps from Wavefront .obj files (triangulated) using this simple 3D renderer utilizing numpy and matplotlib to compute and color the scene. There are simple parameters to change the FOV, camera position, and model orientation.
To be imported, an .obj must use triangulated meshes, so make sure to enable that option if exporting from a 3D modeling program. This renderer makes each triangle a solid color based on its average depth, so it will cause anomalies if your .obj has large triangles. In Blender, the Remesh modifier can be helpful to subdivide a mesh into small pieces that work well given these limitations.
--------------------------------
### Film Grain
**Description:** This node adds a film grain effect to the input image based on the weights, seeds, and blur radii parameters. It works with RGB input images only.
--------------------------------
### Generative Grammar-Based Prompt Nodes
**Description:** This set of 3 nodes generates prompts from simple user-defined grammar rules (loaded from custom files - examples provided below). The prompts are made by recursively expanding a special template string, replacing nonterminal "parts-of-speech" until no nonterminal terms remain in the string.
This includes 3 nodes:
- *Lookup Table from File* - loads the "prompt" section of a YAML file (or of a whole folder of YAMLs) into a JSON-ified dictionary (Lookups output)
- *Lookups Entry from Prompt* - places a single entry in a new Lookups output under the specified heading
- *Prompt from Lookup Table* - uses a Collection of Lookups as grammar rules from which to randomly generate prompts (see the sketch after this list)
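To illustrate the recursive expansion these nodes perform, here is a toy sketch with made-up rules (not the nodes' actual code):

```python
import random

# Toy grammar: nonterminals in braces are expanded until none remain
RULES = {
    "{creature}": ["a {adj} dragon", "a {adj} sprite"],
    "{adj}": ["glittering", "ancient", "tiny"],
}

def expand(template: str, rng: random.Random) -> str:
    """Recursively replace nonterminals with randomly chosen expansions."""
    while any(nt in template for nt in RULES):
        for nt, options in RULES.items():
            if nt in template:
                template = template.replace(nt, rng.choice(options), 1)
    return template

print(expand("{creature} in a moonlit forest", random.Random(42)))
```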
--------------------------------
### Halftone
**Description:** Halftone converts the source image to grayscale and then performs halftoning. CMYK Halftone converts the image to CMYK and applies a per-channel halftoning to make the source image look like a magazine or newspaper. For both nodes, you can specify angles and halftone dot spacing.
--------------------------------
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
--------------------------------
### Image and Mask Composition Pack
**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate centered on different parts of the seamless tiling (see the sketch after this list).
This includes 15 Nodes:
- *Adjust Image Hue Plus* - Rotate the hue of an image in one of several different color spaces.
- *Blend Latents/Noise (Masked)* - Use a mask to blend part of one latents tensor [including Noise outputs] into another. Can be used to "renoise" sections during a multi-stage [masked] denoising process.
- *Enhance Image* - Boost or reduce color saturation, contrast, brightness, sharpness, or invert colors of any image at any stage with this simple wrapper for pillow [PIL]'s ImageEnhance module.
- *Equivalent Achromatic Lightness* - Calculates image lightness accounting for Helmholtz-Kohlrausch effect based on a method described by High, Green, and Nussbaum (2023).
- *Text to Mask (Clipseg)* - Input a prompt and an image to generate a mask representing areas of the image matched by the prompt.
- *Text to Mask Advanced (Clipseg)* - Output up to four prompt masks combined with logical "and", logical "or", or as separate channels of an RGBA image.
- *Image Layer Blend* - Perform a layered blend of two images using alpha compositing. Opacity of top layer is selectable, with optional mask and several different blend modes/color spaces.
- *Image Compositor* - Take a subject from an image with a flat backdrop and layer it on another image using a chroma key or flood select background removal.
- *Image Dilate or Erode* - Dilate (expand) or erode (contract) a mask, or any image. This is equivalent to an expand/contract operation.
- *Image Value Thresholds* - Clip an image to pure black/white beyond specified thresholds.
- *Offset Latents* - Offset a latents tensor in the vertical and/or horizontal dimensions, wrapping it around.
- *Offset Image* - Offset an image in the vertical and/or horizontal dimensions, wrapping it around.
- *Rotate/Flip Image* - Rotate an image in degrees clockwise/counterclockwise about its center, optionally resizing the image boundaries to fit, or flipping it about the vertical and/or horizontal axes.
- *Shadows/Highlights/Midtones* - Extract three masks (with adjustable hard or soft thresholds) representing shadows, midtones, and highlights regions of an image.
- *Text Mask (simple 2D)* - create and position a white on black (or black on white) line of text using any font locally available to Invoke.
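The wrap-around offsets are essentially a roll operation; a conceptual numpy sketch (not the pack's actual code):

```python
import numpy as np

def offset_wrap(image: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Shift an HxWxC image right by dx and down by dy, wrapping at the edges."""
    return np.roll(image, shift=(dy, dx), axis=(0, 1))
```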
--------------------------------
### Image Picker
**Description:** This InvokeAI node takes in a collection of images and randomly chooses one. This can be useful when you have a number of poses to choose from for a ControlNet node, or a number of input images for another purpose.
--------------------------------
### Load Video Frame
**Description:** This is a video frame image provider + indexer/video creation nodes for hooking up to iterators and ranges and ControlNets and such, for InvokeAI node experimentation. Think animation + ControlNet outputs.
"describe a new mystical creature in its natural environment"
"The mystical creature I am describing to you is called the "Glimmerwing". It is a majestic, iridescent being that inhabits the depths of the most enchanted forests and glimmering lakes. Its body is covered in shimmering scales that reflect every color of the rainbow, and it has delicate, translucent wings that sparkle like diamonds in the sunlight. The Glimmerwing's home is a crystal-clear lake, surrounded by towering trees with leaves that shimmer like jewels. In this serene environment, the Glimmerwing spends its days swimming gracefully through the water, chasing schools of glittering fish and playing with the gentle ripples of the lake's surface.
As the sun sets, the Glimmerwing perches on a branch of one of the trees, spreading its wings to catch the last rays of light. The creature's scales glow softly, casting a rainbow of colors across the forest floor. The Glimmerwing sings a haunting melody, its voice echoing through the stillness of the night air. Its song is said to have the power to heal the sick and bring peace to troubled souls. Those who are lucky enough to hear the Glimmerwing's song are forever changed by its beauty and grace."
This node requires a Text-Generation-Webui instance (it might work remotely too, but I never tried it).
**Note**
This node works best with SDXL models, especially as the style can be described independently of the LLM's output.
--------------------------------
### Prompt Tools
**Description:** A set of InvokeAI nodes that add general prompt manipulation tools. These were written to accompany the PromptsFromFile node and other prompt generation nodes.
1. PromptJoin - Joins two prompts into one.
2. PromptReplace - Performs a search and replace on a prompt, with the option of using regex.
3. PromptSplitNeg - Splits a prompt into positive and negative using the old V2 method of [] for negative.
4. PromptToFile - Saves a prompt or collection of prompts to a file, one per line. There is an append/overwrite option.
5. PTFieldsCollect - Converts image generation fields into a JSON string that can be passed to PromptToFile.
6. PTFieldsExpand - Takes a JSON string and converts it to individual generation parameters. This can be fed from the PromptToFile node.
7. PromptJoinThree - Joins three prompts together.
8. PromptStrength - Takes a string and a float and outputs another string in the format of (string)strength, like the weighted format of compel (see the sketch below).
9. PromptStrengthCombine - Takes a collection of prompt strength strings and outputs a string in the .and() or .blend() format that can be fed into a proper prompt node.
See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/main/README.md
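As a quick illustration of the string formats these nodes produce (a sketch with hypothetical helper names; check compel's documentation for the exact accepted syntax):

```python
def prompt_strength(prompt: str, strength: float) -> str:
    """PromptStrength-style output, e.g. (a castle)1.2"""
    return f"({prompt}){strength}"

def combine_and(weighted_prompts: list[str]) -> str:
    """PromptStrengthCombine-style .and() output."""
    inner = ", ".join(f'"{p}"' for p in weighted_prompts)
    return f"({inner}).and()"

# -> ("(a castle)1.2", "(at night)0.8").and()
print(combine_and([prompt_strength("a castle", 1.2),
                   prompt_strength("at night", 0.8)]))
```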
--------------------------------
### Retroize
**Description:** Retroize is a collection of nodes for InvokeAI to "Retroize" images. Any image can be given a fresh coat of retro paint with these nodes, either from your gallery or from within the graph itself. It includes nodes to pixelize, quantize, palettize, and ditherize images; as well as to retrieve palettes from existing images.
--------------------------------
### Size Stepper Nodes
**Description:** This is a set of nodes for calculating the necessary size increments for doing upscaling workflows. Use the *Final Size & Orientation* node to enter your full size dimensions and orientation (portrait/landscape/random), then plug that and your initial generation dimensions into the *Ideal Size Stepper* and get 1, 2, or 3 intermediate pairs of dimensions for upscaling. Note this does not output the initial size or full size dimensions: the 1, 2, or 3 outputs of this node are only the intermediate sizes.
A third node is included, *Random Switch (Integers)*, which is just a generic version of Final Size with no orientation selection.
--------------------------------
### Text font to Image
**Description:** A text-font-to-image node for InvokeAI. Download a font to use (or, if it is already in the font cache, it is used from there). The text is always resized to the image size, but this can be controlled with padding; an optional second line is supported.
--------------------------------
### Thresholding
**Description:** This node generates masks for highlights, midtones, and shadows given an input image. You can optionally specify a blur for the lookup table used in making those masks from the source image.
--------------------------------
### XY Image to Grid and Images to Grids nodes
**Description:** Image to grid nodes and supporting tools (see the sketch after this entry).
1. "Images To Grids" node - Takes a collection of images and creates a grid(s) of images. If there are more images than the size of a single grid then multiple grids will be created until it runs out of images.
2. "XYImage To Grid" node - Converts a collection of XYImages into a labeled Grid of images. The XYImages collection has to be built using the supporting nodes. See example node setups for more details.
See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/README.md
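For a rough idea of what grid assembly involves, here is a minimal Pillow sketch (not the nodes' actual implementation):

```python
from PIL import Image

def make_grid(images: list[Image.Image], cols: int) -> Image.Image:
    """Paste equally sized images into a cols-wide grid, filling row by row."""
    w, h = images[0].size
    rows = -(-len(images) // cols)  # ceiling division
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(images):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid
```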
To learn about the specifics of creating a new node, please visit our Node creation documentation.
Once you’ve created a node and confirmed that it behaves as expected locally, follow these steps:
- Make sure the node is contained in a new Python (.py) file. Preferably, the node is in a repo with a README detailing the node's usage & examples to help others more easily use your node.
- Submit a pull request with a link to your node(s) repo in GitHub against the `main` branch to add the node to the [Community Nodes](communityNodes.md) list
- Make sure you are following the template below and have provided all relevant details about the node and what it does. Example output images and workflows are very helpful for other users looking to use your node.
- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you may be asked for permission to include it in the core project.
| Input | Description |
| ------ | --------------------------------- |
| Face ID | The face ID to process, numbered from 0. Multiple faces not supported. Find a face's ID with FaceIdentifier node. |
| Minimum Confidence | Minimum confidence for face detection (lower if detection is failing) |
| X Offset | X-axis offset of the mask |
| Y Offset | Y-axis offset of the mask |
| Padding | All-axis padding around the mask in pixels |
| Chunk | Chunk (or divide) the image into sections to greatly improve face detection success. Defaults to off, but will activate if no faces are detected normally. Activate to chunk by default. |

| Input | Description |
| ------ | --------------------------------- |
| Face IDs | Comma-separated list of face ids to mask eg '0,2,7'. Numbered from 0. Leave empty to mask all. Find face IDs with FaceIdentifier node. |
| Minimum Confidence | Minimum confidence for face detection (lower if detection is failing) |
| X Offset | X-axis offset of the mask |
| Y Offset | Y-axis offset of the mask |
| Chunk | Chunk (or divide) the image into sections to greatly improve face detection success. Defaults to off, but will activate if no faces are detected normally. Activate to chunk by default. |
| Invert Mask | Toggle to invert the face mask |
| Output | Description |
| ------ | --------------------------------- |
| Image | The original image |
| Width | The width of the image in pixels |
| Height | The height of the image in pixels |
| Mask | The output face mask |
## FaceIdentifier
FaceIdentifier outputs an image with detected face IDs printed in white numbers
onto each face.
Face IDs can then be used in FaceMask and FaceOff to selectively mask all, a
specific combination, or single faces.
The FaceIdentifier output image is generated for user reference, and isn't meant
to be passed on to other image-processing nodes.
The "Minimum Confidence" input defaults to 0.5 (50%), and represents a pass/fail
threshold a detected face must reach for it to be processed. Lowering this value
may help if detection is failing. If an image is changed in the slightest, run
it through FaceIdentifier again to get updated FaceIDs.
| Input | Description |
| ------ | --------------------------------- |
| Minimum Confidence | Minimum confidence for face detection (lower if detection is failing) |
| Chunk | Chunk (or divide) the image into sections to greatly improve face detection success. Defaults to off, but will activate if no faces are detected normally. Activate to chunk by default. |
TODO: Will update once uploading workflows is available.
We've curated some example workflows for you to get started with Workflows in InvokeAI!
To use them, right click on your desired workflow, press "Download Linked File". You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!
* [SD1.5 / SD2 Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Text_to_Image.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL (with Refiner) Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [Tiled Upscaling with ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/ESRGAN_img2img_upscale w_Canny_ControlNet.json)

If you're interested in finding more workflows, check out the [#share-your-workflows](https://discord.com/channels/1020123559063990373/1130291608097661000) channel in the InvokeAI Discord.
seterr_msg=No python was detected on your system. Please install Python version %MINIMUM_PYTHON_VERSION% or higher. We recommend Python 3.10.12 from %PYTHON_URL%
seterr_msg=Your version of Python is too low. You need at least %MINIMUM_PYTHON_VERSION% but you have %python_version%. We recommend Python 3.10.12 from %PYTHON_URL%