## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No - will be in release notes
## Description
On CUDA systems, this PR adds a new slider to the install-time configure
script for adjusting the VRAM cache and suggests a good starting value
based on the user's max VRAM (this is subject to verification).
On non-CUDA systems this slider is suppressed.
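A minimal sketch of the kind of heuristic involved (the helper name and the 25% fraction below are illustrative assumptions, not the script's actual values):
```
import torch

def suggest_vram_cache_gb() -> float:
    """Suggest a starting VRAM cache size in GB (hypothetical helper)."""
    if not torch.cuda.is_available():
        return 0.0  # the slider is suppressed on non-CUDA systems
    total_gb = torch.cuda.get_device_properties(0).total_memory / 2**30
    # leave most of the card free for the model that is actively executing
    return round(total_gb * 0.25, 2)
```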
Please test on both CUDA and non-CUDA systems using:
```
invokeai-configure --root ~/invokeai-main/ --skip-sd --skip-support
```
To see and test the default values, move `invokeai.yaml` out of the way
before running.
**Note added 8 August 2023**
This PR also fixes the configure and model install scripts so that if
the window is too small to fit the user interface, the user will be
prompted to interactively resize the window and/or change font size
(with the option to give up). This will prevent `npyscreen` from
generating its horrible tracebacks.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
If `models.yaml` is cleared out for some reason, the model manager will
repopulate it by scanning the `models` directory. However, this would fail with a
pydantic validation error if any SDXL checkpoint models were present,
because there was no logic to pick the correct configuration file. That
logic has now been added.
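A rough sketch of the missing selection logic (the mapping below is illustrative; the real model manager keys this off its own base-model enums and config directory layout):
```
from pathlib import Path

# Illustrative mapping only - the real code uses the model manager's enums.
LEGACY_CONFIGS = {
    "sd-1": "v1-inference.yaml",
    "sd-2": "v2-inference-v.yaml",
    "sdxl": "sd_xl_base.yaml",
    "sdxl-refiner": "sd_xl_refiner.yaml",
}

def config_for_checkpoint(base_model: str, config_dir: Path) -> Path:
    """Pick the legacy .yaml config matching a checkpoint's base model."""
    try:
        return config_dir / LEGACY_CONFIGS[base_model]
    except KeyError:
        raise ValueError(f"no known config file for base model {base_model!r}")
```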
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [X] No, because small fix
## Have you updated all relevant documentation?
- [X] Yes
## Description
A logic bug was introduced in PR #4109 that caused Web-based model
updates to fail with a pydantic validation error. This corrects the
problem.
## Related Tickets & Documents
PR #4109
* Fix hue adjustment
Hue adjustment wasn't working correctly because color channels got swapped. This has now been fixed and we're using PIL rather than cv2 to do the RGBA->HSV->RGBA conversion. The range of hue adjustment is also the more typical 0..360 degrees.
`ApiDependencies.invoker` provides typing for the API's services layer. Marking it `Optional` results in all the routes seeing it as optional, which is not good.
Instead of marking it optional to satisfy the initial assignment to `None`, we can just skip the initial assignment. This preserves the IDE hinting in the API layer and keeps the types legal.
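In plain Python terms the pattern looks roughly like this (`Invoker` here is a stand-in for the real services-layer type):
```
class Invoker:  # stand-in for the real services-layer type
    ...

class ApiDependencies:
    # A bare class-level annotation: routes see `invoker` as a concrete
    # Invoker (not Optional[Invoker]), while the attribute is still assigned
    # later, during app startup.
    invoker: Invoker
```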
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
At install time, when the user's config specified "auto" precision, the
installer was downloading the fp32 models even when an fp16 model would
be appropriate for the OS.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Closes #4127
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No
## Description
Add lora loading for sdxl.
NOT TESTED - I have only run 2 LoRAs; please check more (including LyCORIS, if
they already exist).
## QA Instructions, Screenshots, Recordings
https://civitai.com/models/118536/voxel-xl

## Added/updated tests?
- [ ] Yes
- [x] No
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This PR adds execution time and VRAM usage reporting to each graph
invocation. The log output will look like this:
```
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> Graph stats: c7764585-9c68-4d9d-a199-55e8186790f3
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> Node Calls Seconds VRAM Used
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> main_model_loader 1 0.005s 0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> clip_skip 1 0.004s 0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> compel 2 0.512s 0.26G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> rand_int 1 0.001s 0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> range_of_size 1 0.001s 0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> iterate 1 0.001s 0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> metadata_accumulator 1 0.002s 0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> noise 1 0.002s 0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> t2l 1 3.541s 1.93G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> l2i 1 0.679s 0.58G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> TOTAL GRAPH EXECUTION TIME: 4.749s
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> Current VRAM utilization 0.01G
```
On systems without CUDA, the VRAM stats are not printed.
The current implementation keeps track of graph ids separately so will
not be confused when several graphs are executing in parallel. It
handles exceptions, and it is integrated into the app framework by
defining an abstract base class and storing an implementation instance
in `InvocationServices`.
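A rough sketch of that shape (the method names are illustrative, not the exact API):
```
from abc import ABC, abstractmethod

class InvocationStatsServiceBase(ABC):
    """Abstract interface for per-node execution statistics."""

    @abstractmethod
    def collect_stats(self, graph_id: str, node_type: str, seconds: float, vram_gb: float) -> None:
        """Record one node invocation for the given graph."""

    @abstractmethod
    def log_stats(self, graph_id: str) -> None:
        """Emit the accumulated report for this graph and reset its state."""
```
A concrete implementation is then stored on `InvocationServices`, so the graph processor can report timings without caring which implementation is in use.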
multi-select actions include:
- drag to board to move all to that board
- right click to add all to board or delete all
backend changes:
- add routes for changing board for list of image names, deleting list of images
- change image-specific routes to `images/i/{image_name}` to not clobber other routes (like `images/upload`, `images/delete`)
- subclass pydantic `BaseModel` as `BaseModelExcludeNull`, which excludes null values when calling `dict()` on the model. This fixes inconsistent types caused by JSON parsing null values into `null` instead of `undefined` (a minimal sketch follows this list)
- remove `board_id` from `remove_image_from_board`
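A minimal sketch of the `BaseModelExcludeNull` idea (pydantic v1 style, which is what the app used at this point):
```
from pydantic import BaseModel

class BaseModelExcludeNull(BaseModel):
    """Drop None-valued fields from dict()/json() output, so the frontend
    sees absent keys (undefined) rather than null."""

    def dict(self, **kwargs):
        kwargs.setdefault("exclude_none", True)
        return super().dict(**kwargs)
```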
frontend changes:
- multi-selection stuff uses `ImageDTO[]` as payloads, for dnd and other mutations. this gives us access to image `board_id`s when hitting routes, and enables efficient cache updates.
- consolidate change board and delete image modals to handle single and multiples
- board totals are now re-fetched on mutation and not kept in sync manually - was way too tedious to do this
- fixed warning about nested `<p>` elements
- closes #4088: need to handle the case when `autoAddBoardId` is `"none"`
- add option to show gallery image delete button on every gallery image
frontend refactors/organisation:
- make typegen script js instead of ts
- enable `noUncheckedIndexedAccess` to help avoid bugs when indexing into arrays, many small changes needed to satisfy TS after this
- move all image-related endpoints into `endpoints/images.ts`; it's a big file now, but this fixes a number of circular dependency issues that otherwise felt impossible to resolve
Currently we use some workflow trigger conditionals to run either a real test workflow (installing the app and running it) or a fake workflow, disguised as the real one, that just auto-passes.
This change refactors this into a single workflow that can be skipped, using another GitHub action to determine which things to run depending on the paths changed.
## What type of PR is this? (check all applicable)
- [x] Refactor
## Have you discussed this change with the InvokeAI team?
- [x] No, because it's pretty minor
## Have you updated all relevant documentation?
- [x] No
## Description
This PR just moves the PR template to within the `.github/` directory,
leading to an overall more minimal project structure.
## Added/updated tests?
- [x] No : because this change doesn't affect or need a separate test
- Create abstract base class InvocationStatsServiceBase
- Store InvocationStatsService in the InvocationServices object
- Collect and report stats on simultaneous graph execution
independently for each graph id
- Track VRAM usage for each node
- Handle cancellations and other exceptions gracefully
## What type of PR is this? (check all applicable)
- [X] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: invisible change
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
There was a problem in 3.0.1 with root resolution. If `INVOKEAI_ROOT` were
set to "." (or any relative path), then the location of the root would
change if the code did an `os.chdir()` after config initialization. I
fixed this in a quick and dirty way for 3.0.1.post3.
This PR cleans up the code with a little refactoring.
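Conceptually, the cleanup amounts to pinning the root to an absolute path once, at configuration time (a sketch under that assumption, not the literal code):
```
import os
from pathlib import Path
from typing import Optional

def resolve_root(cli_root: Optional[str] = None) -> Path:
    """Resolve the InvokeAI root exactly once, at config initialization.

    Converting to an absolute path here means a later os.chdir() can no
    longer silently change what a relative INVOKEAI_ROOT like "." points to.
    """
    root = cli_root or os.environ.get("INVOKEAI_ROOT", "~/invokeai")
    return Path(root).expanduser().resolve()
```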
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: obvious problem
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
The manual installation documentation in both README.md and
020_MANUAL_INSTALL give an incomplete `invokeai-configure` command which
leaves out the path to the root directory to create. As a result, the
invokeai root directory gets created in the user’s home directory, even
if they intended it to be placed somewhere else.
This is a fairly important issue.
## What type of PR is this? (check all applicable)
- [x] Refactor
- [x] Feature
- [x] Bug Fix
- [?] Optimization
## Have you discussed this change with the InvokeAI team?
- [x] No
## Description
- Fixed filter type select using `images` instead of `all` -- Probably
some merge conflict.
- Added loading state for model lists. Handy when the model list takes
longer than a second to fetch for any reason. Better to show this than
an empty screen.
- Refactored the render model list function so we modify the display
component in one area rather than have repeated code.
### Other Issues
- The list can get a bit laggy on initial load when you have hundreds
of models / LoRAs. This needs to be fixed. Will make another PR for
this.
## What type of PR is this? (check all applicable)
- [x] Refactor
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: trivial
## Description
Adds a few obviously missing `Optional` on fields that default to
`None`.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: Just a documentation update
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Updated documentation with a getting started guide & a glossary of terms
needed to get started
Updated the landing page flow for users
<img width="1430" alt="Screenshot 2023-07-27 at 9 53 25 PM"
src="https://github.com/invoke-ai/InvokeAI/assets/7254508/d0006ba7-2ed4-4044-a1bc-ca9a99df9397">
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
This is a relatively stable release that corrects the urgent Windows
install and model manager problems in 3.0.1. It still has two known
bugs:
1. Many inpainting models are not loading correctly.
2. The merge script is failing to start.
- Remove FaceMask and add a link to the FaceTools repository, which includes FaceMask, FaceOff, and FacePlace
- Move Ideal Size up from under the template entry
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [X] Yes - bug discovered by jpphoto
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] Not needed
## Description
The user can customize the location of the models directory by setting the
configuration variable `models_dir`. However, the model manager and the
TUI installer were both treating model relative paths as relative to the
InvokeAI root rather than the designated models directory. This has been
fixed by changing path resolution calls from using `config.root_path` to
`config.models_path`.
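The essence of the change is which base directory relative model paths are resolved against; a sketch (the function and `relative_model_path` parameter are illustrative, not the literal code):
```
from pathlib import Path

def resolve_model_path(config, relative_model_path: str) -> Path:
    """Resolve a model's stored relative path against the models directory.

    Previously this used config.root_path, which ignored a customized
    models_dir; it now uses config.models_path.
    """
    return Path(config.models_path) / relative_model_path
```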
Unfortunately there were many instances that needed replacement, so this
needs a bit of functional testing - try adding models, removing models,
renaming them, converting checkpoints, etc.
## What type of PR is this? (check all applicable)
- [X] Optimization
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This PR does two things:
1. If the environment variable `INVOKEAI_ROOT` is defined at install time,
the zipfile installer will default to its value when asking the user
where to install the software.
2. If the user has more than 72 models of any type installed, then the
list will be truncated in the TUI and the user given a warning. Anything
larger than this number of models causes the vertical space to overflow.
The only effect of truncation is that the user will not be able to see
and delete the models that were truncated. The message advises the user
to go to the Web Model Manager interface in this event.
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
This PR fixes several issues with the 3.0.0 conversion script:
- Handles checkpoint variants that don't put dots between fields in the
long state dict key names
- Handles ema, non-ema, pruned and non-pruned ckpts
- Does not add safety checker to converted checkpoints
- Respects precision of original checkpoint file
## What type of PR is this? (check all applicable)
- [X] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] Not needed
## Description
Windows users have been getting a lot of OSErrors while installing 3.0.1
during the pip dependency installation phase. Generally the errors have
involved just two packages, pydantic and numpy. Looking at the install
logs, I see that both of these packages are first installed under one
version number by a dependency, and then uninstalled and replaced by a
slightly different version specified in invoke's `pyproject.toml`. I
think this is the problem - maybe the earlier package is not completely
closed before it is uninstalled and reinstalled.
This PR relaxes the pinning of numpy and pydantic in `pyproject.toml`.
Everything seems to install and run properly. Hopefully it will address
the Windows install bug as well.
## What type of PR is this? (check all applicable)
- [x] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [x] Yes
## Description
- SDXL Metadata was not being retrieved. This PR fixes it.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:
not yet; making a PR to show it
## Have you updated relevant documentation?
- [ ] Yes
- [ ] No
## Description
Temporarily change the Node String input to a textbox, to allow easier input
of prompts and larger strings. It works for me, but please tell me if I did
it wrong and whether the size is OK.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: minor fix, let me know your thoughts
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue # https://github.com/invoke-ai/InvokeAI/issues/4017
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : Requires mps device
## [optional] Are there any post deployment tasks we need to perform?
Please test on an MPS (M1/M2) device.
Relevant code causing the error in #4017: `01b6ec21fa/src/diffusers/schedulers/scheduling_euler_discrete.py` (L263C3-L268C75)
```
self.sigmas = torch.from_numpy(sigmas).to(device=device)
if str(device).startswith("mps"):
    # mps does not support float64
    self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32)
else:
    self.timesteps = torch.from_numpy(timesteps).to(device=device)
```
## What type of PR is this? (check all applicable)
- [x] Bug Fix
## Description
- Fix SDXL Concat Link animation not considering the fact that prompt
boxes can be resized.
- Also fixed a minor issue where the overlaying animation box would
block the prompt input resize slightly. Should be good now.
## What type of PR is this? (check all applicable)
- [X] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [X] Yes
## Description
Added solutions for installation issues related to large SDXL files and
Windows dependency glitches.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
Makes the prompt area styling match across all tabs / models and moves
all prompt-related components into a parent component for quick addition.
Cherry-picked stuff from the Styles PR because we aren't going to merge that.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
- make the `SDXLConcatLink` icon match existing colors in light mode
- make the link toggle button the accent color when active (it's not super obvious but at least there is *some* visual difference for the button)
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [X] Yes - this makes invokeai behave the way it is described in
LOGGING.md
## Description
Prior to this PR, the uvicorn embedded web server did all its logging
independently of the InvokeAI logging infrastructure, and none of the
command-line or `invokeai.yaml` configuration directives, such as
`log_level`, had any effect on its output. This PR replaces the uvicorn
logger with InvokeAI's, simultaneously creating a more uniform output
experience, as well as a unified way to control the amount and
destination of the logs.
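Mechanically, this boils down to pointing uvicorn's loggers at InvokeAI's handlers so both streams share one format, level, and destination (a sketch using only the stdlib `logging` module, not the exact code):
```
import logging

def route_uvicorn_through(invokeai_logger: logging.Logger) -> None:
    """Make uvicorn's loggers emit through InvokeAI's handlers and level."""
    for name in ("uvicorn", "uvicorn.access", "uvicorn.error"):
        uv_logger = logging.getLogger(name)
        uv_logger.handlers = invokeai_logger.handlers  # same destinations/format
        uv_logger.setLevel(invokeai_logger.level)      # honor log_level from invokeai.yaml
        uv_logger.propagate = False                    # avoid double-printing records
```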
Here's what we used to see at startup:
```
[2023-07-27 07:29:48,027]::[InvokeAI]::INFO --> InvokeAI version 3.0.1rc2
[2023-07-27 07:29:48,027]::[InvokeAI]::INFO --> Root directory = /home/lstein/invokeai-main
[2023-07-27 07:29:48,028]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 4070
[2023-07-27 07:29:48,040]::[InvokeAI]::INFO --> Scanning /home/lstein/invokeai-main/models for new models
[2023-07-27 07:29:49,263]::[InvokeAI]::INFO --> Scanned 22 files and directories, imported 10 models
[2023-07-27 07:29:49,271]::[InvokeAI]::INFO --> Model manager service initialized
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)
INFO: 127.0.0.1:44404 - "GET /socket.io/?EIO=4&transport=polling&t=OcN7Pvd HTTP/1.1" 200 OK
INFO: 127.0.0.1:44404 - "POST /socket.io/?EIO=4&transport=polling&t=OcN7Pw6&sid=SB-NsBKLSrW7YtM0AAAA HTTP/1.1" 200 OK
INFO: ('127.0.0.1', 44418) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=SB-NsBKLSrW7YtM0AAAA" [accepted]
INFO: connection open
INFO: 127.0.0.1:44430 - "GET /socket.io/?EIO=4&transport=polling&t=OcN7Pw9&sid=SB-NsBKLSrW7YtM0AAAA HTTP/1.1" 200 OK
INFO: 127.0.0.1:44404 - "GET /socket.io/?EIO=4&transport=polling&t=OcN7PwU&sid=SB-NsBKLSrW7YtM0AAAA HTTP/1.1" 200 OK
INFO: 127.0.0.1:44404 - "GET /api/v1/images/?is_intermediate=true HTTP/1.1" 200 OK
INFO: 127.0.0.1:43448 - "GET / HTTP/1.1" 200 OK
INFO: connection closed
INFO: 127.0.0.1:43448 - "GET /assets/index-5a784cdd.js HTTP/1.1" 200 OK
INFO: 127.0.0.1:43458 - "GET /assets/favicon-0d253ced.ico HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:43448 - "GET /locales/en.json HTTP/1.1" 200 OK
```
And here's what we see with the new implementation:
```
[2023-07-27 12:05:28,810]::[uvicorn.error]::INFO --> Started server process [875161]
[2023-07-27 12:05:28,810]::[uvicorn.error]::INFO --> Waiting for application startup.
[2023-07-27 12:05:28,810]::[InvokeAI]::INFO --> InvokeAI version 3.0.1rc2
[2023-07-27 12:05:28,810]::[InvokeAI]::INFO --> Root directory = /home/lstein/invokeai-main
[2023-07-27 12:05:28,811]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 4070
[2023-07-27 12:05:28,823]::[InvokeAI]::INFO --> Scanning /home/lstein/invokeai-main/models for new models
[2023-07-27 12:05:29,970]::[InvokeAI]::INFO --> Scanned 22 files and directories, imported 10 models
[2023-07-27 12:05:29,977]::[InvokeAI]::INFO --> Model manager service initialized
[2023-07-27 12:05:29,980]::[uvicorn.error]::INFO --> Application startup complete.
[2023-07-27 12:05:29,981]::[uvicorn.error]::INFO --> Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2023-07-27 12:05:32,140]::[uvicorn.access]::INFO --> 127.0.0.1:45236 - "GET /socket.io/?EIO=4&transport=polling&t=OcO6ILb HTTP/1.1" 200
[2023-07-27 12:05:32,142]::[uvicorn.access]::INFO --> 127.0.0.1:45248 - "GET /socket.io/?EIO=4&transport=polling&t=OcO6ILb HTTP/1.1" 200
[2023-07-27 12:05:32,154]::[uvicorn.access]::INFO --> 127.0.0.1:45236 - "POST /socket.io/?EIO=4&transport=polling&t=OcO6ILr&sid=13O4-5uLx5eP-NuqAAAA HTTP/1.1" 200
[2023-07-27 12:05:32,168]::[uvicorn.access]::INFO --> 127.0.0.1:45252 - "POST /socket.io/?EIO=4&transport=polling&t=OcO6IM0&sid=0KRqxmh7JLc8t7wZAAAB HTTP/1.1" 200
[2023-07-27 12:05:32,171]::[uvicorn.error]::INFO --> ('127.0.0.1', 45264) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=0KRqxmh7JLc8t7wZAAAB" [accepted]
[2023-07-27 12:05:32,172]::[uvicorn.error]::INFO --> connection open
[2023-07-27 12:05:32,174]::[uvicorn.access]::INFO --> 127.0.0.1:45276 - "GET /socket.io/?EIO=4&transport=polling&t=OcO6IM3&sid=0KRqxmh7JLc8t7wZAAAB HTTP/1.1" 200
```
You can also divert everything to a file with an `invokeai.yaml` entry
like this:
```
Logging:
  log_handlers:
    - file=/home/lstein/invokeai/logs/access_log
  log_format: plain
  log_level: info
```
This works with syslog and other log handlers as well.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
https://github.com/huggingface/diffusers/releases/tag/v0.19.0
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This updates InvokeAI's pyproject.toml to the minimum library versions
needed to support Python 3.11. It also updates the installer to find and
accept Python 3.11, and updates the documentation accordingly.
Between 3.10 and 3.11 there was a change to the handling of `enum`
interpolation into strings that caused the model manager to break. I
think I have fixed the places where this was a problem, but there may be
other instances in which this will cause problems. Please be alert for
errors involving `ModelType` or `BaseModelType`.
I also took the opportunity to add a `SilenceWarnings()` context to the
t2i and i2i invocations. This quenches nags from diffusers about the
HuggingFace NSFW library.
I have tested basic functionality (t2i, i2i, inpaint, lora, controlnet,
nodes) on 3.10 and 3.11 and all seems good. Please test more
extensively!
## Added/updated tests?
- [X] Yes - existing tests run to completion
- [ ] No
## [optional] Are there any post deployment tasks we need to perform?
Should be a drop-in replacement.
* add upper bound for minWidth to prevent crash with cypress
* add fallback so UI doesn't crash when backend isn't running
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
When multiple Python versions are installed with `pyenv`, the executable
(shim) exists, but returns an error when trying to run it
unless activated with `pyenv`. This commit ensures the Python
executable is usable.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature (dev feature and reformatting)
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
Introducing black to the code base as a first step towards this:
https://github.com/invoke-ai/InvokeAI/discussions/3721
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : Not applicable
## [optional] Are there any post deployment tasks we need to perform?
All active branches will be affected by this and will need to be
updated.
This PR adds a new github workflow for black as well as config for
pre-commit hooks to those who wish to use it
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] Not needed
## Description
This bugfix enables InvokeAI to convert sd-1, sd-2 and sdxl base model
checkpoints (.safetensors) to diffusers.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
This PR causes the installer to install, by default, the fine-tuned
SDXL-1.0 VAE located at
https://huggingface.co/madebyollin/sdxl-vae-fp16-fix.
Although this VAE is supposed to run at fp16 precision, currently it
only works in InvokeAI at fp32. However, because it is a fine-tune, it
may have fewer of the watermark-related artifacts that we see with the
SDXL-1.0 VAE.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] Not necessary
## Description
When adding the new core models needed to support SDXL to a 3.0.0 root
directory, the configure script was (under some conditions) overwriting
models.yaml. This PR corrects the problem.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
I have reworked the console TUIs for the configure and model install
scripts to require much less vertical space. In the event that the
"NEXT" button is still missing and "page 1/2" is displayed, scrolling
beyond the last checkbox will now automatically move to page 2, where the
buttons are displayed. This is not ideal, but it will no longer block the
user completely.
If users continue to have problems after this, I'll get rid of the TUI
altogether and replace with a web form.
## Added/updated tests?
- [ ] Yes
- [X] No: not needed
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because they trust me
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
* Added the RAIL++ license for SDXL
* Updated configure script with URLs for both the original RAIL-M and
RAIL++ licenses
* Added invisible watermark documentation and renamed doc file
* Updated documentation for installation
* Updated documentation on settings in invokeai.yaml
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
Metadata was not getting saved because the metadata accumulator was not
plugged in when the watermark or NSFW nodes were turned off.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because there was no time!
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
Hotfix for issue of SD1 and SD2 legacy safetensors models not converting
in 3.0.1rc1.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This PR adds NSFW checker and invisible watermark fields. The NSFW
checker takes an image input and produces an image output. If NSFW
content is detected, the output image will be blurred and a "caution"
icon pasted into its upper left corner. A boolean `active` field
controls whether the checker is active. If turned off it simply returns
a copy of the image.
The invisible watermark node adds invisible text to the image,
defaulting to "InvokeAI". To decode the watermark use the
`invisible-watermark` command, which is part of the
`invisible-watermark` library:
```
$ invisible-watermark -v -a decode -t bytes -m dwtDct -l 64 ./bluebird-watermark.png
decode time ms: 14.129877090454102
InvokeAI
```
Note that the `-l` (length) argument is mandatory. It is set to 64 here
because the watermark `InvokeAI` is 8 bytes/64 bits long. The length
must match in order for the watermark to be decoded correctly.
Both nodes are now incorporated into the linear Text2Image and
Image2Image UIs, including the canvas. They are not implemented for
inpaint currently.
The nodes can be disabled with configuration options:
```
invisible_watermark: false
nsfw_checker: false
```
or at launch time with `--no-invisible_watermark` and
`--no-nsfw_checker`.
feat(ui): use `as` for menuitem links
I had requested this be done with the chakra `Link` component, but actually using `as` is correct according to the docs. For other components, you are supposed to use `Link`, but it looks like `MenuItem` has this built in.
Fixed in all places where we use it.
Also:
- fix github icon
- give menu hamburger button padding
- add menu motion props so it animates the same as other menus
feat(ui): restore ColorModeButton
@maryhipp
chore(ui): lint
feat(ui): remove colormodebutton again
sry
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No - not yet (WIP)
## Description
This PR adds support for loading and converting checkpoint-format
ControlNet and SDXL models. The SDXL and SDXL-refiner model conversions
are working; however, saving the UNet in safetensors format leads to
corrupted model files, so conversion currently saves in .bin format (after
scanning the input model).
ControlNet conversion seems to be working but needs further testing.
To use this PR, you will need to copy the files
`invokeai/configs/stable-diffusion/sd_xl_base.yaml` and
`invokeai/configs/stable-diffusion/sd_xl_refiner.yaml` into
`INVOKEAI/configs/stable-diffusion`. You will also need to run
`invokeai-configure --yes --skip-sd` in order to install additional core
model files needed by the converter.
## What type of PR is this? (check all applicable)
- [x] Feature
## Have you discussed this change with the InvokeAI team?
- [x] Yes
## Description
- Update the Aspect Ratio tags to show the aspect ratio values rather
than Wide / Square, etc.
- Updated the LoRA input to take values between -50 and 50, because I found
some LoRAs that are actually trained to work down to -25 and up to +15. So
these input caps should mostly suffice. If there's ever a LoRA that goes
bonkers on that, we can change it.
- Fixed LoRAs being sorted the wrong way in the LoRA Select.
- Fixed Embeddings being sorted the wrong way in the Embedding Select.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
- add `addNSFWCheckerToGraph` and `addWatermarkerToGraph` functions
- use them in all linear graph creation
- add state & toggles to settings modal to enable these
- trigger queries for app config on socket connect
- disable the nsfw/watermark booleans if we get the app config and they are not available
## What type of PR is this? (check all applicable)
- [x] Feature
## Have you discussed this change with the InvokeAI team?
- [x] Yes
## Description
This PR adds support for SDXL Models in the Linear UI
### DONE
- SDXL Base Text To Image Support
- SDXL Base Image To Image Support
- SDXL Refiner Support
- SDXL Relevant UI
## [optional] Are there any post deployment tasks we need to perform?
Double check to ensure nothing major changed with 1.0 -- In any case
those changes would be backend related mostly. If Refiner is scrapped
for 1.0 models, then we simply disable the Refiner Graph.
Rolled back the earlier split of the refiner model query.
Now, when you use `useGetMainModelsQuery()`, you must provide it an array of base model types.
They are provided as constants for simplicity:
- ALL_BASE_MODELS
- NON_REFINER_BASE_MODELS
- REFINER_BASE_MODELS
Opted to just use args for the hook instead of wrapping the hook in another hook; we can tidy this up later if desired.
We can derive `isRefinerAvailable` from the query result (e.g. are there any refiner models installed). This is a piece of server state, so by using the list models response directly, we can avoid needing to manually keep the client in sync with the server.
Created a `useIsRefinerAvailable()` hook to return this boolean wherever it is needed.
Also updated the main models & refiner models endpoints to only return the appropriate models. Now we don't need to filter the data on these endpoints.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
Updated the script to close stale issues with the newest version of
actions/stale.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [X] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
Not sure how this script gets kicked off
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: This is a minor fix that I happened upon while
reading
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
Within the `mkdocs.yml` file, there's a typo where `Model Merging` is
spelled as `Model Mergeing`. I also found some unnecessary white space
that I removed.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : Not big enough of a change to require tests (unless it is)
## [optional] Are there any post deployment tasks we need to perform?
Might need to re-run the yml file for docs to regenerate, but I'm hardly
familiar with the codebase so 🤷
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: n/a
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No: n/a
## Description
Add a generation mode indicator to canvas.
- use the existing logic to determine if generation is txt2img, img2img,
inpaint or outpaint
- technically `outpaint` and `inpaint` are the same, just display
"Inpaint" if it's either
- debounce this by 1s to prevent jank
I was going to disable controlnet conditionally when the mode is inpaint
but that involves a lot of fiddly changes to the controlnet UI
components. Instead, I'm hoping we can get inpaint moved over to latents
by next release, at which point controlnet will work.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
https://github.com/invoke-ai/InvokeAI/assets/4822129/87464ae9-4136-4367-b992-e243ff0d05b4
## Added/updated tests?
- [ ] Yes
- [x] No : n/a
## [optional] Are there any post deployment tasks we need to perform?
n/a
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No, n/a
## Description
When a queue item is popped for processing, we need to retrieve its
session from the DB. Pydantic serializes the graph at this stage.
It's possible for a graph to have been made invalid during the graph
preparation stage (e.g. an ancestor node executes, and its output is not
valid for its successor node's input field).
When this occurs, the session in the DB will fail validation, but we
don't have a chance to find out until it is retrieved and parsed by
pydantic.
This logic was previously not wrapped in any exception handling.
Just after retrieving a session, we retrieve the specific invocation to
execute from the session. It's possible that this could also have some
sort of error, though it should be impossible for it to be a pydantic
validation error (that would have been caught during session
validation). There was also no exception handling here.
When either of these processes fail, the processor gets soft-locked
because the processor's cleanup logic is never run. (I didn't dig deeper
into exactly what cleanup is not happening, because the fix is to just
handle the exceptions.)
This PR adds exception handling to both the session retrieval and node
retrieval and events for each: `session_retrieval_error` and
`invocation_retrieval_error`.
These events are caught and displayed in the UI as toasts, along with
the type of the python exception (e.g. `Validation Error`). The events
are also logged to the browser console.
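In outline, the processor loop now guards both lookups and reports failures as events instead of dying silently (the names below are simplified, and the event-method names are assumptions based on the event names above):
```
def process_queue_item(services, queue_item) -> None:
    """Sketch of the guarded retrieval steps in the processor."""
    try:
        session = services.graph_execution_manager.get(queue_item.graph_execution_state_id)
    except Exception as exc:
        # surface the failure to the UI rather than soft-locking the processor
        services.events.emit_session_retrieval_error(
            graph_execution_state_id=queue_item.graph_execution_state_id,
            error_type=type(exc).__name__,
            error=str(exc),
        )
        return

    try:
        invocation = session.execution_graph.get_node(queue_item.invocation_id)
    except Exception as exc:
        services.events.emit_invocation_retrieval_error(
            graph_execution_state_id=queue_item.graph_execution_state_id,
            node_id=queue_item.invocation_id,
            error_type=type(exc).__name__,
            error=str(exc),
        )
        return

    # ... invoke `invocation` as before ...
```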
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
Closes #3860, #3412
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
Create a valid graph that will become invalid during execution. Here's
an example:

This is valid before execution, but the `width` field of the `Noise`
node will end up with an invalid value (`0`). Previously, this would
soft-lock the app and you'd have to restart it.
Now, with this graph, you will get an error toast, and the app will not
get locked up.
## Added/updated tests?
- [x] Yes (ish)
- [ ] No
@Kyle0654 @brandonrising
It seems because the processor runs in its own thread, `pytest` cannot
catch exceptions raised in the processor.
I added a test that does work, insofar as it does recreate the issue.
But, because the exception occurs in a separate thread, the test doesn't
see it. The result is that the test passes even without the fix.
So when running the test, we see the exception:
```py
Exception in thread invoker_processor:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/processor.py", line 50, in __process
    self.__invoker.services.graph_execution_manager.get(
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/sqlite.py", line 79, in get
    return self._parse_item(result[0])
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/sqlite.py", line 52, in _parse_item
    return parse_raw_as(item_type, item)
  File "pydantic/tools.py", line 82, in pydantic.tools.parse_raw_as
  File "pydantic/tools.py", line 38, in pydantic.tools.parse_obj_as
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
```
But `pytest` doesn't actually see it as an exception. Not sure how to
fix this, it's a bit beyond me.
## [optional] Are there any post deployment tasks we need to perform?
nope don't think so
## What type of PR is this? (check all applicable)
- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
`search_for_models` is explicitly typed as taking a singular `Path` but
was given a list because some later function in the stack expects a
list. Fixed that to be compatible with the paths. This is the only use
of that function.
The `list()` call is unrelated but removes a type warning since it's
supposed to return a list, not a set. I can revert it if requested.
This was found through pylance type errors. Go types!
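One way to read the fix - making the annotation match the list that callers actually pass - looks roughly like this (signature and body simplified for illustration):
```
from pathlib import Path
from typing import List

def search_for_models(directories: List[Path]) -> List[Path]:
    """Annotated to match how it is actually called: with a list of
    directories to scan (simplified body for illustration)."""
    found: List[Path] = []
    for directory in directories:
        found.extend(directory.rglob("*.safetensors"))
    return found
```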
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
This import is missing and used later in the file.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No: n/a
## Description
At some point I typo'd this and set the max seed to signed int32 max. It
should be *un*signed int32 max.
This restores the seed range to what it was in v2.3.
Also fixed a bug in the Noise node which resulted in the max valid seed
being one less than intended.
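For reference, the two limits involved, plus an illustrative reproduction of the off-by-one with numpy (this is a sketch; the exact Noise node code may differ):
```py
import numpy as np

SIGNED_INT32_MAX = 2**31 - 1    # 2147483647 - the typo'd limit
UNSIGNED_INT32_MAX = 2**32 - 1  # 4294967295 - the intended limit (numpy uint32 max)

assert np.iinfo(np.uint32).max == UNSIGNED_INT32_MAX

# Illustrative off-by-one: Generator.integers() treats the upper bound as
# exclusive unless endpoint=True is passed, silently dropping the max seed.
rng = np.random.default_rng()
seed = rng.integers(0, UNSIGNED_INT32_MAX, endpoint=True)
```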
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issues
#2843 is against v2.3 and increases the range of valid seeds
substantially. Maybe we can explore this in the future, but as of v3.0
we use numpy for an RNG in a few places, and it maxes out at the max
`uint32`. I will close that PR as this one supersedes it.
- Closes #3866
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
You should be able to use seeds up to and including `4294967295`.
## Added/updated tests?
- [ ] Yes
- [x] No : don't think we have any relevant tests
## [optional] Are there any post deployment tasks we need to perform?
nope!
## What type of PR is this? (check all applicable)
- [x] Bug Fix
## Have you discussed this change with the InvokeAI team?
- [x] Yes, we feel very passionate about this.
## Description
Uploading an incorrect JSON file to the Node Editor would crash the app.
While this is a much larger problem that we will tackle while refining
the Node Editor, this is a fix that should address 99% of the cases out
there.
When saving an InvokeAI node graph, there are three primary keys:
1. `nodes` - which has all the node-related data.
2. `edges` - which has all the edge-related data.
3. `viewport` - which has all the viewport-related data.
So when we load the JSON back, we now check whether all three of these keys
exist in the retrieved JSON object. While the `viewport` itself is not a
mandatory key to repopulate the graph, checking for it gives us an
additional check that the graph was saved from InvokeAI.
As a result ...
- If you upload an invalid JSON file, the app now warns you that the
JSON is invalid.
- If you upload a JSON file from a graph editor that is not InvokeAI, it simply
warns you that you are uploading a non-InvokeAI graph.
So effectively, you should not be able to load any graph that was not
generated by ReactFlow.
Here are the edge cases:
- What happens if a user maintains the above key structure but tampers
with the data inside it? Well, I tested it. It turns out that because we
validate and build the graph based on the JSON data, if you tamper with any
data that is needed to rebuild a node, it simply skips that node and loads
the rest of the graph with valid data.
- What happens if a user uploads a graph that was made by some other
random ReactFlow app? Same as above. Because we cannot parse that data in
our setup, it simply skips it and only displays what we are set up to
display.
I think that just about covers 99% of the cases where this could go
wrong. If there are any other edge cases, we can add checks if need be, but
I can't think of any at the moment.
## Related Tickets & Documents
### Closes
- #3893
- #3881
## [optional] Are there any post deployment tasks we need to perform?
Yes. Making @psychedelicious a little bit happier. :P
- use the existing logic to determine if generation is txt2img, img2img, inpaint or outpaint
- technically `outpaint` and `inpaint` are the same, so just display
"Inpaint" if it's either
- debounce this by 1s to prevent jank
When a queue item is popped for processing, we need to retrieve its session from the DB. Pydantic serializes the graph at this stage.
It's possible for a graph to have been made invalid during the graph preparation stage (e.g. an ancestor node executes, and its output is not valid for its successor node's input field).
When this occurs, the session in the DB will fail validation, but we don't have a chance to find out until it is retrieved and parsed by pydantic.
This logic was previously not wrapped in any exception handling.
Just after retrieving a session, we retrieve the specific invocation to execute from the session. It's possible that this could also have some sort of error, though it should be impossible for it to be a pydantic validation error (that would have been caught during session validation). There was also no exception handling here.
When either of these processes fail, the processor gets soft-locked because the processor's cleanup logic is never run. (I didn't dig deeper into exactly what cleanup is not happening, because the fix is to just handle the exceptions.)
This PR adds exception handling to both the session retrieval and node retrieval and events for each: `session_retrieval_error` and `invocation_retrieval_error`.
These events are caught and displayed in the UI as toasts, along with the type of the python exception (e.g. `Validation Error`). The events are also logged to the browser console.
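A simplified sketch of the shape of the change (method and field names here are illustrative; the real handler lives in the session processor):
```py
def get_session_or_report(services, session_id: str):
    # Retrieving the session can fail (e.g. a pydantic ValidationError if the
    # stored graph became invalid during preparation). Instead of letting the
    # exception soft-lock the processor, log it and emit an event for the UI.
    try:
        return services.graph_execution_manager.get(session_id)
    except Exception as e:
        services.logger.error(f"Exception while retrieving session:\n{e}")
        services.events.emit_session_retrieval_error(
            graph_execution_state_id=session_id,
            error_type=e.__class__.__name__,
            error=str(e),
        )
        return None
```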
## What type of PR is this? (check all applicable)
- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: n/a
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No n/a
## Description
Big cleanup:
- improve & simplify the app logging
- resolve all TS issues
- resolve all circular dependencies
- fix all lint/format issues
## QA Instructions, Screenshots, Recordings
`yarn lint` passes:

<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : n/a
## [optional] Are there any post deployment tasks we need to perform?
bask in the glory of what *should* be a fully-passing frontend lint on
this PR
Added the Ideal Size node
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: It's a community node addition
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Added a reference to my community node that calculates the ideal size
for initial latent generation that avoids duplication. This is the logic
that was present in 2.3.5's first pass of high-res optimization.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [X] No : This is a documentation change that references my community
node.
## [optional] Are there any post deployment tasks we need to perform?
Add Face Mask to communityNodes.md
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [x] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
Add Face Mask to communityNodes.md list.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: just updated docs to try to help lead new users through
installation a little more easily
## Have you updated relevant documentation?
- [x] Yes
- [ ] No
## Description
Some minor docs tweaks
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
Revised boards logic and UI
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue # discord convos
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : n/a
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
On MPS, generating images with resolution above ~1536x1536 results in
"fried" output. The main problem is that such resolutions produce tensors
larger than 4 GB. It looks like some MPS internals can't handle this
properly, so to mitigate it I break the attention calculation into
chunks.
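A rough sketch of the idea (illustrative only, not the exact implementation in this PR):
```py
import torch


def chunked_attention(q, k, v, chunk_size=4096):
    # Split attention along the query dimension so no single intermediate
    # tensor exceeds the ~4 GB limit that MPS appears to choke on.
    scale = q.shape[-1] ** -0.5
    outputs = []
    for q_chunk in q.split(chunk_size, dim=1):
        attn = torch.softmax(q_chunk @ k.transpose(-1, -2) * scale, dim=-1)
        outputs.append(attn @ v)
    return torch.cat(outputs, dim=1)
```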
## QA Instructions, Screenshots, Recordings
Example of bad output:

## What type of PR is this? (check all applicable)
- [X] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Description
This is a WIP to collect documentation enhancements and other polish
prior to final 3.0.0 release. Minor bug fixes may go in here if
non-controversial. It should be merged into main prior to the final
release.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [x] Bug Fix
## Desc
Fixes a bug where the board name is not displayed in the header if there
are no images in it.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
Add progress preview for sdxl generation nodes
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated relevant documentation?
- [X] Yes (swagger)
- [ ] No
## Description
This adds new routes for getting and setting the command line console
logging level.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes Discussed with @hipsterusername yesterday
- [ ] No, because:
## Have you updated relevant documentation?
- [ ] Yes
- [X] No: not yet (but the change to default ControlNet resizing doesn't
require any user documentation)
## Description
This PR adds resize modes (just_resize, crop_resize, fill_resize) to
InvokeAI's ControlNet node. The implementation is largely based on
lllyasviel's, which includes a high quality resizer specifically
intended to handle common ControlNet preprocessor outputs, such as
binary (black/white) images, grayscale images, and binary or grayscale
thin lines. Previously the InvokeAI ControlNet implementation only did a
simple resize with independent x/y scaling to match noise latent.
### "just_resize" mode (the default setting)
With the new implementation, using the default "just_resize" mode,
ControlNet images are still resized with independent x/y scaling to
match the noise latent resolution, but with the high quality resizer. As
a result, images generated in InvokeAI now look much closer to
counterparts generated via sd-webui-controlnet. See example below. All
inference runs are using prompt="old man", same ControlNet canny edge
detection preprocessor and model and control image, identical other
parameters except for control_mode. The top row is previous simple
resize implementation, the bottom row is with new high quality resizer
and "just_resize" mode. Control_mode is: left="balanced", middle="more
prompt", right="more control". The high quality resize images are
identical (at least by eye) to output from sd-webui-controlnet with same
settings.

## "crop_resize" and "fill_resize" modes
The other two resize modes are "crop_resize" and "fill_resize". Whereas
"just_resize" ignores any aspect ratio mismatch between the ControlNet
image and the noise latent, these other modes preserve the aspect ratio
of the ControlNet image. The "crop_resize" mode does this by cropping
the image, and the "fill_resize" option does this by expanding the image
(adding fill pixels). See example below. In this case all inference runs
are using prompt="old man", the ControlNet Midas depth detection
preprocessor and depth model, same control image of size 512x512,
control_mode="balanced", and identical other parameters except for
resize_mode and noise latent dimensions. For top row noise latent size
is 768x512, and for bottom row noise latent size is 512x768. Resize_mode
is: left="just_resize", middle="crop_resize", right="fill_resize"

## Are there any post deployment tasks we need to perform?
To use "just_resize" mode in linear UI, no post deployment work is
needed. The default is switched from old resizer to new high quality
resizer.
To use "just_resize", "crop_resize", and "fill_resize" modes in node UI,
no post deployment work is needed. There is also an additional option
"just_resize_simple" that uses old resizer, mainly left in for testing
and for anyone curious to see the difference.
To use "crop_resize" and "fill_resize" in linear UI, there will need to
be some work to incorporate choice of three modes in ControlNet UI
(probably best to not expose "just_resize_simple" in linear UI, it just
confuses things).
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Description
This changes the "sync" route from a GET to POST method, in keeping with
the Representational Existential(?) State Transfer (REST) protocol.
* feat(ui): enhance clear intermediates feature
- retrieve the # of intermediates using a new query (just uses list images endpoint w/ limit of 0)
- display the count in the UI
- add types for clearIntermediates mutation
- minor styling and verbiage changes
* feat(ui): remove unused settings option for guides
* feat(ui): use solid badge variant
consistent with the rest of the usage of badges
* feat(ui): update board ctx menu, add board auto-add
- add context menu to system boards - the only option is select board. did this so that you don't think it's broken when you click it
- add auto-add board. you can right click a user board to enable it for auto-add, or use the gallery settings popover to select it. the invoke button has a tooltip on a short delay to remind you that you have auto-add enabled
- made useBoardName hook, provide it a board id and it gets you the board name
- removed `boardIdToAdTo` state & logic, updated workflows to auto-switch and auto-add on image generation
* fix(ui): clear controlnet when clearing intermediates
* feat: Make Add Board icon a button
* feat(db, api): clear intermediates now clears all of them
* feat(ui): make reset webui text subtext style
* feat(ui): board name change submits on blur
---------
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: documentation update that needs review from the team
before going live
## Description
I updated the contribution guidelines, adding more structure and a
getting started guide. Also re-organized the tabs to be in the order of
most commonly used.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
run `mkdocs serve` to check it out
## Added/updated tests?
- [ ] Yes
- [X] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Description
ImageToLatentsInvocation defaulted to float16 rather than detecting the
requested precision from the config.
This caused an exception to be raised on systems that don't support
float16 (e.g. CPU).
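A minimal sketch of the intended behaviour (illustrative; the actual code reads the requested precision from the app config):
```py
import torch


def choose_vae_dtype(device: torch.device, requested_precision: str = "auto") -> torch.dtype:
    # Hard-coding float16 raises on devices without half-precision support
    # (e.g. CPU); honour the configured precision and the device instead.
    if device.type == "cpu" or requested_precision == "float32":
        return torch.float32
    return torch.float16
```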
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
* feat(ui): migrate listImages to RTK query using createEntityAdapter
- see comments in `endpoints/images.ts` for explanation of the caching
- so far, only manually updating `all` images when new image is generated. no other manual cache updates are implemented, but will be needed.
- fixed some weirdness with loading state components (like the spinners in gallery)
- added `useThumbnailFallback` for `IAIDndImage`, this displays the tiny webp thumbnail while the full-size images load
- comment out some old thunk related stuff in gallerySlice, which is no longer needed
* feat(ui): add manual cache updates for board changes (wip)
- update RTK Query caches when adding/removing single image to/from board
- work more on migrating all image-related operations to RTK Query
* update AddImagesToBoardContext so that it works when user uses context menu + modal
* handle case where no image is selected
* get assets working for main list and boards - dnd only
* feat(ui): migrate image uploads to RTK Query
- minor refactor of `ImageUploader` and `useImageUploadButton` hooks, simplify some logic
- style filesystem upload overlay to match existing UI
- replace all old `imageUploaded` thunks with `uploadImage` RTK Query calls, update associated logic including canvas related uploads
- simplify `PostUploadAction`s that only need to display user input
* feat(ui): remove `receivedPageOfImages` thunks
* feat(ui): remove `receivedImageUrls` thunk
* feat(ui): finish removing all images thunks
stuff now broken:
- image usage
- delete board images
- on first load, no image selected
* feat(ui): simplify `updateImage` cache manipulation
- we don't actually ever change categories, so we can remove a lot of logic
* feat(ui): simplify canvas autosave
- instead of using a network request to set the canvas generation as not intermediate, we can just do that in the graph
* feat(ui): simplify & handle edge cases in cache updates
* feat(db, api): support `board_id='none'` for `get_many` images queries
This allows us to get all images that are not on a board.
* chore(ui): regen types
* feat(ui): add `All Assets`, `No Board` boards
Restructure boards:
- `all images` is all images
- `all assets` is all assets
- `no board` is all images/assets without a board set
- user boards may have images and assets
Update caching logic
- much simpler without every board having sub-views of images and assets
- update drag and drop operations for all possible interactions
* chore(ui): regen types
* feat(ui): move download to top of context menu
* feat(ui): improve drop overlay styles
* fix(ui): fix image not selected on first load
- listen for first load of all images board, then select the first image
* feat(ui): refactor board deletion
api changes:
- add route to list all image names for a board. this is required to handle board + image deletion. we need to know every image in the board to determine the image usage across the app. this is fetched only when the delete board and images modal is opened so it's as efficient as it can be.
- update the delete board route to respond with a list of deleted `board_images` and `images`, as image names. this is needed to perform accurate clientside state & cache updates after deleting.
db changes:
- remove unused `board_images` service method to get paginated images dtos for a board. this is now done thru the list images endpoint & images service. needs a small logic change on `images.delete_images_on_board`
ui changes:
- simplify the delete board modal - no context, just minor prop drilling. this is feasible for boards only because the components that need to trigger and manipulate the modal are very close together in the tree
- add cache updates for `deleteBoard` & `deleteBoardAndImages` mutations
- the only thing we cannot do directly is on `deleteBoardAndImages`, update the `No Board` board. we'd need to insert image dtos that we may not have loaded. instead, i am just invalidating the tags for that `listImages` cache. so when you `deleteBoardAndImages`, the `No Board` will re-fetch the initial image limit. i think this is more efficient than e.g. fetching all image dtos to insert then inserting them.
- handle image usage for `deleteBoardAndImages`
- update all (i think/hope) the little bits and pieces in the UI to accommodate these changes
* fix(ui): fix board selection logic
* feat(ui): add delete board modal loading state
* fix(ui): use thumbnails for board cover images
* fix(ui): fix race condition with board selection
when selecting a board that doesn't have any images loaded, we need to wait until the images have loaded before selecting the first image.
this logic is debounced to ~1000ms.
* feat(ui): name 'No Board' correctly, change icon
* fix(ui): do not cache listAllImageNames query
if we cache it, we can end up with stale image usage during deletion.
we could of course manually update the cache as we are doing elsewhere. but because this is a relatively infrequent network request, i'd rather accept slightly increased resource usage here than increased cache mgmt complexity.
* feat(ui): reduce drag preview opacity, remove border
* fix(ui): fix incorrect queryArg used in `deleteImage` and `updateImage` cache updates
* fix(ui): fix doubled open in new tab
* fix(ui): fix new generations not getting added to 'No Board'
* fix(ui): fix board id not changing on new image when autosave enabled
* fix(ui): context menu when selection is 0
need to revise how context menu is triggered later, when we approach multi select
* fix(ui): fix deleting does not update counts for all images and all assets
* fix(ui): fix all assets board name in boards list collapse button
* fix(ui): ensure we never go under 0 for total board count
* fix(ui): fix text overflow on board names
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
* new route to clear intermediates
* UI to clear intermediates from settings modal
* cleanup
* PR feedback
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Description
In transformers 4.31.0, `text_model.embeddings.position_ids` is no longer
part of the state_dict.
The fix is untested as I can't run it right now, but it should be correct. We
also need to check how transformers 4.30.2 works with this fix.
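A sketch of the shape of the fix (illustrative; the real change is in `convert_ldm_clip_checkpoint`):
```py
def load_clip_text_model_state(text_model, text_model_dict: dict):
    # transformers >= 4.31.0 no longer keeps position_ids in the model's
    # state dict, so drop the stale key from the converted checkpoint before
    # calling load_state_dict (otherwise it raises "Unexpected key(s)").
    text_model_dict.pop("text_model.embeddings.position_ids", None)
    text_model.load_state_dict(text_model_dict)
    return text_model
```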
## Related Tickets & Documents
- 8e5d1619b3 (diff-7f53db5caa73a4cbeb0dca3b396e3d52f30f025b8c48d4daf51eb7abb6e2b949R191)
- https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.register_buffer
## QA Instructions, Screenshots, Recordings
```
File "C:\Users\artis\Documents\invokeai\.venv\lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 844, in convert_ldm_clip_checkpoint
text_model.load_state_dict(text_model_dict)
File "C:\Users\artis\Documents\invokeai\.venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
Unexpected key(s) in state_dict: "text_model.embeddings.position_ids".
```
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:
## Description
Fix for
```
File "/home/invokeuser/InvokeAI/invokeai/app/services/processor.py",
line 70, in __process
outputs = invocation.invoke(
File "/home/invokeuser/InvokeAI/invokeai/app/invocations/latent.py",
line 660, in invoke
device=choose_torch_device()
NameError: name 'choose_torch_device' is not defined
```
when using the scale latents node
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Description
This PR points mkdocs to the `main` branch again, so that the 3.0.0
documentation appears in gh-pages.
It also makes a minor tweak to the tooltip for model imports, so that
users know that URLs are accepted.
Also rebuilds frontend for use in beta testing.
I've opted to leave out any additional upscaling parameters like scale
and denoising strength, which, from my review of the ESRGAN code, don't
do much:
- scale just resizes the image using CV2 after the AI upscaling, so
that's not particularly useful
- denoising strength is only valid for one class of model, which we are
no longer supporting
If there is demand, we can implement output size/scale UI and handle it
by passing the upscaled image to a resize/scale node.
I also understand we previously had some functionality to blend the
upscaled image with the original. If that is desired, we would need to
implement that as a node that we can pass the upscaled image to.
Demo:
https://github.com/invoke-ai/InvokeAI/assets/4822129/32eee615-62a1-40ce-a183-87e7d935fbf1
---
[feat(nodes): add RealESRGAN_x2plus.pth, update upscale
nodes](dbc256c5b4)
- add `RealESRGAN_x2plus.pth` model to installer @lstein
- add `RealESRGAN_x2plus.pth` to `realesrgan` node
- rename `RealESRGAN` to `ESRGAN` in nodes
- make `scale_factor` optional in `img_scale` node
[feat(ui): restore ad-hoc
upscaling](b3fd29e5ad)
- remove face restoration entirely
- add dropdown for ESRGAN model select
- add ad-hoc upscaling graph and workflow
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
There is no VRAM cleanup on model offload, which leads to VRAM filling up and
slow generation speed.
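A minimal sketch of the cleanup (illustrative; the actual change lives in the model cache):
```py
import gc

import torch


def offload_from_vram(model: torch.nn.Module) -> None:
    # After moving a model off the GPU, explicitly release the cached blocks
    # so the next model's weights don't have to compete for fragmented VRAM.
    model.to("cpu")
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```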
## What type of PR is this? (check all applicable)
- [x] Feature
- [x] Optimization
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [x] Optimization
- [ ] Documentation Update
## Description
Various fixes to consume less memory and make SDXL run on 8 GB of VRAM.
Most changes are due to moving all output tensors to the CPU, so that cached
tensors do not consume VRAM.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
Fixes a bug in the `inpaint` node introduced by the new version of
`compel`. The other nodes were updated, but this one was missed. Fixed
by @StAlKeR7779 ty
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue # discord reports
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : n/a, bugfix
This contains minor fixes to the beta as well as the version bump to
3.0.0.
Fixes include:
- Warning user when the installer window size is inadequate for the TUI.
- Selection of the most frequently downloaded controlnet models for
default installation.
- Adding the LowRA LoRA for dark image enhancement
- Documentation
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
Making some final style fixes before we push the next 3.0 version
tomorrow.
- Fixed light mode colors in Settings Modal.
- Double checked other light mode colors. Nothing seems off.
- Added Base Model badge to the model list item. Makes it visually
better and also serves as a quick glance feature for the user.
- Some minor styling updates to the Node Editor.
- Fixed the hotkeys 'G', 'O', 'Shift+G' and 'Shift+O' (used to toggle the
panels) not resizing the canvas. #3780
- Fixed hotkey 'N' not working for Snap To Grid on Canvas.
- Fixed brush opacity hotkeys not working.
- Cleaned up hotkeys modal of hotkeys that are no longer used.
- Updated compel requirement to `2.0.0`
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #3780
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
hides sdxl models from linear ui model select. just a hold-me-over
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : n/a
## [optional] Are there any post deployment tasks we need to perform?
- add `RealESRGAN_x2plus.pth` model to installer
- add `RealESRGAN_x2plus.pth` to `realesrgan` node
- rename `RealESRGAN` to `ESRGAN` in nodes
- make `scale_factor` optional in `img_scale` node
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:
If it's not useful, they do not have to use it 😄
## Description
While I was still in viewportcontrols.tsx:
- added an option to toggle off the minimap, with the default being on (true)
- added tooltips to the buttons in viewportcontrols.tsx
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
This is a WIP to add SDXL support.
Tasks:
- [x] SDXL model loading support
- [x] SDXL model installation
- [x] SDXL model loader
- [x] SDXL base invocations for text2latent and latent2latent
- [ ] SDXL refiner invocations for text2latent and latent2latent
- [x] Compel support / pooled embeddings
- [ ] Linear UI graph for SDXL
- [ ] Documentation
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
fix json formatting to not have big red comment blocks
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: simple docs fix
## Description
Fix LOCAL_DEVELOPMENT.md json comment highlighting
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue # n/a
- Closes # n/a
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : simple docs change
This PR completely ports over the Model Manager to 3.0 -- all of the
functionality has now been restored in addition to the following
changes.
- Model Manager now has been moved to its own tab on the left hand side.
- Model Manager has three tabs - Model Manager, Import Models and Merge
Models
- The edit forms for the Models now allow the users to update the model
name and the base model too along with other details.
- Checkpoint Edit form now displays the available config files from
InvokeAI and also allows users to supply their own custom config file.
- Under Import Models you can directly add models or scan a folder for
your checkpoint files.
- Adding models has two modes -- Simple and Advanced.
- In Simple Mode, you simply need to pass a path and InvokeAI will
try to determine what kind of model it is and fill in the rest of the details
accordingly. This input lets you supply local paths to diffusers / local
paths to checkpoints / huggingface repo IDs to download models /
CivitAI links.
- Simple Mode also allows you to download different model types like
VAEs and ControlNet models, etc. - not just main models.
- In cases where the auto detection system of InvokeAI fails to read a
model correctly, you can take the manual approach and go to Advanced
where you can configure your model while adding it exactly the way you
want it. Both Diffusers and Checkpoint models now have their own custom
forms.
- Scan Models has been cleaned up. It will now only display the models
that are not already installed in InvokeAI. Each item has two
options - Quick Add and Advanced - replicating the Add Model behavior
from above.
- Scan Models now has a search bar for you to search through your
scanned models.
- Merge Models functionality has been restored.
This is a wrap for this PR.
**TODO: (Probably for 3.1)**
- Add model management for model types such as VAE's and ControlNet
Models
- Replace the VAE slot on the edit forms with the installed VAE drop
down + custom option
[feat(nodes): emit model loading
events](7b6159f8d6)
- remove dependency on having access to a `node` during emits, would
need a bit of additional args passed through the system and I don't
think it's necessary at this point. this also allowed us to drop an
extraneous fetching/parsing of the session from db.
- provide the invocation context to all `get_model()` calls, so the
events are able to be emitted
- test all model loading events in the app and confirm socket events are
received
[feat(ui): add listeners for model load
events](c487166d9c)
- currently only exposed as DEBUG-level logs
---
One change I missed in the commit messages is the `ModelInfo` class is
not serializable, so I split out the pieces of information we didn't
already have (hash, location, precision) and added them to the event
payload directly.
This small patch improves the stability of `invokeai-*` scripts by
avoiding crashes in the model manager while scanning the models
directory for new and removed models.
Both support the same actions:
- Open in new tab
- Copy image (if supported by browser)
- Use prompt
- Use seed
- Use all
- Send to img2img
- Send to canvas
- Change board
- Download image
- Delete
- restore copy image functionality* in image context menu, current image buttons
- give IAIDndImage the same context menu
* copying image to clipboard is not possible on Firefox unless the user enables a setting which is disabled by default. if the browser does not support copying an image, the copy functionality is disabled.
- filename -> file_path
- pre and post prompt changed to optional
- clearer pre and post prompt descriptions
- handle pre and post prompt passed as None
- max_prompts defaults to 1 instead of 0 to avoid accidentally processing large prompt files with it set to 0 when adding a new node.
This PR adds several default models to the ones selected at install
time. It also removes the GFPGAN and text2clip models, which should
shave a little time off the install process.
## ESRGAN:
* models/core/upscaling/realesrgan/RealESRGAN_x4plus.pth
* models/core/upscaling/realesrgan/RealESRGAN_x4plus_anime_6B.pth
* models/core/upscaling/realesrgan/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth
## ControlNet
* models/sd-1/controlnet/canny
* models/sd-1/controlnet/depth
* models/sd-1/controlnet/lineart
* models/sd-1/controlnet/openpose
## Embedding (textual inversion)
* models/sd-1/embedding/EasyNegative.safetensors
- remove dependency on having access to a `node` during emits, would need a bit of additional args passed through the system and I don't think it's necessary at this point. this also allowed us to drop an extraneous fetching/parsing of the session from db.
- provide the invocation context to all `get_model()` calls, so the events are able to be emitted
- test all model loading events in the app and confirm socket events are received
- update controlnet state to use object format for model
- update model-parsing helper functions to log errors
- update nodes components, types and state
- remove controlnets from state when models are loaded and the controlnet's model is not available
# Multiple enhancements to model manager REACT API
1. add a `/sync` route for synchronizing the in-memory model lists to
models.yaml, the models directory, and the autoimport directories.
2. added optional destination directories to convert_model and
merge_model operations.
3. added a `/ckpt_confs` route for retrieving known legacy checkpoint
configuration files.
4. added a `/search` route for finding all models in a directory located
in the server filesystem
5. added a `/add` route for manual addition of local models
6. added a `/rename` route for renaming and/or rebasing models
7. changed the path of the `import_model` route to `/import`
# Slightly annoying detail:
When adding a model manually using `/add`, the body JSON must exactly
match one of the model configurations returned by `list_models` (i.e.
there is no defaulting of fields). This includes the `error` field,
which should be set to "null".
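For illustration, a manual `/add` call might look roughly like the following; the URL and field names are illustrative and should be matched against your own `list_models` output:
```py
import requests

payload = {
    # Every field must be supplied explicitly - there is no defaulting - and
    # the shape must exactly match a config returned by list_models.
    "model_name": "my-checkpoint",
    "base_model": "sd-1",
    "model_type": "main",
    "path": "/path/to/my-checkpoint.safetensors",
    "description": "manually added model",
    "error": None,  # i.e. JSON null
}
requests.post("http://localhost:9090/api/v1/models/add", json=payload)
```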
1. add a /sync route for synchronizing the in-memory model lists to
models.yaml, the models directory, and the autoimport directories.
2. add optional destination_directories to convert_model and merge_model
operations.
3. add /ckpt_confs route for retrieving known legacy checkpoint configuration
files.
4. add /search route for finding all models in a directory located in the server
filesystem
DONE:
- Restore Update Model functionality
- Restore Delete Model functionality
- Restore Model Convert functionality
- Restore Model Merge functionality
- Refine UX (fine tweaks when everything is done - TODO)
TODO
- Add Model (will be finished in a future PR once the backend work is
done)
IAIMantineSelect and IAIMantineMultiSelect have a bit of extra logic that prevents simple select functionality from working as expected.
- extract the styles into hooks
- rename those two components to IAIMantineSearchableSelect and IAIMantineSearchableMultiSelect
- Create IAIMantineSelect (which is just a dropdown) and use it in model manager and a few other places
When we only have a few options to present and searching is not efficient, we should use this instead.
Image files are immutable and we expect deletion to result in no further
requests for a given image, so we can set the max-age to something
thicc.
Resolves #3426
@ebr @brandonrising @maryhipp
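A sketch of the idea (illustrative; the actual route lives in the images API):
```py
from fastapi.responses import FileResponse


def serve_image_file(image_path: str) -> FileResponse:
    # Image files never change after creation, so a year-long, immutable
    # max-age keeps browsers from ever re-requesting them.
    return FileResponse(
        image_path,
        media_type="image/png",
        headers={"Cache-Control": "max-age=31536000, immutable"},
    )
```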
- simplify UI logic in `ModelManagerPanel` components
- fix up the types a bit to make it easier to select models
- remove `openModel` state, just make it a useState since it is very local to model manager
similar to the previous commit, update the node editor to not just store models as strings - instead, store the model object.
the model select components in nodes are now just kinda copy-pastes over the linear UI versions of the same components, but they were different enough that we can't just share them.
i explored adding some props to override the linear ui components' logic, but it was too brittle. so just copy/paste.
We were storing all types of models by their model ID, which is a format like `sd-1/main/deliberate`.
This meant we had to do a lot of extra parsing, because nodes actually wants something like `{base_model: 'sd-1', model_name: 'deliberate'}`.
Some of this parsing was done with zod's error-throwing `parse()` method, and in other places it was done with brittle string parsing.
This commit refactors the state to use the object form of models.
There is still a bit of string parsing done to construct the ID from the object form, but it's far less complicated.
Also, the zod parsing is now done using `safeParse()`, which does not throw. This requires a few more conditional checks, but should prevent further crashes.
* feat(ui): salvaged gallery UI enhancements
* restore boardimage functionality, load boardimages and remove some caching optimizations in the name of data integrity
* fix assets, fix load more params
* jk NOW fix assets, fix load more params
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: Mary Hipp Rogers <maryhipp@gmail.com>
- available infill methods is server state - remove it from client state, use the query to populate the dropdown
- add listener to ensure the selected infill method is an available one
As the comment on this branch says, we want to use the conditional run:
```python
if cfg_injection: # only applying ControlNet to conditional instead of in unconditioned
```
But the code used the unconditioned
embeddings (`conditioning_data.unconditioned_embeddings`).
Later code confirms that we want to run the conditional generation, both by the
comment and by the tensor concatenation order (as all of the code expects a
[uc, c] tensor):
```python
if cfg_injection:
# Inferred ControlNet only for the conditional batch.
# To apply the output of ControlNet to both the unconditional and conditional batches,
# add 0 to the unconditional batch to keep it unchanged.
down_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_samples]
mid_sample = torch.cat([torch.zeros_like(mid_sample), mid_sample])
```
Adds a Clear Nodes button with a confirmation dialog. I think I did it
right 😃
I am sure there is a way to make the confirmation look better and have
Yes/No instead of OK/Cancel.
- Restore recall functionality to `CurrentImageButtons` and `ImageContextMenu`.
- Debounce metadata requests for `ImageMetadataViewer` and `CurrentImageButtons` by 500ms. It's possible to scroll through these really fast, so we want to debounce the network requests.
- `ImageContextMenu` is lazy-mounted so it does not need to be debounced; it makes the metadata request as soon as you click it.
- Move next/prev image selection logic into hook and add the hotkeys for this to `CurrentImageButtons`. The hotkeys now work when metadata viewer is open.
I will follow up with improved loading state during the debounced calls in the future
- Update for new routes
- Update model storage in state to be `MainModelField` type instead of `string`, simplifies a lot of model handling
- Update model-related stuff for model `name` --> `model_name`
- Update linear graphs to use `MetadataAccumulator`
- Update `ImageMetadataViewer` UI
- Ensure all `recall` functions work (well, the ones that are active anyways)
Metadata for the Linear UI is now sneakily provided via a `MetadataAccumulator` node, which the client populates / hooks up while building the graph.
Additionally, we provide the unexpanded graph with the metadata API response.
Both of these are embedded into the PNGs.
- Remove `metadata` from `ImageDTO`
- Split up the `images/` routes to accommodate this; metadata is only retrieved per-image
- `images/{image_name}` now gets the DTO
- `images/{image_name}/metadata` gets the new metadata
- `images/{image_name}/full` gets the full-sized image file
- Remove old metadata service
- Add `MetadataAccumulator` node, `CoreMetadataField`, hook up to `LatentsToImage` node
- Add `get_raw()` method to `ItemStorage`, retrieves the row from DB as a string, no pydantic parsing
- Update `images`-related services to handle storing and retrieving the new metadata
- Add `get_metadata_graph_from_raw_session` which extracts the `graph` from `session` without needing to hydrate the session in pydantic, in preparation for providing it as metadata; also removes all references to the `MetadataAccumulator` node
Our model fields use `model_name`, but the API response uses `name`. Some places use `model_type` but the API response used `type`.
Changed the API response to provide `model_name` and `model_type`, which simplifies how we manage models on the client substantially.
- rewrite Dockerfile
- add a stage to build the UI
- add docker-compose.yml
- add docker-entrypoint.sh such that any command may be used at runtime
- docker-compose adds .env support - add a sample .env file
* fix the test of the config system
* Add torchmetrics==0.11.4 to installer
- Closes #3700
- Closes #3658
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
To be consistent with max_cache_size, the amount of memory to hold in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
[feat(ui): memoize ImageContextMenu
selector](265996d230)
Without the selector itself being memoized, the gallery was rerendering
on every progress image.
[feat(ui): memoize NextPrevImageButtons
component](a7b8109ac2)
This was rerendering on every progress image, now it doesn't
[fix(ui): correctly set disabled on invoke button during
generation](1c45d18e6d)
It wasn't disabled when it should have been, looked clickable during
generation.
[fix(nodes): remove board_id column from images
table](00e26ffa9a)
This is extraneous; the `board_images` table holds image-board
relationships. @maryhipp
Image files are immutable and we expect deletion to result in no further requests for a given image, so we can set the max-age to something thicc.
Resolves #3426
Just a small thing now, as nodes are all still WIP, but since
@psychedelicious was nice enough to add the progress image node for me,
what I noticed was missing now is the cancel button on the nodes tab.
@psychedelicious @blessedcoolant Somehow I deleted the branch the other
version of this pull request was on. 🤭
Just an idea; if you think it's worthwhile please make changes (I did
what I could).
I added a "load more" action to the right arrow to avoid having to open the
gallery to load more images.
I am not sure about the icon I used; maybe it should just be the normal
arrow, so you don't even need to show that it's loading more images.
There is an issue with it not disappearing once all images have been
loaded (I did play around for a while to try and fix that).
Some users want the model select to take the full width because their model
names might be long. As this is a more frequently used feature, it has been
rearranged to do that, followed by the VAE (as it is related to the model)
and the Sampler next to it.
I made a recent change to the function that finds the default root
directory location that broke it when run under Conda (where VIRTUAL_ENV
is not set). This revision fixes the issue.
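A minimal sketch of the fallback (the paths and environment variables are illustrative; the real function lives in the config module):
```py
import os
from pathlib import Path


def default_root_location() -> Path:
    # Under Conda, VIRTUAL_ENV is not set, so it can't be assumed to exist.
    venv = os.environ.get("VIRTUAL_ENV")
    if venv:
        return Path(venv).parent.resolve()
    return Path(os.environ.get("INVOKEAI_ROOT", "~/invokeai")).expanduser()
```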
Mantine's multiselect does not let you edit the search box with the mouse, paste into it, etc. A normal select is fine.
I can't remember why I made LoRA etc. multiselects, but everything seems to work with normal selects, so I've changed to that.
- `isLoading` - now `true` *only* on first load
- added `isFetching` - `true` whenever gallery images are fetching
- on first load, show a spinner instead of skeletons. this prevents an awkward flash of skeletons into empty gallery when the gallery doesn't have enough images to fill it.
- removed `imageCategoriesChanged` listener, bc now on app start, both images and assets will be populated. leaving this in caused jank flashes of skeletons when switching gallery tabs when gallery doesn't have images to load
taking the coward's way out on this and just fetching 100 images & 100 assets on app start...
- add `appStarted` action, dispatched once on mount in App.tsx. listener fetches 100 images & 100 assets
- fix bug with selectedBoardId & assets tab
The shift key listener didn't catch presses when focused in a textarea
or input field, causing jank on slider number inputs.
Add keydown and keyup listeners to all such fields, which ensures that
the `shift` state is always correct.
Also add the action tracking it to `actionsDenylist` to not clutter up
devtools.
There was a prop on IAISlider to make the input component readonly - I
didn't know this existed and at some point used a component with that
prop as a template for other sliders, copying the flag over.
It's not actually used anywhere, so I removed the prop entirely,
enabling the number inputs everywhere.
I'm not sure if this was just my local install, but even after a fresh
`yarn install` my upload network request was failing because no file was
passed in. I don't think the `bodySerializer` part is getting run
This PR is to allow FP16 precision to work on Macs with MPS. In
addition, it centralizes the torch fixes/workarounds required for MPS
into a new backend utility `mps_fixes.py`. This is conditionally
imported in `api_app.py`/`cli_app.py`.
Many MANY thanks to @StAlKeR7779 for patiently working to debug and fix
these issues.
- No longer fail root directory probing if invokeai.yaml is missing
(test is now whether a `models/core` directory exists).
- Migrate script does not overwrite previously-installed models.
- Can run migrate script on an existing 2.3 version directory
with --from and --to pointing to same 2.3 root.
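For instance, running the migration in place on a single 2.3 root might look like this (the path is a placeholder):
```bash
invokeai-migrate3 --from ~/invokeai-2.3 --to ~/invokeai-2.3
```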
Clip Skip breaks when you supply a number greater than the number of
layers for the model type, so this is now capped on the frontend based
on the model:
- `sd-1` at 12
- `sd-2` at 24
- Will update later to whatever SDXL needs if it is different.
- Also fixes LoRA's breaking with Clip Skip.
My PR to fix an issue with the handling of formdata in `openapi-fetch` is released. This means we no longer need to patch the package (no patches at all now!).
This PR bumps its version and adds a transformer to our typegen script to handle typing binary form fields correctly as `Blob`.
Also regens types.
This PR adds the following API methods for managing models:
* list_models (GET)
* update_model (PATCH)
* import_model (POST)
* delete_model (DELETE)
* convert_model (PUT)
* merge_models (PUT)
* load images on gallery render
* wait for models to be loaded before you can invoke
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
This PR enables model manager importation of diffusers-style .bin LoRAs.
However, since there is no backend support for this type of LoRA yet,
attempts to use them will result in an unimplemented error.
It closes #3636 and #3637
The list models route should just be the base route path, and should use query parameters as opposed to path parameters (which cannot be optional)
Removed defaults for update model route - for the purposes of the API, we should always be explicit with this
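A minimal sketch of the intended shape (the route prefix and parameter names here are illustrative, not the exact API):
```python
from typing import Optional
from fastapi import APIRouter, Query

models_router = APIRouter(prefix="/api/v1/models")

# List models at the base route; filters are optional query parameters
# rather than path parameters (which cannot be optional).
@models_router.get("/")
async def list_models(
    base_model: Optional[str] = Query(default=None, description="Filter by base model"),
    model_type: Optional[str] = Query(default=None, description="Filter by model type"),
) -> dict:
    return {"models": []}  # placeholder body
```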
This PR fixes the migrate script so that it uses the same directory for
both the tokenizer and text encoder CLIP models. This will fix a crash
that occurred during checkpoint->diffusers conversions
This PR also removes the check for an existing models directory in the
target root directory when `invokeai-migrate3` is run.
* close modal when user clicks cancel
* close modal when delete image context cleared
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
A user discovered that 2.3 models whose symbolic names contain the "/"
character are not imported properly by the `migrate-models-3` script.
This fixes the issue by changing "/" to underscore at import time.
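A minimal sketch of the fix (the function name is illustrative):
```python
def sanitize_symbolic_name(name: str) -> str:
    # 2.3 model names may contain "/", which breaks the 3.0 import;
    # replace it with an underscore at import time.
    return name.replace("/", "_")
```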
- Accordions now may be opened or closed regardless of whether or not
their contents are enabled or active
- Accordions have a short text indicator alerting the user if their
contents are enabled, either a simple `Enabled` or, for accordions like
LoRA or ControlNet, `X Active` if any are active
https://github.com/invoke-ai/InvokeAI/assets/4822129/43db63bd-7ef3-43f2-8dad-59fc7200af2e
This caused a lot of re-rendering whenever the selection changed, which caused a huge performance hit. It also made changing the current image lag a bit.
Instead of providing an array of image names as a multi-select dnd payload, there is now no multi-select dnd payload at all - instead, the payload types are used by the `imageDropped` listener to pull the selection out of redux.
Now, the only big re-renders are when the selectionCount changes. In the future I'll figure out a good way to do image names as payload without incurring re-renders.
Every `GalleryImage` was rerendering any time the app rerendered bc the selector function itself was not memoized. This resulted in the memoization cache inside the selector constantly being reset.
Same for `BatchImage`.
Also updated memoization for a few other selectors.
Eg `useGetMainModelsQuery()`, `useGetLoRAModelsQuery()` instead of `useListModelsQuery({base_type})`.
Add specific adapters for each model type. Just more organised and easier to consume models now.
Also updated LoRA UI to use the model name.
This PR is to allow FP16 precision to work on Macs with MPS. In addition, it centralizes the torch fixes/workarounds
required for MPS into a new backend utility file `mps_fixes.py`. This is conditionally imported in `api_app.py`/`cli_app.py`.
Many MANY thanks to StAlKeR7779 for patiently working to debug and fix these issues.
This PR is for adjusting the unit tests in the `tests` directory so that
they no longer throw errors.
I've removed two tests that were obsoleted by the shift to latent nodes,
but `test_graph_execution_state.py` and `test_invoker.py` are throwing
this error:
```
TypeError: InvocationServices.__init__() missing 2 required positional arguments: 'boards' and 'board_images'
```
The `invokeai-configure` script migrates the old invokeai.init file to
the new invokeai.yaml format. However, the parser for the invokeai.init
file was missing the names of the k* samplers and was giving a parser
error on any invokeai.init file that referred to one of these samplers.
This PR fixes the problem.
Ironically, there is no longer the concept of the preferred scheduler in
3.0, and so these sampler names are simply ignored and not written into
`invokeai.yaml`
This introduces the core functionality for batch operations on images and multiple selection in the gallery/batch manager.
A number of other substantial changes are included:
- `imagesSlice` is consolidated into `gallerySlice`, allowing for simpler selection of filtered images
- `batchSlice` is added to manage the batch
- The wonky context pattern for image deletion has been changed; it's much simpler now, using an `imageDeletionSlice` and redux listeners. This still needs to be implemented for the other image modals
- Minimum gallery size in px implemented as a hook
- Many style fixes & several bug fixes
TODO:
- The UI and UX need to be figured out, especially for controlnet
- Batch processing is not hooked up; generation does not do anything with batch
- Routes to support batch image operations, specifically delete and add/remove to/from boards
@blessedcoolant It looks like with the new theme buttons not being
transparent, the progress bar was completely hidden. I moved it to be on
top; however, it was not transparent, so it hid the invoke text. After
trying for a while I couldn't get it to be transparent, so I just made the
height 15%.
Basically updated all slices to be more descriptive in their names. Did so in order to make sure there's a good naming scheme available for secondary models.
Updated the text-to-image and image-to-image graphs to work with the new model loader. Currently only supports 1.x models. Will update this soon to make it work with all models.
(Replace `v3.0.0` with the current release number if this document is out of date).
The first command will install and upgrade new software to run
InvokeAI. The second will prepare the 2.3 directory for use with 3.0.
You may now launch the WebUI in the usual way, by selecting option [1]
from the launcher script.
#### Migration Caveats
The migration script will migrate your invokeai settings and models,
including textual inversion models, LoRAs and merges that you may have
installed previously. However, it does **not** migrate the generated
images stored in your 2.3-format outputs directory. You will need to
manually import selected images into the 3.0 gallery via drag-and-drop.
## Hardware Requirements
You will need one of the following:
- An NVIDIA-based graphics card with 4 GB or more VRAM memory. 6-8 GB of
  VRAM is highly recommended for rendering using the Stable Diffusion XL
  models.
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4 GB or more VRAM memory (Linux only);
  6-8 GB is recommended for XL rendering.
We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Advanced Prompt Syntax*
Invoke AI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.
### *Node Architecture & Editor (Beta)*
Invoke AI's backend is built on a graph-based execution architecture. This allows for customizable generation pipelines to be developed by professional users looking to create specific workflows to support their production use-cases, and will be extended in the future with additional capabilities.
### *Command Line Interface*
For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.
### *Board & Gallery Management*
Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1, XL support*
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Node-Based Architecture*
- *Node-Based Plug-&-Play UI (Beta)*
- *Boards & Gallery Management*
### Latest Changes
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
Welcome to InvokeAI!
### Contributors
This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
All commands are to be run from the `docker` directory: `cd docker`
#### Linux
1. Ensure BuildKit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`); see the example after this list.
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04).
- The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
3. Ensure docker daemon is able to access the GPU.
- You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
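For reference, enabling BuildKit in `/etc/docker/daemon.json` typically looks like this (restart the Docker daemon afterwards):
```json
{
  "features": {
    "buildkit": true
  }
}
```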
#### macOS
1. Ensure Docker has at least 16GB RAM
2. Enable VirtioFS for file sharing
3. Enable `docker compose` V2 support
This is done via Docker Desktop preferences
## Quickstart
1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` on Mac/Linux, or `copy env.sample .env` on Windows). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. `docker compose up`
The image will be built automatically if needed.
The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
### Use a GPU
- Linux is *recommended* for GPU support in Docker.
- WSL2 is *required* for Windows.
- only `x86_64` architecture is supported.
The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
## Customize
Check the `env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
Example (most values are optional):
```
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
```
## Even Moar Customizing!
See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.
### Reconfigure the runtime directory
This can be used to download additional models from the supported model list.
In conjunction with `INVOKEAI_ROOT`, it can also be used to initialize a runtime directory.
Stable Diffusion distribution by InvokeAI: https://github.com/invoke-ai
The Docker image tracks the `main` branch of the InvokeAI project, which means it includes the latest features, but may contain some bugs.
Your working directory is mounted under the `/workspace` path inside the pod. The models are in `/workspace/invokeai/models`, and outputs are in `/workspace/invokeai/outputs`.
> **Only the /workspace directory will persist between pod restarts!**
> **If you _terminate_ (not just _stop_) the pod, the /workspace will be lost.**
## Quickstart
1. Launch a pod from this template. **It will take about 5-10 minutes to run through the initial setup**. Be patient.
1. Wait for the application to load.
- TIP: you know it's ready when the CPU usage goes idle
- You can also check the logs for a line that says "_Point your browser at..._"
1. Open the Invoke AI web UI: click the `Connect` => `connect over HTTP` button.
1. Generate some art!
## Other things you can do
At any point you may edit the pod configuration and set an arbitrary Docker command. For example, you could run a command to download some models using `curl`, or fetch some images and place them into your outputs to continue a working session.
If you need to run *multiple commands*, define them in the Docker Command field like this:
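For example, commands can be chained through a shell; the download URL and the startup command below are placeholders, not the actual template values:
```bash
bash -c "curl -L -o /workspace/extra-model.safetensors https://example.com/model.safetensors && invokeai-web"
```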
This image includes a couple of handy tools to help you get the data into the pod (such as your custom models or embeddings), and out of the pod (such as downloading your outputs). Here are your options for getting your data in and out of the pod:
- **SSH server**:
1. Make sure to create and set your Public Key in the RunPod settings (follow the official instructions)
1. Add an exposed port 22 (TCP) in the pod settings!
1. When your pod restarts, you will see a new entry in the `Connect` dialog. Use this SSH server to `scp` or `sftp` your files as necessary, or SSH into the pod using the fully fledged SSH server.
1. On your computer, `pip install magic-wormhole` (see above instructions for details)
1. Connect to the command line **using the "light" SSH client** or the browser-based console. _Currently there's a bug where `wormhole` isn't available when connected to "full" SSH server, as described above_.
1. `wormhole send /workspace/invokeai/outputs` will send the entire `outputs` directory. You can also send individual files.
1. Once packaged, you will see a `wormhole receive <123-some-words>` command. Copy it
1. Paste this command into the terminal on your local machine to securely download the payload.
1. It works the same in reverse: you can `wormhole send` some models from your computer to the pod. Again, save your files somewhere in `/workspace` or they will be lost when the pod is stopped.
- **RunPod's Cloud Sync feature** may be used to sync the persistent volume to cloud storage. You could, for example, copy the entire `/workspace` to S3, add some custom models to it, and copy it back from S3 when launching new pod configurations. Follow the Cloud Sync instructions.
### Disable the NSFW checker
The NSFW checker is enabled by default. To disable it, edit the pod configuration and set the following command:
We're thrilled to have you here and we're excited for you to contribute.
Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
Here are some guidelines to help you get started:
### Technical Prerequisites
## Contributing to Invoke AI
Anyone who wishes to contribute to InvokeAI, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation is very much encouraged to do so.
Front-end: You'll need a working knowledge of React and TypeScript.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
Back-end: Depending on the scope of your contribution, you may need to know SQLite, FastAPI, Python, and Socketio. Also, a good majority of the backend logic involved in processing images is built in a modular way using a concept called "Nodes", which are isolated functions that carry out individual, discrete operations. This design allows for easy contributions of novel pipelines and capabilities.
### Areas of contribution:
### How to Submit Contributions
#### Development
If you’d like to help with development, please see our [development guide](contribution_guides/development.md). If you’re unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
To start contributing, please follow these steps:
#### Documentation
If you’d like to help with documentation, please see our [documentation guide](contribution_guides/documentation.md).
1. Familiarize yourself with our roadmap and open projects to see where your skills and interests align. These documents can serve as a source of inspiration.
2. Open a Pull Request (PR) with a clear description of the feature you're adding or the problem you're solving. Make sure your contribution aligns with the project's vision.
3. Adhere to general best practices. This includes assuming interoperability with other nodes, keeping the scope of your functions as small as possible, and organizing your code according to our architecture documents.
#### Translation
If you'd like to help with translation, please see our [translation guide](contribution_guides/translation.md).
### Types of Contributions We're Looking For
#### Tutorials
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
We welcome all contributions that improve the project. Right now, we're especially looking for:
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community.
1. Quality of life (QOL) enhancements on the front-end.
2. New backend capabilities added through nodes.
3. Incorporating additional optimizations from the broader open-source software community.
### Communication and Decision-making Process
### Contributors
Project maintainers and code owners review PRs to ensure they align with the project's goals. They may provide design or architectural guidance, suggestions on user experience, or provide more significant feedback on the contribution itself. Expect to receive feedback on your submissions, and don't hesitate to ask questions or propose changes.
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
For more robust discussions, or if you're planning to add capabilities not currently listed on our roadmap, please reach out to us on our Discord server. That way, we can ensure your proposed contribution aligns with the project's direction before you start writing code.
### Code of Conduct
### Code of Conduct and Contribution Expectations
We want everyone in our community to have a positive experience. To facilitate this, we've established a code of conduct and a statement of values that we expect all contributors to adhere to. Please take a moment to review these documents—they're essential to maintaining a respectful and inclusive environment.
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
By making a contribution to this project, you certify that:
This disclaimer is not a license and does not grant any rights or permissions.
This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
### Support
For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy).
Original portions of the software are Copyright (c) 2023 by respective contributors.
---
Remember, your contributions help make this project great. We're excited to see what you'll bring to our community!
| Name | `image` | The variable that will hold our image |
| Type Hint | `Union[ImageField, None]` | The types for our field. Indicates that the image can either be an `ImageField` type or `None` |
| Field | `Field(description="The input image", default=None)` | The image variable is a field which needs a description and a default value that we set to `None`. |
Great. Now let us create our other inputs for `width` and `height`
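For example, the new fields might be declared like this; `BaseInvocation` and `ImageField` are the types referenced earlier in this guide, and the default dimensions are just illustrative:
```python
from typing import Literal, Union
from pydantic import Field

class ResizeInvocation(BaseInvocation):
    """Resizes an image"""
    type: Literal["resize"] = "resize"

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    width: int = Field(default=512, description="The width to resize to")
    height: int = Field(default=512, description="The height to resize to")
```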
| type_hints | `Dict[str, Literal["integer", "float", "boolean", "string", "enum", "image", "latents", "model", "control"]]` | `type_hint: "model"` provides type hints related to the model like displaying a list of available models |
| tags | `List[str]` | `tags: ['resize', 'image']` will classify your invocation under the tags of resize and image. |
| title | `str` | `title: 'Resize Image'` will rename your invocation to this custom title rather than inferring it from the name of the Invocation class. |
So let us update your `ResizeInvocation` with some extra configuration.
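A rough sketch of what that might look like is below; the exact hook (`InvocationConfig`, `schema_extra`, and the `ui` dictionary) is an assumption based on the fields listed in the table above, so check the current invocation base classes for the authoritative form:
```python
class ResizeInvocation(BaseInvocation):
    ...

    # Assumed configuration hook: exposes a custom title and tags to the UI.
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "title": "Resize Image",
                "tags": ["resize", "image"],
            },
        }
```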
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
For more information, please review our area specific documentation:
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md) or [translation](translation.md).
There are two paths to making a development contribution:
1. Choosing an open issue to address. Open issues can be found in the [Issues](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen) section of the InvokeAI repository. These are tagged by the issue type (bug, enhancement, etc.) along with the “good first issues” tag denoting if they are suitable for first time contributors.
    - Additional items can be found on our [roadmap](https://github.com/orgs/invoke-ai/projects/7). The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item you’d like to help with, reach out to the contributor assigned to the item to see how you can help.
2. Opening a new issue or feature to add. **Please make sure you have searched through existing issues before creating new ones.**
*Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors time and effort and want to ensure that no one’s time is being misspent.*
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python and Typescript’s typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **How do I make a contribution?**
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are set up for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
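For example (substituting your own GitHub username):
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```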
If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add insert-paths-of-changed-files-here
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using
```bash
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository.
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title a pull request like so: "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you’d like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.
## **What does the Code of Conduct mean for me?**
Our [Code of Conduct](CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do the best to ensure that the abuser is reprimanded appropriately, per our code.
The UI is a fairly straightforward Typescript React app, with the Unified Canvas being more complex.
Code is located in `invokeai/frontend/web/` for review.
## Stack
State management is Redux via [Redux Toolkit](https://github.com/reduxjs/redux-toolkit). We lean heavily on RTK:
- `createAsyncThunk` for HTTP requests
- `createEntityAdapter` for fetching images and models
- `createListenerMiddleware` for workflows
The API client and associated types are generated from the OpenAPI schema. See API_CLIENT.md.
Communication with server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help).
[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & [Mantine](https://github.com/mantinedev/mantine) for components and styling.
[Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling.
[Vite](https://vitejs.dev/) for bundling.
Localisation is via [i18next](https://github.com/i18next/react-i18next), but translation happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. Only the English source strings should be changed on this repo.
## Contributing
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
**Setup**
1. Install [node](https://nodejs.org/en/download/). You can confirm node is installed with:
```bash
node --version
```
2. Install [yarn classic](https://classic.yarnpkg.com/lang/en/) and confirm it is installed by running this:
```bash
npm install --global yarn
yarn --version
```
From `invokeai/frontend/web/` run `yarn install` to get everything set up.
Start everything in dev mode:
1. Ensure your virtual environment is running
2. Start the dev server: `yarn dev`
3. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
4. Point your browser to the dev server address, e.g. [http://localhost:5173/](http://localhost:5173/)
### VSCode Remote Dev
We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out. Suggest disabling the IDE's port forwarding feature and doing it manually via SSH:
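For example, assuming the Vite dev server is on port 5173 and the InvokeAI backend on port 9090 (its default), a manual tunnel from your local machine might look like this (user and host are placeholders):
```bash
ssh -L 5173:localhost:5173 -L 9090:localhost:9090 user@remote-dev-host
```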
Documentation is an important part of any open source project. It provides a clear and concise way to communicate how the software works, how to use it, and how to troubleshoot issues. Without proper documentation, it can be difficult for users to understand the purpose and functionality of the project.
## Contributing
All documentation is maintained in the InvokeAI GitHub repository. If you come across documentation that is out of date or incorrect, please submit a pull request with the necessary changes.
When updating or creating documentation, please keep in mind InvokeAI is a tool for everyone, not just those who have familiarity with generative art.
## Help & Questions
Please ping @imic or @hipsterusername in the [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.
InvokeAI uses [Weblate](https://weblate.org/) for translation. Weblate is a FOSS project providing a scalable translation service. Weblate automates the tedious parts of managing translation of a growing project, and the service is generously provided at no cost to FOSS projects like InvokeAI.
## Contributing
If you'd like to contribute by adding or updating a translation, please visit our [Weblate project](https://hosted.weblate.org/engage/invokeai/). You'll need to sign in with your GitHub account (a number of other accounts are supported, including Google).
Once signed in, select a language and then the Web UI component. From here you can Browse and Translate strings from English to your chosen language. Zen mode offers a simpler translation experience.
Your changes will be attributed to you in the automated PR process; you don't need to do anything else.
## Help & Questions
Please check Weblate's [documentation](https://docs.weblate.org/en/latest/index.html) or ping @Harvestor on [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.
## Thanks
Thanks to the InvokeAI community for their efforts to translate the project!
Tutorials help new & existing users expand their ability to use InvokeAI to the full extent of our features and services.
Currently, we have a set of tutorials available on our [YouTube channel](https://www.youtube.com/@invokeai), but as InvokeAI continues to evolve with new updates, we want to ensure that we are giving our users the resources they need to succeed.
Tutorials can be in the form of videos or article walkthroughs on a subject of your choice. We recommend focusing tutorials on the key image generation methods, or on a specific component within one of the image generation methods.
## Contributing
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
# :material-library-shelves: Textual Inversions and LoRAs
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
## Using Textual Inversion Files
and artistic styles. They are also known as "embeds" in the machine learning
world.
Each TI file introduces one or more vocabulary terms to the SD model. These are
known in InvokeAI as "triggers." Triggers are denoted using angle brackets
as in "<trigger-phrase>". The two most common types of
TI files that you'll encounter are `.pt` and `.bin` files, which are produced by
different TI training packages. InvokeAI supports both formats, but its
[built-in TI training system](TRAINING.md) produces `.pt`.
The [Hugging Face company](https://huggingface.co/sd-concepts-library) has
amassed a large library of >800 community-contributed TI files covering a
broad range of subjects and styles. You can also install your own or others' TI files
by placing them in the designated directory for the compatible model type.
### An Example
The configuration settings are divided into several distinct
groups in `invokeai.yaml`:
### Web Server
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
The documentation for InvokeAI's API can be accessed by browsing to the following URL: [http://localhost:9090/docs](http://localhost:9090/docs).
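For example, a `Web Server` group in `invokeai.yaml` might look like this (the values shown are simply the defaults from the table above, included for illustration):
```
Web Server:
  host: localhost
  port: 9090
  allow_origins: []
```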
### Features
These configuration settings allow you to enable and disable various InvokeAI features:
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `esrgan` | `true` | Activate the ESRGAN upscaling options|
| `internet_available` | `true` | When a resource is not available locally, try to fetch it via the internet |
| `log_tokenization` | `false` | Before each text2image generation, print a color-coded representation of the prompt to the console; this can help understand why a prompt is not working as expected |
| `patchmatch` | `true` | Activate the "patchmatch" algorithm for improved inpainting |
| `restore` | `true` | Activate the facial restoration features (DEPRECATED; restoration features will be removed in 3.0.0) |
### Memory/Performance
These options tune InvokeAI's memory and performance characteristics.
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `always_use_cpu` | `false` | Use the CPU to generate images, even if a GPU is available |
| `free_gpu_mem` | `false` | Aggressively free up GPU memory after each operation; this will allow you to run in low-VRAM environments with some performance penalties |
| `max_cache_size` | `6` | Amount of CPU RAM (in GB) to reserve for caching models in memory; more cache allows you to keep models in memory and switch among them quickly |
| `max_vram_cache_size` | `2.75` | Amount of GPU VRAM (in GB) to reserve for caching models in VRAM; more cache speeds up generation but reduces the size of the images that can be generated. This can be set to zero to maximize the amount of memory available for generation. |
| `precision` | `auto` | Floating point precision. One of `auto`, `float16` or `float32`. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system |
| `sequential_guidance` | `false` | Calculate guidance in serial rather than in parallel, lowering memory requirements at the cost of some performance loss |
| `xformers_enabled` | `true` | If the x-formers memory-efficient attention module is installed, activate it for better memory usage and generation speed|
| `tiled_decode` | `false` | If true, then during the VAE decoding phase the image will be decoded a section at a time, reducing memory consumption at the cost of a performance hit |
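As with the other groups, these can be set in `invokeai.yaml`; for example (the values shown are just the defaults from the table above):
```
Memory/Performance:
  max_cache_size: 6
  max_vram_cache_size: 2.75
  precision: auto
  xformers_enabled: true
```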
### Paths
These options set the paths of various directories and files used by
InvokeAI. Relative paths are interpreted relative to INVOKEAI_ROOT, so
if INVOKEAI_ROOT is `/home/fred/invokeai` and the path is
`autoimport/main`, then the corresponding directory will be located at
`/home/fred/invokeai/autoimport/main`.
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `autoimport_dir` | `autoimport/main` | At startup time, read and import any main model files found in this directory |
| `lora_dir` | `autoimport/lora` | At startup time, read and import any LoRA/LyCORIS models found in this directory |
| `embedding_dir` | `autoimport/embedding` | At startup time, read and import any textual inversion (embedding) models found in this directory |
| `controlnet_dir` | `autoimport/controlnet` | At startup time, read and import any ControlNet models found in this directory |
| `conf_path` | `configs/models.yaml` | Location of the `models.yaml` model configuration file |
| `models_dir` | `models` | Location of the directory containing models installed by InvokeAI's model manager |
| `legacy_conf_dir` | `configs/stable-diffusion` | Location of the directory containing the .yaml configuration files for legacy checkpoint models |
| `db_dir` | `databases` | Location of the directory containing InvokeAI's image, schema and session database |
| `outdir` | `outputs` | Location of the directory in which the gallery of generated and uploaded images will be stored |
| `use_memory_db` | `false` | Keep database information in memory rather than on disk; this will not preserve image gallery information across restarts |
Note that the autoimport directories will be searched recursively,
allowing you to organize the models into folders and subfolders in any
way you wish. In addition, while we have split up autoimport
directories by the type of model they contain, this isn't
necessary. You can combine different model types in the same folder
and InvokeAI will figure out what they are. So you can easily use just
one autoimport directory by commenting out the unneeded paths:
```
Paths:
autoimport_dir: autoimport
# lora_dir: null
# embedding_dir: null
# controlnet_dir: null
```
### Logging
These settings control the information, warning, and debugging
messages printed to the console log while InvokeAI is running:
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `log_handlers` | `console` | This controls where log messages are sent, and can be a list of one or more destinations. Values include `console`, `file`, `syslog` and `http`. These are described in more detail below |
| `log_format` | `color` | This controls the formatting of the log messages. Values are `plain`, `color`, `legacy` and `syslog` |
| `log_level` | `debug` | This filters messages according to the level of severity and can be one of `debug`, `info`, `warning`, `error` and `critical`. For example, setting to `warning` will display all messages at the warning level or higher, but won't display "debug" or "info" messages |
Several different log handler destinations are available, and multiple destinations are supported by providing a list:
```
log_handlers:
- console
- syslog=localhost
- file=/var/log/invokeai.log
```
* `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
* `syslog` is only available on Linux and Macintosh systems. It uses
the operating system's "syslog" facility to write log file entries
locally or to a remote logging machine. `syslog` offers a variety
of configuration options:
```
syslog=/dev/log      - log to the /dev/log device
syslog=localhost     - log to the network logger running on the local machine
syslog=localhost:512 - same as above, but using a non-standard port
```
ControlNet is a powerful set of features developed by the open-source community (notably, Stanford researcher [**@ilyasviel**](https://github.com/lllyasviel)) that allows you to apply a secondary neural network model to your image generation process in Invoke.
With ControlNet, you can get more control over the output of your image generation, providing you with a way to direct the network towards generating images that better fit your desired style or outcome.
### How it works
ControlNet works by analyzing an input image, pre-processing that image to identify relevant information that can be interpreted by each specific ControlNet model, and then inserting that control information into the generation process. This can be used to adjust the style, composition, or other aspects of the image to better achieve a specific result.
### Models
As part of the model installation, ControlNet models can be selected including a variety of pre-trained models that have been added to achieve different effects or styles in your generated images. Further ControlNet models may require additional code functionality to also be incorporated into Invoke's Invocations folder. You should expect to follow any installation instructions for ControlNet models loaded outside the default models provided by Invoke. The default models include:
InvokeAI provides access to a series of ControlNet models that provide
different effects or styles in your generated images. Currently
InvokeAI only supports "diffuser" style ControlNet models. These are
folders that contain the files `config.json` and/or
`diffusion_pytorch_model.safetensors` and
`diffusion_pytorch_model.fp16.safetensors`. The name of the folder is
the name of the model.
***InvokeAI does not currently support checkpoint-format
ControlNets. These come in the form of a single file with the
extension `.safetensors`.***
Diffuser-style ControlNet models are available at HuggingFace
(http://huggingface.co) and accessed via their repo IDs (identifiers
in the format "author/modelname"). The easiest way to install them is
to use the InvokeAI model installer application. Use the
`invoke.sh`/`invoke.bat` launcher to select item [5] and then navigate
to the CONTROLNETS section. Select the models you wish to install and
press "APPLY CHANGES". You may also enter additional HuggingFace
*The node editor is experimental. We've made it accessible because we use it to develop the application, but we have not addressed the many known rough edges. It's very easy to shoot yourself in the foot, and we cannot offer support for it until it sees full release (ETA v3.1). Everything is subject to change without warning.*
🚨
The nodes editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. The node processing flow is usually done from left (inputs) to right (outputs), though linearity can become abstracted the more complex the node graph becomes. Node inputs and outputs are connected by dragging connectors from node to node.
To better understand how nodes are used, think of how an electric power bar works. It takes in one input (electricity from a wall outlet) and passes it to multiple devices through multiple outputs. Similarly, a node could have multiple inputs and outputs functioning at the same (or different) time, but all node outputs pass information onward like a power bar passes electricity. Not all outputs are compatible with all inputs, however - Each node has different constraints on how it is expecting to input/output information. In general, node outputs are colour-coded to match compatible inputs of other nodes.
## Anatomy of a Node
Individual nodes are made up of the following:
- Inputs: Edge points on the left side of the node window where you connect outputs from other nodes.
- Outputs: Edge points on the right side of the node window where you connect to inputs on other nodes.
- Options: Various options which are either manually configured, or overridden by connecting an output from another node to the input.
## Diffusion Overview
Taking the time to understand the diffusion process will help you to understand how to set up your nodes in the nodes editor.
There are two main spaces Stable Diffusion works in: image space and latent space.
Image space represents images in pixel form that you look at. Latent space represents compressed inputs. It’s in latent space that Stable Diffusion processes images. A VAE (Variational Auto Encoder) is responsible for compressing and encoding inputs into latent space, as well as decoding outputs back into image space.
When you generate an image using text-to-image, multiple steps occur in latent space:
1. Random noise is generated at the chosen height and width. The noise’s characteristics are dictated by the chosen (or not chosen) seed. This noise tensor is passed into latent space. We’ll call this noise A.
1. Using a model’s U-Net, a noise predictor examines noise A, and the words tokenized by CLIP from your prompt (conditioning). It generates its own noise tensor to predict what the final image might look like in latent space. We’ll call this noise B.
1. Noise B is subtracted from noise A in an attempt to create a final latent image indicative of the inputs. This step is repeated for the number of sampler steps chosen.
1. The VAE decodes the final latent image from latent space into image space.
image-to-image is a similar process, with only step 1 being different:
1. The input image is decoded from image space into latent space by the VAE. Noise is then added to the input latent image. Denoising Strength dictates how much noise is added, 0 being none, and 1 being all-encompassing. We’ll call this noise A. The process is then the same as steps 2-4 in the text-to-image explanation above.
Furthermore, a model provides the CLIP prompt tokenizer, the VAE, and a U-Net (where noise prediction occurs given a prompt and initial noise tensor).
A noise scheduler (eg. DPM++ 2M Karras) schedules the subtraction of noise from the latent image across the sampler steps chosen (step 3 above). Less noise is usually subtracted at higher sampler steps.
| CannyImageProcessor | Canny edge detection for ControlNet |
| ClipSkip | Skip layers in clip text_encoder model |
| Collect | Collects values into a collection |
| Prompt (Compel) | Parse prompt using compel package to conditioning |
| ContentShuffleImageProcessor | Applies content shuffle processing to image |
| ControlNet | Collects ControlNet info to pass to other nodes |
| CvInpaint | Simple inpaint using opencv |
| Divide | Divides two numbers |
| DynamicPrompt | Parses a prompt using adieyal/dynamic prompt's random or combinatorial generator |
| FloatLinearRange | Creates a range |
| HedImageProcessor | Applies HED edge detection to image |
| ImageBlur | Blurs an image |
| ImageChannel | Gets a channel from an image |
| ImageCollection | Load a collection of images and provide it as output |
| ImageConvert | Converts an image to a different mode |
| ImageCrop | Crops an image to a specified box. The box can be outside of the image. |
| ImageInverseLerp | Inverse linear interpolation of all pixels of an image |
| ImageLerp | Linear interpolation of all pixels of an image |
| ImageMultiply | Multiplies two images together using `PIL.ImageChops.Multiply()` |
| ImageNSFWBlurInvocation | Detects and blurs images that may contain sexually explicit content |
| ImagePaste | Pastes an image into another image |
| ImageProcessor | Base class for invocations that reprocess images for ControlNet |
| ImageResize | Resizes an image to specific dimensions |
| ImageScale | Scales an image by a factor |
| ImageToLatents | Encodes an image into latents |
| ImageWatermarkInvocation | Adds an invisible watermark to images |
| InfillColor | Infills transparent areas of an image with a solid color |
| InfillPatchMatch | Infills transparent areas of an image using the PatchMatch algorithm |
| InfillTile | Infills transparent areas of an image with tiles of the image |
| Inpaint | Generates an image using inpaint |
| Iterate | Iterates over a list of items |
| LatentsToImage | Generates an image from latents |
| LatentsToLatents | Generates latents using latents as base image |
| LeresImageProcessor | Applies leres processing to image |
| LineartAnimeImageProcessor | Applies line art anime processing to image |
| LineartImageProcessor | Applies line art processing to image |
| LoadImage | Load an image and provide it as output |
| Lora Loader | Apply selected lora to unet and text_encoder |
| Model Loader | Loads a main model, outputting its submodels |
| MaskFromAlpha | Extracts the alpha channel of an image as a mask |
| MediapipeFaceProcessor | Applies mediapipe face processing to image |
| MidasDepthImageProcessor | Applies Midas depth processing to image |
| MlsdImageProcessor | Applies MLSD processing to image |
| Multiply | Multiplies two numbers |
| Noise | Generates latent noise |
| NormalbaeImageProcessor | Applies NormalBAE processing to image |
| OpenposeImageProcessor | Applies Openpose processing to image |
| ParamFloat | A float parameter |
| ParamInt | An integer parameter |
| PidiImageProcessor | Applies PIDI processing to an image |
| Progress Image | Displays the progress image in the Node Editor |
| RandomInt | Outputs a single random integer |
| RandomRange | Creates a collection of random numbers |
| Range | Creates a range of numbers from start to stop with step |
| RangeOfSize | Creates a range from start to start + size with step |
| ResizeLatents | Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8. |
| RestoreFace | Restores faces in the image |
| ScaleLatents | Scales latents by a given factor |
| SegmentAnythingProcessor | Applies segment anything processing to image |
| ShowImage | Displays a provided image, and passes it forward in the pipeline |
| StepParamEasing | Experimental per-step parameter for easing for denoising steps |
| Subtract | Subtracts two numbers |
| TextToLatents | Generates latents from conditionings |
| TileResampleProcessor | Base class for invocations that preprocess images for ControlNet |
| Upscale | Upscales an image |
| VAE Loader | Loads a VAE model, outputting a VaeLoaderOutput |
| ZoeDepthImageProcessor | Applies Zoe depth processing to image |
## Node Grouping Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
### Noise
As described, an initial noise tensor is necessary for the latent diffusion process. As a result, all non-image *ToLatents nodes require a noise node input.

### Conditioning
As described, conditioning is necessary for the latent diffusion process, whether empty or not. As a result, all non-image *ToLatents nodes require positive and negative conditioning inputs. Conditioning is reliant on a CLIP tokenizer provided by the Model Loader node.
The ImageToLatents node doesn't require a noise node input, but requires a VAE input to convert the image from image space into latent space. In reverse, the LatentsToImage node requires a VAE input to convert from latent space back into image space.

### Defined & Random Seeds
It is common to want to use both the same seed (for continuity) and random seeds (for variance). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.
### Control
Control means to guide the diffusion process to adhere to a defined input or structure. Control can be provided as input to non-image *ToLatents nodes from ControlNet nodes. ControlNet nodes usually require an image processor which converts an input image for use with ControlNet.
### LoRA
The Lora Loader node lets you load a LoRA (say that ten times fast) and pass it as output to both the Prompt (Compel) and non-image *ToLatents nodes. A model's CLIP tokenizer is passed through the LoRA into Prompt (Compel), where it affects conditioning. A model's U-Net is also passed through the LoRA into a non-image *ToLatents node, where it affects noise prediction.

### Scaling
Use the ImageScale, ScaleLatents, and Upscale nodes to upscale images and/or latent images. The chosen method differs across contexts. However, be aware that latents are already noisy and compressed at their original resolution; scaling an image could produce more detailed results.
### Iteration
Iteration is a common concept in any processing, and means to repeat a process with given input. In nodes, you're able to use the Iterate node to iterate through collections usually gathered by the Collect node. The Iterate node has many potential uses, from processing a collection of images one after another, to varying seeds across multiple image generations and more. This screenshot demonstrates how to collect several images and pass them out one at a time.
Multiple image generation in the node editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection.
To control seeds across generations takes some care. The first row in the screenshot will generate multiple images with different seeds, but using the same RandomRange parameters across invocations will result in the same group of random seeds being used across the images, producing repeatable results. In the second row, adding the RandomInt node as input to RandomRange's 'Seed' edge point will ensure that seeds are varied across all images across invocations, producing varied results.
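The behaviour is easier to see with an ordinary seeded pseudo-random generator. The sketch below uses Python's `random` module (a stand-in, not InvokeAI's RandomRange code) to show why identical parameters reproduce the same batch of seeds, while feeding in a fresh random seed varies them:

```python
import random

def random_range(low: int, high: int, size: int, seed: int) -> list[int]:
    # Stand-in for the RandomRange node: a seeded collection of integers.
    rng = random.Random(seed)
    return [rng.randint(low, high) for _ in range(size)]

# Same parameters (including the seed) on every invocation -> same batch of seeds,
# so the same group of images is produced each time the graph is invoked.
print(random_range(0, 2**31, 4, seed=1234))
print(random_range(0, 2**31, 4, seed=1234))  # identical to the line above

# Feeding a fresh random integer (the RandomInt node) into the seed input
# varies the batch on every invocation instead.
print(random_range(0, 2**31, 4, seed=random.randint(0, 2**31)))
```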
## Examples
With our knowledge of node grouping and the diffusion process, let’s break down some basic graphs in the nodes editor. Note that a node's options can be overridden by inputs from other nodes. These examples aren't strict rules to follow and only demonstrate some basic configurations.
### Basic text-to-image Node Graph

- Model Loader: A necessity to generating images (as we’ve read above). We choose our model from the dropdown. It outputs a U-Net, CLIP tokenizer, and VAE.
- Prompt (Compel): Another necessity. Two prompt nodes are created. One will output positive conditioning (what you want, ‘dog’), one will output negative (what you don’t want, ‘cat’). They both input the CLIP tokenizer that the Model Loader node outputs.
- Noise: Consider this noise A from step one of the text-to-image explanation above. Choose a seed number, width, and height.
- TextToLatents: This node takes many inputs for converting and processing text & noise from image space into latent space, hence the name TextTo**Latents**. In this setup, it inputs positive and negative conditioning from the prompt nodes for processing (step 2 above). It inputs noise from the noise node for processing (steps 2 & 3 above). Lastly, it inputs a U-Net from the Model Loader node for processing (step 2 above). It outputs latents for use in the next LatentsToImage node. Choose number of sampler steps, CFG scale, and scheduler.
- LatentsToImage: This node takes in processed latents from the TextToLatents node, and the model’s VAE from the Model Loader node which is responsible for decoding latents back into the image space, hence the name LatentsTo**Image**. This node is the last stop, and once the image is decoded, it is saved to the gallery.
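Outside the node editor, the same stages (model, positive/negative prompts, seeded noise, denoising steps, VAE decode) look roughly like this with the Hugging Face `diffusers` high-level pipeline. This is only an illustration of the flow, not how InvokeAI's nodes are implemented; the model ID is just an example and the output file name is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Model Loader + Prompt (Compel) + Noise + TextToLatents + LatentsToImage,
# collapsed into a single call: the pipeline encodes the prompts, seeds the
# noise, runs the denoising loop, and decodes the latents with the VAE.
image = pipe(
    prompt="dog",
    negative_prompt="cat",
    width=512,
    height=512,
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
image.save("text_to_image.png")
```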
### Basic image-to-image Node Graph

- Model Loader: Choose a model from the dropdown.
- Prompt (Compel): Two prompt nodes. One positive (dog), one negative (cat). Same CLIP inputs from the Model Loader node as before.
- ImageToLatents: Upload a source image directly in the node window, via drag'n'drop from the gallery, or pass it in as input. The ImageToLatents node inputs the VAE from the Model Loader node to encode the chosen image from image space into latent space, hence the name ImageTo**Latents**. It outputs latents for use in the next LatentsToLatents node. It also outputs the source image's width and height for use in the next Noise node if the final image is to be the same dimensions as the source image.
- Noise: A noise tensor is created with the width and height of the source image, and connected to the next LatentsToLatents node. Notice the width and height fields are overridden by the input from the ImageToLatents width and height outputs.
- LatentsToLatents: The inputs and options are nearly identical to TextToLatents, except that LatentsToLatents also takes latents as an input. Considering our source image is already converted to latents in the last ImageToLatents node, and text + noise are no longer the only inputs to process, we use the LatentsToLatents node.
- LatentsToImage: Like previously, the LatentsToImage node will use the VAE from the Model Loader as input to decode the latents from LatentsToLatents into image space, and save it to the gallery.
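For comparison, here is a hedged `diffusers` sketch of the same image-to-image flow; `source.png` is a placeholder for your initial image, and `strength` plays the role of how much of the encoded source latents survive the denoising process.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The initial image plays the role of the ImageToLatents input.
init_image = load_image("source.png").convert("RGB")
image = pipe(
    prompt="dog",
    negative_prompt="cat",
    image=init_image,
    strength=0.6,
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
image.save("image_to_image.png")
```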
### Basic ControlNet Node Graph

- Model Loader
- Prompt (Compel)
- Noise: Width and height of the CannyImageProcessor ControlNet image are passed in to set the dimensions of the noise passed to TextToLatents.
- CannyImageProcessor: The CannyImageProcessor node is used to process the source image being used as a ControlNet. Each ControlNet processor node applies control in different ways, and has some different options to configure. Width and height are passed to noise, as mentioned. The processed ControlNet image is output to the ControlNet node.
- ControlNet: Select the type of control model. In this case, canny is chosen as the CannyImageProcessor was used to generate the ControlNet image. Configure the control node options, and pass the control output to TextToLatents.
- TextToLatents: Similar to the basic text-to-image example, except ControlNet is passed to the control input edge point.
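Again purely as an illustration (using `diffusers` and OpenCV rather than InvokeAI's nodes, with placeholder file names), the same Canny-edge ControlNet flow looks roughly like this:

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image
from PIL import Image

# Canny edge map: the rough analogue of the CannyImageProcessor node.
source = np.array(load_image("control_source.png"))
edges = cv2.Canny(source, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The processed edge map is passed as the `image` argument, mirroring how the
# ControlNet image feeds the control input of TextToLatents in the node graph.
image = pipe(
    prompt="dog",
    negative_prompt="cat",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("controlnet.png")
```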
@ -301,5 +301,48 @@ summoning up the concept of some sort of scifi creature? Let's find out.
Indeed, removing the word "hybrid" produces an image that is more like what we'd
expect.
In conclusion, prompt blending is great for exploring creative space,
but takes some trial and error to achieve the desired effect.
## Dynamic Prompts
Dynamic Prompts are a powerful feature designed to produce a variety of prompts based on user-defined options. Using a special syntax, you can construct a prompt with multiple possibilities, and the system will automatically generate a series of permutations based on your settings. This is extremely beneficial for ideation, exploring various scenarios, or testing different concepts swiftly and efficiently.
### Structure of a Dynamic Prompt
A Dynamic Prompt consists of regular text, supplemented with alternatives enclosed within curly braces {} and separated by a vertical bar |. For example: {option1|option2|option3}. The system will then select one of the options to include in the final prompt. This flexible system allows for options to be placed throughout the text as needed.
Furthermore, Dynamic Prompts can designate multiple selections from a single group of options. This feature is triggered by prefixing the options with a numerical value followed by $$. For example, in {2$$option1|option2|option3}, the system will select two distinct options from the set.
### Creating Dynamic Prompts
To create a Dynamic Prompt, follow these steps:
1. Draft your sentence or phrase, identifying words or phrases with multiple possible options.
2. Encapsulate the different options within curly braces {}.
3. Within the braces, separate each option using a vertical bar |.
4. If you want to include multiple options from a single group, prefix with the desired number and $$.
For instance: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {2$$style1|style2|style3}.
### How Dynamic Prompts Work
Once a Dynamic Prompt is configured, the system generates an array of combinations using the options provided. Each group of options in curly braces is treated independently, with the system selecting one option from each group. For a prefixed set (e.g., 2$$), the system will select two distinct options.
For example, the following prompts could be generated from the above Dynamic Prompt:
- A house in summer designed in style1, style2
- A lodge in autumn designed in style3, style1
- A cottage in winter designed in style2, style3
And many more!
When the `Combinatorial` setting is on, Invoke will disable the "Images" selection, and generate every combination up until the setting for Max Prompts is reached.
When the `Combinatorial` setting is off, Invoke will randomly generate combinations up until the setting for Images has been reached.
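The selection logic itself is small. The sketch below is plain Python, not the dynamic-prompts library InvokeAI actually uses; it resolves each `{...|...}` group, honours an optional `N$$` prefix, and generates a handful of random combinations the way the non-combinatorial mode does:

```python
import random
import re

GROUP = re.compile(r"\{([^{}]+)\}")

def expand(prompt: str, rng: random.Random) -> str:
    """Resolve each {a|b|c} group; an N$$ prefix picks N distinct options."""
    def pick(match: re.Match) -> str:
        body = match.group(1)
        count = 1
        if "$$" in body:
            prefix, body = body.split("$$", 1)
            count = int(prefix)
        options = body.split("|")
        return ", ".join(rng.sample(options, count))
    return GROUP.sub(pick, prompt)

rng = random.Random(0)
template = "A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {2$$style1|style2|style3}"
for _ in range(3):  # "Images" setting: number of random combinations to generate
    print(expand(template, rng))
```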
### Tips and Tricks for Using Dynamic Prompts
Below are some useful strategies for creating Dynamic Prompts:
- Utilize Dynamic Prompts to generate a wide spectrum of prompts, perfect for brainstorming and exploring diverse ideas.
- Ensure that the options within a group are contextually relevant to the part of the sentence where they are used. For instance, group building types together, and seasons together.
- Apply the 2$$ prefix when you want to incorporate more than one option from a single group. This becomes quite handy when mixing and matching different elements.
- Experiment with different quantities for the prefix. For example, 3$$ will select three distinct options.
- Be aware of coherence in your prompts. Although the system can generate all possible combinations, not all may semantically make sense. Therefore, carefully choose the options for each group.
- Always review and fine-tune the generated prompts as needed. While Dynamic Prompts can help you generate a multitude of combinations, the final polishing and refining remain in your hands.
You may personalize the generated images to provide your own styles or objects
@ -258,16 +259,6 @@ invokeai-ti \
--only_save_embeds
```
## Using Embeddings
After training completes, the resultant embeddings will be saved into your `$INVOKEAI_ROOT/embeddings/<trigger word>/learned_embeds.bin`.
These will be automatically loaded when you start InvokeAI.
Add the trigger word, surrounded by angle brackets, to use that embedding. For example, if your trigger word was `terence`, use `<terence>` in prompts. This is the same syntax used by the HuggingFace concepts library.
**Note:** `.pt` embeddings do not require the angle brackets.
## Troubleshooting
### `Cannot load embedding for <trigger>. It was trained on a model with token dimension 1024, but the current model has token dimension 768`
| Argument | Effect |
|----------|--------|
| `--host HOST` | Web server: Host or IP to listen on. Set to 0.0.0.0 to accept traffic from other devices on your network. |
| `--port PORT` | Web server: Port to listen on |
| `--certfile CERTFILE` | Web server: Path to certificate file to use for SSL. Use together with --keyfile |
| `--keyfile KEYFILE` | Web server: Path to private key file to use for SSL. Use together with --certfile |
| `--gui` | Start InvokeAI GUI - This is the "desktop mode" version of the web app. It uses Flask to create a desktop app experience of the webserver. |
Scroll down the control panel until you get to the LoRA accordion
New to image generation with AI? You’re in the right place!
This is a high level walkthrough of some of the concepts and terms you’ll see as you start using InvokeAI. Please note, this is not an exhaustive guide and may be out of date due to the rapidly changing nature of the space.
## Using InvokeAI
### **Prompt Crafting**
- Prompts are the basis of using InvokeAI, providing the models directions on what to generate. As a general rule of thumb, the more detailed your prompt is, the better your result will be.
*To get started, here’s an easy template to use for structuring your prompts:*
- Subject, Style, Quality, Aesthetic
- **Subject:** What your image will be about. E.g. “a futuristic city with trains”, “penguins floating on icebergs”, “friends sharing beers”
- **Style:** The style or medium your image will be in. E.g. “photograph”, “pencil sketch”, “oil paints”, or “pop art”, “cubism”, “abstract”
- **Quality:** A particular aspect or trait that you would like to see emphasized in your image. E.g. "award-winning", "featured in {relevant set of high quality works}", "professionally acclaimed". Many people often use "masterpiece".
- **Aesthetics:** The visual impact and design of the artwork. This can be colors, mood, lighting, setting, etc.
- There are two prompt boxes: *Positive Prompt* & *Negative Prompt*.
- A **Positive** Prompt includes words you want the model to reference when creating an image.
- A **Negative** Prompt is for anything you want the model to eliminate when creating an image. It doesn’t always interpret things exactly the way you would, but it helps control the generation process. Always try to include a few terms - you can typically use lower-quality image terms like “blurry” or “distorted” with good success.
- Some example prompts you can try on your own:
- A detailed oil painting of a tranquil forest at sunset with vibrant+ colors and soft, golden light filtering through the trees
- friends sharing beers in a busy city, realistic colored pencil sketch, twilight, masterpiece, bright, lively
### Generation Workflows
- Invoke offers a number of different workflows for interacting with models to produce images. Each is extremely powerful on its own, but together provide you an unparalleled way of producing high quality creative outputs that align with your vision.
- **Text to Image:** The text to image tab focuses on the key workflow of using a prompt to generate a new image. It includes other features that help control the generation process as well.
- **Image to Image:** With image to image, you provide an image as a reference (called the “initial image”), which provides more guidance around color and structure to the AI as it generates a new image. This is provided alongside the same features as Text to Image.
- **Unified Canvas:** The Unified Canvas is an advanced AI-first image editing tool that is easy to use, but hard to master. Drag an image onto the canvas from your gallery in order to regenerate certain elements, edit content or colors (known as inpainting), or extend the image with an exceptional degree of consistency and clarity (called outpainting).
### Improving Image Quality
- Fine tuning your prompt - the more specific you are, the closer the image will turn out to what is in your head! Adding more details in the Positive Prompt or Negative Prompt can help add / remove pieces of your image to improve it - You can also use advanced techniques like upweighting and downweighting to control the influence of certain words. [Learn more here](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#prompt-syntax-features).
- **Tip: If you’re seeing poor results, try adding the things you don’t like about the image to your negative prompt. E.g. distorted, low quality, unrealistic, etc.**
- Explore different models - Other models can produce different results due to the data they’ve been trained on. Each model has specific language and settings it works best with; a model’s documentation is your friend here. Play around with some and see what works best for you!
- Increasing Steps - The number of steps used controls how much time the model is given to produce an image, and depends on the “Scheduler” used. The scheduler controls how each step is processed by the model. More steps tend to mean better results, but take longer - we recommend at least 30 steps for most generations.
- Tweak and Iterate - Remember, it’s best to change one thing at a time so you know what is working and what isn't. Sometimes you just need to try a new image, and other times using a new prompt might be the ticket. For testing, consider turning off the “random” Seed - Using the same seed with the same settings will produce the same image, which makes it the perfect way to learn exactly what your changes are doing.
- Explore Advanced Settings - InvokeAI has a full suite of tools available to allow you complete control over your image creation process - Check out our [docs if you want to learn more](https://invoke-ai.github.io/InvokeAI/features/).
## Terms & Concepts
If you're interested in learning more, check out [this presentation](https://docs.google.com/presentation/d/1IO78i8oEXFTZ5peuHHYkVF-Y3e2M6iM5tCnc-YBfcCM/edit?usp=sharing) from one of our maintainers (@lstein).
### Stable Diffusion
Stable Diffusion is a deep-learning, text-to-image model that is the foundation of the capabilities found in InvokeAI. Since the release of Stable Diffusion, many subsequent models have been created based on it that are designed to generate specific types of images.
### Prompts
Prompts provide the models directions on what to generate. As a general rule of thumb, the more detailed your prompt is, the better your result will be.
### Models
Models are the magic that power InvokeAI. These files represent the output of training a machine on understanding massive amounts of images - providing them with the capability to generate new images using just a text description of what you’d like to see. (Like Stable Diffusion!)
Invoke offers a simple way to download several different models upon installation, but many more can be discovered online, including at ****. Each model can produce a unique style of output, based on the images it was trained on - Try out different models to see which best fits your creative vision!
- *Models that contain “inpainting” in the name are designed for use with the inpainting feature of the Unified Canvas*
### Scheduler
Schedulers guide the process of removing noise (de-noising) from data. They determine:
1. The number of steps to take to remove the noise.
2. Whether the steps are random (stochastic) or predictable (deterministic).
3. The specific method (algorithm) used for de-noising.
Experimenting with different schedulers is recommended as each will produce different outputs!
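If you are curious what “choosing a scheduler” looks like in code, here is a rough illustration using the Hugging Face `diffusers` library (not InvokeAI's internal model manager): the scheduler object is swapped while the model weights stay the same.

```python
import torch
from diffusers import DPMSolverMultistepScheduler, EulerDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap the de-noising algorithm without touching the model weights.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
image_a = pipe("a tranquil forest at sunset", num_inference_steps=30).images[0]

# A different scheduler may reach comparable quality in fewer steps.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
image_b = pipe("a tranquil forest at sunset", num_inference_steps=20).images[0]
```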
### Steps
The number of de-noising steps run during each generation.
Schedulers can be intricate and there's often a balance to strike between how quickly they can de-noise data and how well they can do it. It's typically advised to experiment with different schedulers to see which one gives the best results. There has been a lot written on the internet about different schedulers, as well as exploring what the right level of "steps" are for each. You can save generation time by reducing the number of steps used, but you'll want to make sure that you are satisfied with the quality of images produced!
### Low-Rank Adaptations / LoRAs
Low-Rank Adaptations (LoRAs) are like a smaller, more focused version of models, intended to focus on training a better understanding of how a specific character, style, or concept looks.
### Textual Inversion Embeddings
Textual Inversion Embeddings, like LoRAs, assist with more easily prompting for certain characters, styles, or concepts. However, embeddings are trained to update the relationship between a specific word (known as the “trigger”) and the intended output.
### ControlNet
ControlNets are neural network models that are able to extract key features from an existing image and use these features to guide the output of the image generation model.
### VAE
A variational auto-encoder (VAE) is an encode/decode model that translates the “latents” image produced during the image generation process into the full-size pixel images that we see.
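As a rough sketch (using the `diffusers` `AutoencoderKL` class rather than InvokeAI's own code, and a random latent purely as a stand-in for the output of the denoising loop), decoding is a single pass through the VAE's decoder:

```python
import torch
from diffusers import AutoencoderKL
from PIL import Image

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

# A 1x4x64x64 latent corresponds to a 512x512 RGB image.
latents = torch.randn(1, 4, 64, 64)
with torch.no_grad():
    # 0.18215 is the latent scaling factor used by Stable Diffusion 1.x.
    decoded = vae.decode(latents / 0.18215).sample

# Map the decoder's [-1, 1] output back to an 8-bit image.
array = ((decoded[0].permute(1, 2, 0) * 0.5 + 0.5).clamp(0, 1).numpy() * 255).astype("uint8")
Image.fromarray(array).save("decoded.png")
```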
This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates, as they help us diagnose issues and respond faster.
@ -43,24 +43,7 @@ InvokeAI comes with support for a good set of starter models. You'll
find them listed in the master models file
`configs/INITIAL_MODELS.yaml` in the InvokeAI root directory. The
subset that are currently installed are found in
`configs/models.yaml`. As of v2.3.1, the list of starter models is:
|Model Name | HuggingFace Repo ID | Description | URL |
|---------- | ---------- | ----------- | --- |
|stable-diffusion-1.5|runwayml/stable-diffusion-v1-5|Stable Diffusion version 1.5 diffusers model (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-v1-5 |
|sd-inpainting-1.5|runwayml/stable-diffusion-inpainting|RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-inpainting |
|stable-diffusion-2.1|stabilityai/stable-diffusion-2-1|Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-1 |
|sd-inpainting-2.0|stabilityai/stable-diffusion-2-inpainting|Stable Diffusion version 2.0 inpainting model (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-inpainting |
|analog-diffusion-1.0|wavymulder/Analog-Diffusion|An SD-1.5 model trained on diverse analog photographs (2.13 GB)|https://huggingface.co/wavymulder/Analog-Diffusion |
|deliberate-1.0|XpucT/Deliberate|Versatile model that produces detailed images up to 768px (4.27 GB)|https://huggingface.co/XpucT/Deliberate |
|dreamlike-photoreal-2.0|dreamlike-art/dreamlike-photoreal-2.0|A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)|https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0 |
|inkpunk-1.0|Envvi/Inkpunk-Diffusion|Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)|https://huggingface.co/Envvi/Inkpunk-Diffusion |
|openjourney-4.0|prompthero/openjourney|An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)|https://huggingface.co/prompthero/openjourney |
|portrait-plus-1.0|wavymulder/portraitplus|An SD-1.5 model trained on close range portraits of people; prompt with "portrait+" (2.13 GB)|https://huggingface.co/wavymulder/portraitplus |
|seek-art-mega-1.0|coreco/seek.art_MEGA|A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)|https://huggingface.co/coreco/seek.art_MEGA |
|trinart-2.0|naclbit/trinart_stable_diffusion_v2|An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)|https://huggingface.co/naclbit/trinart_stable_diffusion_v2 |
|waifu-diffusion-1.4|hakurei/waifu-diffusion|An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)|https://huggingface.co/hakurei/waifu-diffusion |
`configs/models.yaml`.
Note that these files are covered by an "Ethical AI" license which
forbids certain uses. When you initially download them, you are asked
@ -71,8 +54,7 @@ with the model terms by visiting the URLs in the table above.
## Community-Contributed Models
There are too many to list here and more are being contributed every day.
* `--model <modelname>` -- Start up with the indicated model loaded
* `--ckpt_convert` -- When a checkpoint/safetensors model is loaded, convert it into a `diffusers` model in memory. This does not permanently save the converted model to disk.
* `--autoconvert <path/to/directory>` -- Scan the indicated directory path for new checkpoint/safetensors files, convert them into `diffusers` models, and import them into InvokeAI.
Here is an example of providing an argument on the command line using
These are nodes that have been developed by the community, for the community. If you're not sure what a node is, you can learn more about nodes [here](overview.md).
If you'd like to submit a node for the community, please refer to the [node creation overview](./overview.md#contributing-nodes).
To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations/` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
To use a community node graph, download the `.json` node graph file and load it into Invoke AI via the **Load Nodes** button on the Node Editor.
## Disclaimer
The nodes linked below have been developed and contributed by members of the Invoke AI community. While we strive to ensure the quality and safety of these contributions, we do not guarantee the reliability or security of the nodes. If you have issues or concerns with any of the nodes below, please raise it on GitHub or in the Discord.
## List of Nodes
### FaceTools
**Description:** FaceTools is a collection of nodes created to manipulate faces as you would in Unified Canvas. It includes FaceMask, FaceOff, and FacePlace. FaceMask autodetects a face in the image using MediaPipe and creates a mask from it. FaceOff similarly detects a face, then takes the face off of the image by adding a square bounding box around it and cropping/scaling it. FacePlace puts the bounded face image from FaceOff back onto the original image. Using these nodes with other inpainting node(s), you can put new faces on existing things, put new things around existing faces, and work closer with a face as a bounded image. Additionally, you can supply X and Y offset values to scale/change the shape of the mask for finer control on FaceMask and FaceOff. See GitHub repository below for usage examples.
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
A Node is simply a single operation that takes in some inputs and gives
out some outputs. We can then chain multiple nodes together to create more
complex functionality. All InvokeAI features are added through nodes.
This means nodes can be used to easily extend the image generation capabilities of InvokeAI, and allow you to build workflows to suit your needs.
You can read more about nodes and the node editor [here](../features/NODES.md).
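To make the idea concrete without depending on InvokeAI's actual invocation base classes (which live in `invokeai/app/invocations` and change between versions), here is a toy sketch of “an operation with typed inputs and outputs” and of chaining two of them:

```python
from dataclasses import dataclass

@dataclass
class ResizeNode:
    """A toy node: takes a width/height, outputs a scaled width/height."""
    factor: float

    def invoke(self, width: int, height: int) -> tuple[int, int]:
        return int(width * self.factor), int(height * self.factor)

@dataclass
class NoiseNode:
    """A toy node: takes a width/height, outputs the shape of a latent noise tensor."""

    def invoke(self, width: int, height: int) -> tuple[int, int, int, int]:
        return (1, 4, height // 8, width // 8)

# Chaining nodes: the outputs of one node become the inputs of the next.
w, h = ResizeNode(factor=0.5).invoke(1024, 1024)
print(NoiseNode().invoke(w, h))  # (1, 4, 64, 64)
```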
## Downloading Nodes
To download a new node, visit our list of [Community Nodes](communityNodes.md). These are nodes that have been created by the community, for the community.
## Contributing Nodes
To learn about creating a new node, please visit our [Node creation documentation](../contributing/INVOCATIONS.md).
Once you’ve created a node and confirmed that it behaves as expected locally, follow these steps:
* Make sure the node is contained in a new Python (.py) file
* Submit a pull request with a link to your node in GitHub against the `nodes` branch to add the node to the [Community Nodes](communityNodes.md) list
* Make sure you are following the template below and have provided all relevant details about the node and what it does.
* A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you might be asked for permission to include it in the core project.
### Community Node Template
```markdown
--------------------------------
### Super Cool Node Template
**Description:** This node allows you to do super cool things with InvokeAI.
    yield Text.from_markup(
        "Some of the installation steps take a long time to run. Please be patient. If the script appears to hang for more than 10 minutes, please interrupt with [i]Control-C[/] and retry.",
        justify="center",
    )
# if the given destination already exists, the starting point for browsing is its parent directory.
# the user may have made a typo, or otherwise wants to place the root dir next to an existing one.
# if the destination dir does NOT exist, then the user must have changed their mind about the selection.
@ -165,6 +167,10 @@ def graphical_accelerator():
"an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
"cuda",
)
nvidia_with_dml=(
"an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA",
"cuda_and_dml",
)
amd=(
"an [gold1 b]AMD[/] GPU (using ROCm™)",
"rocm",
@ -179,7 +185,7 @@ def graphical_accelerator():
    )
    if OS == "Windows":
        options = [nvidia, nvidia_with_dml, cpu]
    if OS == "Linux":
        options = [nvidia, amd, cpu]
    elif OS == "Darwin":
@ -300,15 +306,20 @@ def introduction() -> None:
    )
    console.line(2)


def _platform_specific_help() -> str:
    if OS == "Darwin":
        text = Text.from_markup(
            """[b wheat1]macOS Users![/]\n\nPlease be sure you have the [b wheat1]Xcode command-line tools[/] installed before continuing.\nIf not, cancel with [i]Control-C[/] and follow the Xcode install instructions at [deep_sky_blue1]https://www.freecodecamp.org/news/install-xcode-command-line-tools/[/]."""
        )
    elif OS == "Windows":
        text = Text.from_markup(
            """[b wheat1]Windows Users![/]\n\nBefore you start, please do the following:
1. Double-click on the file [b wheat1]WinLongPathsEnabled.reg[/] in order to
   enable long path support on your system.
2. Make sure you have the [b wheat1]Visual C++ core libraries[/] installed. If not, install from
raise HTTPException(status_code=500, detail="Failed to remove images from board")