Compare commits

...

800 Commits

Author SHA1 Message Date
86c11f9e27 make session_list api return a raw dict rather than pydantic object 2023-08-17 22:22:06 -04:00
832335998f Update 'monkeypatched' controlnet class (#4269)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Should be removed when added in diffusers
https://github.com/huggingface/diffusers/pull/4599
2023-08-17 15:49:54 -04:00
1102c12084 Merge branch 'main' into fix/sdxl_controlnet 2023-08-17 15:40:51 -04:00
b5cee7d20c blackify chore 2023-08-17 15:40:15 -04:00
89b82b3dc4 (feat): Add Seam Painting to Canvas (1.x, 2.x & SDXL w/ Refiner) (#4292)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes
      
## Description

PR to add Seam Painting back to the Canvas.

## TODO Later

While the graph works as intended, it has become extremely large and
complex. There may be a simpler way to do this, but with so many
connections, visualizing the graph in my head is extremely difficult. We
might need to create some kind of tooling for this, because it's only
going to get crazier.

But it works well for now.
2023-08-17 21:24:39 +12:00
8923201fdf Merge branch 'main' into seam-painting 2023-08-17 21:21:44 +12:00
226409107b Fix for Image Deletion issue 2023-08-17 17:18:11 +10:00
ae986bf873 Report RAM usage and RAM cache statistics after each generation (#4287)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes

     
## Have you updated all relevant documentation?
- [X] Yes


## Description

This PR enhances the logging of performance statistics to include RAM
and model cache information. After each generation, the following will
be logged. The new information follows TOTAL GRAPH EXECUTION TIME.

```
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> Graph stats: 2408dbec-50d0-44a3-bbc4-427037e3f7d4
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> Node                 Calls    Seconds VRAM Used
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> main_model_loader        1     0.004s     0.000G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> clip_skip                1     0.002s     0.000G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> compel                   2     2.706s     0.246G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> rand_int                 1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> range_of_size            1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> iterate                  1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> metadata_accumulator     1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> noise                    1     0.003s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> denoise_latents          1     2.429s     2.022G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> l2i                      1     1.020s     1.858G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> TOTAL GRAPH EXECUTION TIME:    6.171s
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM used by InvokeAI process: 4.50G (delta=0.10G)
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM used to load models: 1.99G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> VRAM in use: 0.303G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM cache statistics:
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Model cache hits: 2
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Model cache misses: 5
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Models cached: 5
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Models cleared from cache: 0
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Cache high water mark: 1.99/7.50G    
```

There may be a memory leak in InvokeAI. I'm seeing the process memory
usage increasing by about 100 MB with each generation as shown in the
example above.
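
For reference, the per-process RAM numbers above can be sampled along
these lines; this is a minimal sketch assuming the `psutil` package, not
the actual InvokeAI implementation:

```python
import psutil

GB = 2**30
process = psutil.Process()  # the current process

ram_before = process.memory_info().rss  # resident set size, in bytes
# ... run one generation ...
ram_after = process.memory_info().rss
delta = ram_after - ram_before
# The "+" format flag prints the +/- sign in front of the delta.
print(f"RAM used by process: {ram_after / GB:.2f}G (delta={delta / GB:+.2f}G)")
```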
2023-08-17 16:10:18 +12:00
daf75a1361 blackify 2023-08-16 21:47:29 -04:00
fe4b2d53ed Merge branch 'feat/collect-more-stats' of github.com:invoke-ai/InvokeAI into feat/collect-more-stats 2023-08-16 21:39:29 -04:00
c39f8b478b fix misplaced ram_used and ram_changed attributes 2023-08-16 21:39:18 -04:00
1f82d8013e Merge branch 'main' into feat/collect-more-stats 2023-08-16 18:51:17 -04:00
e373bfca54 fix several broken links in the installation index 2023-08-16 17:54:39 -04:00
2ca8611723 add +/- sign in front of RAM delta 2023-08-16 15:53:01 -04:00
b12cf315a8 Merge branch 'main' into feat/collect-more-stats 2023-08-16 09:19:33 -04:00
975586bb40 Merge branch 'main' into seam-painting 2023-08-17 01:05:42 +12:00
a7ba142ad9 feat(ui): set min zoom on nodes to 0.1 2023-08-16 23:04:36 +10:00
0d36bab6cc fix(ui): do not rerender top panel buttons 2023-08-16 23:04:36 +10:00
c2e7f62701 fix(ui): do not rerender edges 2023-08-16 23:04:36 +10:00
1f194e3688 chore(ui): lint 2023-08-16 23:04:36 +10:00
f9b8b5cff2 fix(ui): improve node rendering performance
Previously the editor was prop-drilling node data and templates to get values deep into nodes. This caused very noticeable performance degradation; for example, any text entry field was super laggy.

Refactor the whole thing to use memoized selectors via hooks. The hooks are mostly very narrow, returning only the data needed.

Data objects are never passed down, only node id and field name - sometimes the field kind ('input' or 'output').

The end result is a *much* smoother node editor with very minimal rerenders.
2023-08-16 23:04:36 +10:00
f7c92e1eff fix(ui): disable awkward resize animation for <Flow /> 2023-08-16 23:04:36 +10:00
70b8c3dfea fix(ui): fix context menu on workflow editor
There is a tricky mouse event interaction between chakra's `useOutsideClick()` hook (used by chakra `<Menu />`) and reactflow. The hook doesn't work when you click the main reactflow area.

To get around this, I've used a dirty hack, copy-pasting the simple context menu component we use, and extending it slightly to respond to a global `contextMenusClosed` redux action.
2023-08-16 23:04:36 +10:00
43b30355e4 feat: make primitive node titles consistent 2023-08-16 23:04:36 +10:00
a93bd01353 fix bad merge 2023-08-16 08:53:07 -04:00
bb1b8ceaa8 Update invokeai/backend/model_management/model_cache.py
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
2023-08-16 08:48:44 -04:00
be8edaf3fd Merge branch 'main' into feat/collect-more-stats 2023-08-16 08:48:14 -04:00
9cbaefaa81 feat: Add Seam Painting to SDXL 2023-08-16 19:46:48 +12:00
cc7c6e5d41 feat: Add Seam Painting with Scale Before 2023-08-16 19:35:03 +12:00
f2ee8a3da8 wip: Basic Seam Painting (only normal models) (no scale) 2023-08-16 17:26:23 +12:00
e98d7a52d4 feat: Add Seam Painting Options 2023-08-16 17:25:55 +12:00
21e1c0a5f0 tweaked formatting 2023-08-15 22:25:30 -04:00
611e241ca7 chore(ui): regen types 2023-08-16 12:07:34 +10:00
6df4af2c79 chore: lint 2023-08-16 12:07:34 +10:00
0f8606914e feat(ui): remove shouldShowDeleteButton
- remove this state entirely
- use `state.hotkeys.shift` directly to hide and show the icon on gallery
- also formatting
2023-08-16 12:07:34 +10:00
5b1099193d fix(ui): restore reset button in node image component 2023-08-16 12:07:34 +10:00
230131646f feat(ui): use imageDTOs instead of images in starring queries 2023-08-16 12:07:34 +10:00
8b1ec2685f chore: black 2023-08-16 12:07:34 +10:00
60c2c877d7 fix: add response model for star/unstar routes
- also implement pessimistic updates for starring, only changing the images that were successfully updated by the backend (see the sketch below)
- some autoformat changes crept in
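
A hedged sketch of the pessimistic-update idea, assuming FastAPI; the
route path and model names here are illustrative, not necessarily the
real ones. The response lists only the images the backend actually
changed, and the client updates its cache from that list:

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter(prefix="/images")

class ImagesUpdatedResult(BaseModel):
    """Response model: only the image names that were actually updated."""
    updated_image_names: list[str]

@router.post("/star", response_model=ImagesUpdatedResult)
async def star_images(image_names: list[str]) -> ImagesUpdatedResult:
    updated: list[str] = []
    for name in image_names:
        try:
            # ... mark the image as starred in the database ...
            updated.append(name)
        except Exception:
            pass  # failed images are omitted; the client leaves them unstarred
    return ImagesUpdatedResult(updated_image_names=updated)
```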
2023-08-16 12:07:34 +10:00
315a056686 feat(ui): show Star All if selection is a mix of starred and unstarred 2023-08-16 12:07:34 +10:00
80b0c5eab4 change from pin to star 2023-08-16 12:07:34 +10:00
08dc265e09 add listener to update selection list with change in star status 2023-08-16 12:07:34 +10:00
029a95550e rename pin to star, add multiselect and remove single image update api 2023-08-16 12:07:34 +10:00
ee6a26a97d update list images endpoint to sort by pinnedness and then created_at 2023-08-16 12:07:34 +10:00
a512fdc0f6 update IAIDndImage to use children for icons, add UI for shift+delete to delete images from gallery 2023-08-16 12:07:34 +10:00
767a612746 (ui) WIP trying to get all cache scenarios working smoothly, fix assets 2023-08-16 12:07:34 +10:00
0a71d6baa1 (ui) update cache to render pinned images alongside unpinned correctly, as well as changes in pinnedness 2023-08-16 12:07:34 +10:00
37be827e17 (ui) hook up toggle pin mutation with context menu for single image 2023-08-16 12:07:34 +10:00
04a9894e77 (api) add ability to pin and unpin images 2023-08-16 12:07:34 +10:00
f9958de6be added memory used to load models 2023-08-15 21:56:19 -04:00
ec10aca91e report RAM and RAM cache statistics 2023-08-15 21:00:30 -04:00
2b7dd3e236 feat: add missing primitive collections
- add missing primitive collections
- remove `Seed` and `LoRAField` (they don't exist)
2023-08-16 09:54:38 +10:00
fa884134d9 feat: rename ui_type_hint to ui_type
Just a bit more succinct while not losing any clarity.
2023-08-16 09:54:38 +10:00
18006cab9a chore: Regen frontend types 2023-08-16 09:54:38 +10:00
75ea716c13 feat(ui): hide node footer if there is nothing to display 2023-08-16 09:54:38 +10:00
d5f7027597 feat: Save Mask option for Canvas 2023-08-16 09:54:38 +10:00
b1ad777f5a fix: Outpainting being broken due to field name change 2023-08-16 09:54:38 +10:00
f65c8092cb fix(ui): fix issue with node editor state not restoring correctly on mount
If `reactflow` initializes before the node templates are parsed, edges may not be rendered and the viewport may get reset.

- Add `isReady` state to `NodesState`. This is false when we are loading or parsing node templates and true when that is finished.
- Conditionally render `reactflow` based on `isReady`.
- Add `viewport` to `NodesState` & handlers to keep it synced. This allows `reactflow` to mount and unmount freely and not lose viewport.
2023-08-16 09:54:38 +10:00
94bfef3543 feat(ui): add UI component for unknown node types 2023-08-16 09:54:38 +10:00
c48fd9c083 feat(nodes): refactor parameter/primitive nodes
Refine concept of "parameter" nodes to "primitives":
- integer
- float
- string
- boolean
- image
- latents
- conditioning
- color

Each primitive has (see the sketch below):
- A field definition, if it is not already a Python primitive value. The field is how this primitive value is passed between nodes. Collections are lists of the field in node definitions. ex: `ImageField` & `list[ImageField]`
- A single output class. ex: `ImageOutput`
- A collection output class. ex: `ImageCollectionOutput`
- A node, which functions to load or pass on the primitive value. ex: `ImageInvocation` (in this case, `ImageInvocation` replaces `LoadImage`)

Plus a number of related changes:
- Reorganize these into `primitives.py`
- Update all nodes and logic to use primitives
- Consolidate "prompt" outputs into "string" & "mask" into "image" (there's no reason for these to be different, the function identically)
- Update default graphs & tests
- Regen frontend types & minor frontend tidy related to changes
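
A hedged sketch of that shape using plain pydantic models; these are
simplified stand-ins for the real invocation base classes, not the
actual API:

```python
from pydantic import BaseModel, Field

class ImageField(BaseModel):
    """The field: how an image value is passed between nodes."""
    image_name: str = Field(description="The name of the image")

class ImageOutput(BaseModel):
    """The single output class."""
    image: ImageField

class ImageCollectionOutput(BaseModel):
    """The collection output class."""
    collection: list[ImageField]

class ImageInvocation(BaseModel):
    """The node: loads or passes on the primitive value (replaces LoadImage)."""
    image: ImageField

    def invoke(self) -> ImageOutput:
        return ImageOutput(image=self.image)
```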
2023-08-16 09:54:38 +10:00
f49fc7fb55 feat: node editor
squashed rebase on main after backend refactor
2023-08-16 09:54:38 +10:00
a4b029d03c write RAM usage and change after each generation 2023-08-15 18:21:31 -04:00
d6c9bf5b38 added sdxl controlnet detection 2023-08-15 12:51:15 -04:00
4f82273fc4 Update 'monkeypatched' controlnet class 2023-08-15 11:07:43 -04:00
e54355f0f3 Prevent merge from crashing with a WindowsPath serialization error (#4271)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [X] Yes

## Description

On Windows systems, model merging was crashing at the very last step
with an error related to not being able to serialize a WindowsPath
object. I have converted the path that is passed to `save_pretrained`
into a string, which I believe will solve the problem.
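
The failure mode and the fix reduce to the following sketch (the path is
an example; a follow-up commit below swaps `str()` for `as_posix()` to
get portable forward slashes):

```python
import json
from pathlib import PureWindowsPath

path = PureWindowsPath("C:/invokeai/models/merged-model")  # example path
# json.dumps(path) raises TypeError: Object of type PureWindowsPath
# is not JSON serializable. Converting first avoids the crash:
json.dumps(str(path))        # backslash-separated Windows form
json.dumps(path.as_posix())  # forward-slash form, portable across platforms
```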

Note that I had to rebuild the web frontend and add it to the PR in
order to test on my Windows VM which does not have the full node stack
installed due to space limitations.

## Related Tickets & Documents


https://discord.com/channels/1020123559063990373/1042475531079262378/1140680788954861698
2023-08-15 15:11:01 +12:00
b2934be6ba use as_posix() instead of str() 2023-08-14 22:59:26 -04:00
eab67b6a01 fixed actual bug 2023-08-14 22:59:26 -04:00
02fa116690 rebuild frontend for windows testing 2023-08-14 22:59:26 -04:00
5190a4c282 further removal of Paths 2023-08-14 22:59:26 -04:00
141d438517 prevent windows from crashing with a WindowsPath serialization error on merge 2023-08-14 22:59:26 -04:00
549d2e0485 chore: remove old web server code and python deps 2023-08-15 10:54:57 +10:00
d3d8b71c67 feat: Change refinerStart default to 0.8
This is the recommended value according to the paper.
2023-08-15 10:13:02 +10:00
6eaaa75a5d Use double quotes in docker entrypoint to prevent word splitting (#4260)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it's smol

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
docker_entrypoint.sh does not quote its variable expansions, so word
splitting causes paths with spaces to fail, as in #3913.

## Related Tickets & Documents
#3913


- Related Issue #3913
- Closes #3913

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-15 02:15:22 +12:00
ba57ec5907 Merge branch 'main' into fix/docker_entrypoint 2023-08-14 09:26:32 -04:00
cd0e4bc1d7 Refactor generation backend (#4201)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [x] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
- Remove SDXL raw prompt nodes
- SDXL and SD1/2 generation merged into the same nodes (t2l/l2l)
- Fixed: if xformers is not installed, we were trying to enable attention slicing while ignoring torch-sdp availability
- Fixed: an empty SDXL negative prompt now creates a zeroed tensor, matching the official code (see the sketch after this list)
- Added mask field to l2l node
- Removed the inpaint node and all legacy code related to it
- Pass seed info in latents, so we can use it to initialize ancestral/sde schedulers
- t2l and l2l nodes moved from strength to denoising_start/end
- Removed the noise threshold code (@hipsterusername said there are no plans to restore this feature)
- Fixed: the first preview image is no longer gray
- Fixed: report the correct total step count in progress; added scheduler order to the progress event
- Added MaskEdge and ColorCorrect nodes (@hipsterusername)
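
A sketch of the zeroed-tensor fix mentioned in the list above, using
SDXL's usual embedding shapes (77 tokens, 2048-dim context, 1280-dim
pooled); the helper name is illustrative:

```python
import torch

def negative_conditioning(prompt: str, encode, batch: int = 1):
    # When the negative prompt is empty, return zero tensors of the usual
    # SDXL shapes instead of encoding "" through the text encoders.
    if prompt.strip() == "":
        prompt_embeds = torch.zeros(batch, 77, 2048)  # per-token embeddings
        pooled_embeds = torch.zeros(batch, 1280)      # pooled embedding
        return prompt_embeds, pooled_embeds
    return encode(prompt)
```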

## Added/updated tests?

- [ ] Yes
- [x] No
2023-08-13 23:08:11 -04:00
9d3cd85bdd chore: black 2023-08-14 13:02:33 +10:00
46a8eed33e Merge branch 'main' into feat/refactor_generation_backend 2023-08-14 13:01:28 +10:00
9fee3f7b66 Revert "Add magic to debug"
This reverts commit 511da59793.
2023-08-14 12:58:08 +10:00
9217a217d4 fix(ui): refiner uses steps directly, no math 2023-08-14 12:56:37 +10:00
b2700ffde4 Update post processing docs 2023-08-13 22:25:49 -04:00
511da59793 Add magic to debug 2023-08-14 05:14:24 +03:00
409e5d01ba Fix cpu_only schedulers(unipc) 2023-08-14 05:14:05 +03:00
58d5c61c79 fix: SDXL Inpaint & Outpaint using regular Img2Img strength 2023-08-14 12:55:18 +12:00
3d8da67be3 Remove callback-generator wrapper 2023-08-14 03:35:15 +03:00
957ee6d370 fix: SDXL Canvas Inpaint & Outpaint not respecting SDXL Refiner start value 2023-08-14 12:13:29 +12:00
fecad2c014 fix: SDXL Denoising Strength not plugged in correctly 2023-08-14 11:59:11 +12:00
550e6ef27a re: Set the image denoise str back to 0
The bug has been fixed, so this is no longer needed.
2023-08-14 10:27:07 +12:00
cc85c98bf3 feat: Upgrade Diffusers to 0.19.3
Needed for some schedulers
2023-08-14 09:26:28 +12:00
75fb3f429f re: Readd Refiner Step Math but cap max steps to 1000 2023-08-14 09:26:01 +12:00
d63bb39475 Make dpmpp_sde(_k) use a non-random seed 2023-08-14 00:24:38 +03:00
096333ba3f Fix error on zero timesteps 2023-08-14 00:20:01 +03:00
0b2925709c Use double quotes in docker entrypoint to prevent word splitting 2023-08-13 14:36:55 -05:00
7a8f14d595 Clean-up code a bit 2023-08-13 19:50:48 +03:00
59ba9fc0f6 Flip bits in seed for sde/ancestral schedulers to have different noise from initial 2023-08-13 19:50:16 +03:00
6e0beb1ed4 Fixes for second order scheduler timesteps 2023-08-13 19:31:47 +03:00
94636ddb03 Fix empty prompt handling 2023-08-13 19:31:14 +03:00
746e099f0d fix: Do not do step math for refinerSteps
This is probably better done on the backend or in a different way. This can cause the step count to go above 1000, which is more than the model's configured maximum.
2023-08-14 04:04:15 +12:00
499e89d6f6 feat: Add SDXL Negative Aesthetic Score 2023-08-14 04:02:36 +12:00
250d530260 Fixed import issue in invokeai/frontend/install/model_install.py (#4259)
This fixes an import issue introduced in commit 1bfe983. The change made
'invokeai_configure' into a module but this line still tries to call it
as if it's a function. This will result in a `'module' not callable`
error.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

imic from Discord asked that I submit a PR to fix this bug.

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-14 02:40:08 +12:00
90fa3eebb3 feat: Make SDXL Style Prompt not take spaces 2023-08-14 02:25:39 +12:00
0aba105a8f Missed a spot in configure_invokeai.py 2023-08-13 05:32:35 -07:00
9e2e82a752 Fixed import issue in invokeai/frontend/install/model_install.py
This fixes an import issue introduced in commit 1bfe983.
The change made 'invokeai_configure' into a module but this line still tries to call it as if it's a function. This will result in a `'module' not callable` error.
2023-08-13 05:15:55 -07:00
561951ad98 chore: Black linting 2023-08-13 21:28:39 +12:00
3ff9961bda fix: Circular dependency in Mask Blur Method 2023-08-13 21:26:20 +12:00
33779b6339 chore: Remove shouldFitToWidthHeight from Inpaint Graphs
Was never used for inpainting but was fed to the node anyway.
2023-08-13 21:16:37 +12:00
b35cdc05a5 feat: Scaled Processing to Inpainting & Outpainting / 1.x & SDXL 2023-08-13 20:17:23 +12:00
9afb5d6ace Update version to 3.0.2post1 2023-08-12 19:49:33 -04:00
50177b8ed9 Update frontend JS files 2023-08-12 19:49:33 -04:00
c8864e475b fix: SDXL Lora's not working on Canvas Image To Image 2023-08-13 04:34:15 +12:00
fcf7f4ac77 feat: Add SDXL ControlNet To Linear UI 2023-08-13 04:27:38 +12:00
29f1c6dc82 fix: Image To Image FP32 Fix for Canvas SDXL 2023-08-13 04:23:52 +12:00
28208e6f49 fix: Fix VAE Precision not working for SDXL Canvas Modes 2023-08-13 04:09:51 +12:00
c33acf951e feat: Make Refiner work with Canvas 2023-08-13 03:53:40 +12:00
500cd552bc feat: Make SDXL work across the board + Custom VAE Support
Also a major cleanup pass to the SDXL graphs to ensure there's no ID overlap
2023-08-13 01:45:03 +12:00
55d27f71a3 feat: Give each graph its own unique id 2023-08-13 00:51:10 +12:00
746c7c59ff fix: remove extra node for canvas output catch 2023-08-12 22:39:30 +12:00
ad96c41156 feat: Add Canvas Output node to all Canvas Graphs 2023-08-12 22:04:43 +12:00
27bd127fb0 fix: Do not add anything but final output to staging area 2023-08-12 21:10:30 +12:00
f296e5c41e wip: Remove MaskBlur / Adjust color correction 2023-08-12 20:54:30 +12:00
a67d8376c7 fix missed spot for autoAddBoardId none 2023-08-12 18:07:01 +10:00
9f6221fe8c Merge branch 'main' into feat/refactor_generation_backend 2023-08-12 18:37:47 +12:00
7587b54787 chore: Cleanup, comment and organize Node Graphs
Before it gets too chaotic
2023-08-12 17:17:46 +12:00
7254ffc3e7 chore: Split Inpaint and Outpaint Graphs 2023-08-12 16:30:20 +12:00
6034fa12de feat: Add Mask Blur node 2023-08-12 16:20:58 +12:00
ce3675fc14 Apply denoising_start/end according on timestep value 2023-08-12 03:19:49 +03:00
8acd7eeca5 feat: Disable clip skip for SDXL Canvas 2023-08-12 08:18:30 +12:00
7293a6036a feat(wip): Add SDXL To Canvas 2023-08-12 08:16:05 +12:00
0b11f309ca instead of crashing when a corrupted model is detected, warn and move on 2023-08-11 15:05:14 -04:00
6a8eb392b2 Add support for loading SDXL LoRA weights in diffusers format. 2023-08-11 14:40:22 -04:00
f343ab0302 wip: Port Outpainting to new backend 2023-08-12 06:15:59 +12:00
824ca92760 fix maximum python version instructions 2023-08-11 13:49:39 -04:00
d7d6298ec0 feat: Add Infill Method support 2023-08-12 05:32:11 +12:00
58a48bf197 fix: LoRA list name sorting 2023-08-12 04:47:15 +12:00
5629d8fa37 fix: Key issue in Lora List 2023-08-12 04:43:40 +12:00
1affb7f647 feat: Add Paste / Mask Blur / Color Correction to Inpainting
Seam options are now removed. They are replaced by two options, Mask Blur and Mask Blur Method, which control the softness of the mask that is being painted.
2023-08-12 03:28:19 +12:00
69a9dc7b36 wip: Add initial Inpaint Graph 2023-08-12 02:42:13 +12:00
f3ae52ff97 Fix error at high denoising_start, fix unipc(cpu_only) 2023-08-11 15:46:16 +03:00
7479f9cc02 feat: Update LinearUI to use new backend (except Inpaint) 2023-08-11 22:22:01 +12:00
87ce4ab27c fix: Update default_graph to use new DenoiseLatents 2023-08-11 22:21:13 +12:00
7c0023ad9e feat: Remove TextToLatents / Rename Latents To Latents -> DenoiseLatents 2023-08-11 22:20:37 +12:00
231e665675 Merge branch 'main' into feat/refactor_generation_backend 2023-08-11 20:53:38 +12:00
80fd4c2176 undo lint changes 2023-08-11 14:26:09 +10:00
3b6e425e17 fix error detail in toast 2023-08-11 14:26:09 +10:00
50415450d8 invalidate board total when images deleted, only run date range logic if board has less than 20 images 2023-08-11 14:26:09 +10:00
06296896a9 Update invokeai version 2023-08-10 22:23:41 -04:00
a7399aca0c Add new JS files for 3.0.2 build 2023-08-10 22:23:41 -04:00
d1ea8b1e98 Two changes to command-line scripts (#4235)
During install testing I discovered two small problems in the
command-line scripts. These are fixed.

## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes
      
## Have you updated all relevant documentation?
- [X] Yes


## Description

- installer - use correct entry point for invokeai-configure
- model merge script - prevent error when `--root` not provided
2023-08-10 21:11:45 -04:00
f851ad7ba0 Two changes to command-line scripts
- installer - use correct entry point for invokeai-configure
- model merge script - prevent error when `--root` not provided
2023-08-10 20:59:22 -04:00
591838a84b Add support for LyCORIS IA3 format (#4234)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Add support for LyCORIS IA3 format

## Related Tickets & Documents
- Closes #4229 

## Added/updated tests?

- [ ] Yes
- [x] No
2023-08-11 03:30:35 +03:00
c0c2ab3dcf Format by black 2023-08-11 03:20:56 +03:00
56023bc725 Add support for LyCORIS IA3 format 2023-08-11 02:08:08 +03:00
2ef6a8995b Temporarily force vae to the same precision as unet 2023-08-10 18:01:58 -04:00
d0fee93aac round slider values to nice numbers 2023-08-10 18:00:45 -04:00
1bfe9835cf clip cache settings to permissible values; remove redundant imports in install __init__ file 2023-08-10 18:00:45 -04:00
8e7eae6cc7 Probe LoRAs that do not have the text encoder (#4181)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] No - minor fix

      
## Have you updated all relevant documentation?
- [X] Yes

## Description

It turns out that some LoRAs do not have the text encoder model, and
this was causing the code that distinguishes the model base type during
model import to reject them as having an unknown base model. This PR
enables detection of these cases.
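
A hedged sketch of how a UNet-only LoRA can still be classified: the
cross-attention context width differs per base model (768 for SD 1.x,
1024 for SD 2.x, 2048 for SDXL), so the shape of a `to_k`
down-projection gives the base away. The key naming below follows the
common kohya convention and is an assumption, not InvokeAI's actual
probe code:

```python
import torch

# Text-conditioning context width, per base model.
CONTEXT_DIM_TO_BASE = {768: "sd-1", 1024: "sd-2", 2048: "sdxl"}

def probe_lora_base(state_dict: dict[str, torch.Tensor]) -> str | None:
    # lora_down.weight has shape (rank, in_features); for a cross-attention
    # to_k projection, in_features equals the context width.
    for key, tensor in state_dict.items():
        if "attn2" in key and "to_k" in key and key.endswith("lora_down.weight"):
            return CONTEXT_DIM_TO_BASE.get(tensor.shape[1])
    return None  # no cross-attention keys found; base model unknown
```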
2023-08-10 17:50:20 -04:00
f6522c8971 Merge branch 'main' into fix/detect-more-loras 2023-08-10 17:33:16 -04:00
a969707e45 prevent vae: '' from crashing model 2023-08-10 17:33:04 -04:00
6c8e898f09 Update scripts/verify_checkpoint_template.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-08-10 16:00:33 -04:00
7bad9bcf53 update dependencies and docs to cu118 2023-08-10 15:19:12 -04:00
d42b45116f fix(ui): fix lora sort (#4222)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      

## Description

was sorting with disabled LoRAs at the top of the list instead of the bottom

fixes #4217

## Related Tickets & Documents


- Related Issue #
- Closes #4217

## QA Instructions, Screenshots, Recordings


![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/dd895b86-05de-4303-8674-9b181037abaa)
2023-08-10 21:04:28 +12:00
d4812bbc8d Merge branch 'main' into fix/ui/fix-lora-sort 2023-08-10 19:00:26 +10:00
3cd05cf6bf fix(ui): fix lora sort
was sorting with disabled LoRAs at the top of the list instead of the bottom

fixes #4217
2023-08-10 15:31:29 +10:00
2564301aeb fix(ui): fix canvas model switching (#4221)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

There was no check at all to see if the canvas had a valid model already
selected. The first model in the list was selected every time.

Now, we check if it's valid. If not, we go through the logic to try to
pick the first valid model.

If there are no valid models, or there was a problem listing models, the
model selection is cleared.

## Related Tickets & Documents



- Closes #4125

## QA Instructions, Screenshots, Recordings


- Go to Canvas tab
- Select a model other than the first one in the list
- Go to a different tab
- Go back to Canvas tab
- The model should be the same as you selected
2023-08-10 17:29:41 +12:00
da0efeaa7f fix(ui): fix canvas model switching
There was no check at all to see if the canvas had a valid model already selected. The first model in the list was selected every time.

Now, we check if it's valid. If not, we go through the logic to try to pick the first valid model.

If there are no valid models, or there was a problem listing models, the model selection is cleared.
2023-08-10 15:20:37 +10:00
49cce1eec6 feat: add app_version to image metadata 2023-08-10 14:22:39 +10:00
e9ec5ab85c Apply requested changes
Co-Authored-By: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-08-10 06:19:22 +03:00
17fed1c870 Fix merge conflict errors 2023-08-10 05:03:33 +03:00
ade78b9591 Merge branch 'main' into feat/refactor_generation_backend 2023-08-10 04:32:16 +03:00
c8fbaf54b6 Add self.min, not self.max 2023-08-10 09:59:22 +10:00
f86d388786 refactor(diffusers_pipeline): remove unused pipeline methods 🚮 (#4175) 2023-08-09 15:19:27 -07:00
cd2c688562 Merge branch 'main' into refactor/remove_unused_pipeline_methods 2023-08-09 17:26:09 -04:00
2d29ac6f0d Add techjedi's image import script (#4171)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [X] Yes

## Description

This PR adds the `invokeai-import-images` script, which imports a
directory of 2.*.*-generated images into the current InvokeAI root
directory, preserving and converting their metadata. The script also
handles 3.* images.

Many thanks to @techjedi for writing this. This version differs from the
original in two minor respects:

1. It is installed as an `invokeai-import-images` command.
2. The prompts for image and database paths use file completion provided
by the `prompt_toolkit` library.
## To Test

1. Activate the virtual environment for the destination root to import
INTO
2. Run `invokeai-import-images`
3. Follow the prompts

## Related Tickets & Documents

This is a frequently-requested feature on Discord, but I couldn't find
an Issue.

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [X] No : but should in the future
2023-08-09 13:17:08 -04:00
2c2b731386 fix typo 2023-08-09 13:08:59 -04:00
2f68a1a76c use Stalker's simplified LoRA vector-length detection code 2023-08-09 09:21:29 -04:00
930e7bc754 Merge branch 'main' into feat/image-import-script 2023-08-09 08:54:56 -04:00
7d4ace962a Merge branch 'main' into fix/detect-more-loras 2023-08-09 08:48:27 -04:00
06842f8e0a Update to 3.0.2rc1 2023-08-09 00:29:43 -04:00
c82da330db Pin safetensors to 0.3.1
Safetensors 0.3.2 does not ship an ARM64 wheel, so installation on macOS fails
2023-08-09 00:29:43 -04:00
628df4ec98 Add updated frontend html file 2023-08-09 00:29:43 -04:00
16b956616f Update version to 3.0.2 2023-08-09 00:29:43 -04:00
604cc17a3a Yarn build JS files 2023-08-09 00:29:43 -04:00
37c9b85549 Add slider for VRAM cache in configure script (#4133)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No - will be in release notes

## Description

On CUDA systems, this PR adds a new slider to the install-time configure
script for adjusting the VRAM cache and suggests a good starting value
based on the user's max VRAM (this is subject to verification).

On non-CUDA systems this slider is suppressed.

Please test on both CUDA and non-CUDA systems using:
```
invokeai-configure --root ~/invokeai-main/ --skip-sd --skip-support
```

To see and test the default values, move `invokeai.yaml` out of the way
before running.

**Note added 8 August 2023**

This PR also fixes the configure and model install scripts so that if
the window is too small to fit the user interface, the user will be
prompted to interactively resize the window and/or change font size
(with the option to give up). This will prevent `npyscreen` from
generating its horrible tracebacks.

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-09 12:27:54 +10:00
8b39b67ec7 Merge branch 'main' into feat/select-vram-in-config 2023-08-09 12:17:27 +10:00
a933977861 Pick correct config file for sdxl models (#4191)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

If `models.yaml` is cleared out for some reason, the model manager will
repopulate it by scanning `models`. However, this would fail with a
pydantic validation error if any SDXL checkpoint models were present,
because of the missing logic to pick the correct configuration file. That
logic has now been added.
2023-08-09 11:16:48 +10:00
dfb41d8461 Merge branch 'main' into bugfix/autodetect-sdxl-ckpt-config 2023-08-09 03:57:44 +03:00
e98f7eda2e Fix total_steps in generation event, order field added 2023-08-09 03:34:25 +03:00
b4a74f6523 Add MaskEdge and ColorCorrect nodes
Co-Authored-By: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2023-08-08 23:57:02 +03:00
f7aec3b934 Move conditioning class to backend 2023-08-08 23:33:52 +03:00
4d5169e16d Merge branch 'main' into feat/select-vram-in-config 2023-08-08 13:50:02 -04:00
a7e44678fb Remove legacy/unused code 2023-08-08 20:49:01 +03:00
da0184a786 Invert mask, fix l2l when no mask is connected, remove zeroing latents on zero start 2023-08-08 20:01:49 +03:00
f56f19710d allow user to interactively resize screen before UI runs 2023-08-08 12:27:25 -04:00
96b7248051 Add mask to l2l 2023-08-08 18:50:36 +03:00
e77400ab62 remove deprecated options from config 2023-08-08 08:33:30 -07:00
13347f6aec blackified 2023-08-08 08:33:30 -07:00
a9bf387e5e turned on Pydantic validate_assignment 2023-08-08 08:33:30 -07:00
8258c87a9f refrain from writing deprecated legacy options to invokeai.yaml 2023-08-08 08:33:30 -07:00
1b1b399fd0 Fix crash when attempting to update a model (#4192)
## What type of PR is this? (check all applicable)

- [X] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [X] No, because: small fix

      
## Have you updated all relevant documentation?
- [X] Yes

## Description

A logic bug was introduced in PR #4109 that caused Web-based model
updates to fail with a pydantic validation error. This corrects the
problem.

## Related Tickets & Documents

PR #4109
2023-08-08 10:54:27 -04:00
a8d3e078c0 Merge branch 'main' into fix/detect-more-loras 2023-08-08 10:42:45 -04:00
6ed7ba57dd Merge branch 'main' into bugfix/fix-model-updates 2023-08-08 09:05:25 -04:00
2b3b77a276 api(images): allow HEAD request on image/full (#4193) 2023-08-08 00:08:48 -07:00
8b8ec68b30 Merge branch 'main' into feat/image_http_head 2023-08-08 00:02:48 -07:00
e20af5aef0 feat(ui): add LoRA support to SDXL linear UI
new graph modifier `addSDXLLoRasToGraph()` handles adding LoRA to the SDXL t2i and i2i graphs.
2023-08-08 15:02:00 +10:00
57e8ec9488 chore(ui): lint/format 2023-08-08 12:53:47 +10:00
734a9e4271 invalidate board total when images deleted, only run date range logic if board has less than 20 images 2023-08-08 12:53:47 +10:00
fe924daee3 add option to disable multiselect 2023-08-08 12:53:47 +10:00
750f09fbed blackify 2023-08-07 21:01:59 -04:00
4df581811e add template verification script 2023-08-07 21:01:48 -04:00
eb70bc2ae4 add scripts to create model templates and check whether they match 2023-08-07 21:00:47 -04:00
5f29526a8e Add seed to latents field 2023-08-08 04:00:33 +03:00
492bfe002a Remove sdxl t2l/l2l nodes 2023-08-08 03:38:42 +03:00
809705c30d api(images): allow HEAD request on image/full 2023-08-07 15:11:47 -07:00
f0918edf98 improve error reporting on unrecognized lora models 2023-08-07 16:38:58 -04:00
a846d82fa1 Add techjedi's code to avoid rendering prompt/seed with null
- Added techjedi's GitHub and real names
2023-08-07 16:29:46 -04:00
22f7cf0638 add stalker's complicated but effective code for finding token vector length in LoRAs 2023-08-07 16:19:57 -04:00
25c669b1d6 Merge remote-tracking branch 'origin/main' into refactor/remove_unused_pipeline_methods 2023-08-07 13:03:10 -07:00
4367061b19 fix(ModelManager): fix overridden VAE with relative path (#4059) 2023-08-07 12:57:32 -07:00
0fd13d3604 Merge branch 'main' into feat/select-vram-in-config 2023-08-07 15:51:59 -04:00
72a3e776b2 fix logic error introduced in PR 4109 2023-08-07 15:38:22 -04:00
af044007d5 pick correct config file for sdxl models 2023-08-07 15:19:49 -04:00
1db2c93f75 Fix preview, inpaint 2023-08-07 21:27:32 +03:00
f272a44feb Merge branch 'main' into refactor/model_manager_instantiate 2023-08-07 10:59:28 -07:00
2539e26c18 Apply denoising_start/end, add torch-sdp to memory-efficient attention func 2023-08-07 19:57:11 +03:00
b0738b7f70 Fixes, zero tensor for empty negative prompt, remove raw prompt node 2023-08-07 18:37:06 +03:00
8469d3e95a chore: black 2023-08-07 10:05:52 +10:00
ae17d01e1d Fix hue adjustment (#4182)
* Fix hue adjustment

Hue adjustment wasn't working correctly because color channels got swapped. This has now been fixed and we're using PIL rather than cv2 to do the RGBA->HSV->RGBA conversion. The range of hue adjustment is also the more typical 0..360 degrees.
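
A plausible reconstruction of the PIL-based approach (not the exact
code): shift hue through an RGBA->HSV->RGBA round trip, mapping the
0..360 degree input onto PIL's 0..255 hue channel and preserving alpha:

```python
from PIL import Image

def adjust_hue(image: Image.Image, degrees: float) -> Image.Image:
    rgba = image.convert("RGBA")
    alpha = rgba.getchannel("A")          # HSV has no alpha; set it aside
    h, s, v = rgba.convert("RGB").convert("HSV").split()
    shift = int(degrees / 360.0 * 255.0)  # map degrees onto PIL's 0..255 hue
    h = h.point(lambda px: (px + shift) % 256)
    out = Image.merge("HSV", (h, s, v)).convert("RGB").convert("RGBA")
    out.putalpha(alpha)
    return out
```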
2023-08-06 23:23:51 +00:00
f3d3316558 probe LoRAs that do not have the text encoder 2023-08-06 16:00:53 -04:00
5a6cefb0ea add backslash to end of incomplete windows paths 2023-08-06 12:34:35 -04:00
1a6f5f0860 use backslash on Windows systems for autoadded delimiter 2023-08-06 12:29:31 -04:00
5bfd6cb66f Merge remote-tracking branch 'origin/main' into refactor/model_manager_instantiate
# Conflicts:
#	invokeai/backend/model_management/model_manager.py
2023-08-05 22:02:28 -07:00
59caff7ff0 refactor(diffusers_pipeline): remove unused img2img wrappers 🚮
invokeai.app no longer needs this as a single method, as it builds on latents2latents instead.
2023-08-05 21:50:52 -07:00
6487e7d906 refactor(diffusers_pipeline): remove unused ModelGroup 🚮
orphaned since #3550 removed the LazilyLoadedModelGroup code, probably unused since ModelCache took over responsibility for sequential offload somewhere around #3335.
2023-08-05 21:50:52 -07:00
77033eabd3 refactor(diffusers_pipeline): remove unused precision 🚮 2023-08-05 21:50:52 -07:00
b80abdd101 refactor(diffusers_pipeline): remove unused image_from_embeddings 🚮 2023-08-05 21:50:52 -07:00
006d782cc8 refactor(diffusers_pipeline): tidy imports 🚮 2023-08-05 21:50:52 -07:00
d09dfc3e9b fix(api): use db_location instead of db_path_string
This may just be the SQLite memory sentinel value.
2023-08-06 14:09:04 +10:00
66f524cae7 fix(mm): fix a lot of typing issues
Most fixes are just things being typed as `str` but having default values of `None`, but there are some minor logic changes.
2023-08-06 14:09:04 +10:00
9ba50130a1 fix(api): fix db location types
The services all want strings instead of `Path`s; create variable for the string representation of the path provided by the config services.
2023-08-06 14:09:04 +10:00
d4cf2d2666 fix(api): fix ApiDependencies.invoker types
`ApiDependencies.invoker` provides typing for the API's services layer. Marking it `Optional` results in all the routes seeing it as optional, which is not good.

Instead of marking it optional to satisfy the initial assignment to `None`, we can just skip the initial assignment. This preserves the IDE hinting in the API layer and is legal for the type checker.
2023-08-06 14:09:04 +10:00
9aaf67c5b4 wip 2023-08-06 05:05:25 +03:00
b8b589c150 fix(nodes): fix hsl nodes rebase conflict 2023-08-06 09:57:49 +10:00
d93900a8de Added HSL Nodes 2023-08-06 09:57:49 +10:00
7f4c387080 test(model_management): factor out name strings 2023-08-05 15:46:46 -07:00
80876bbbd1 Merge remote-tracking branch 'origin/refactor/model_manager_instantiate' into refactor/model_manager_instantiate 2023-08-05 15:25:05 -07:00
7a4ff4c089 Merge branch 'main' into refactor/model_manager_instantiate 2023-08-05 15:23:38 -07:00
44bf308192 test(model_management): add a couple tests for _get_model_path 2023-08-05 15:22:23 -07:00
12e51c84ae blackified 2023-08-05 14:26:16 -07:00
b2eb83deff add docs 2023-08-05 14:26:16 -07:00
0ccc3b509e add techjedi's import script, with some filecompletion tweaks 2023-08-05 14:26:16 -07:00
4043a4c21c blackified 2023-08-05 12:44:58 -04:00
c8ceb96091 add docs 2023-08-05 12:26:52 -04:00
83f75750a9 add techjedi's import script, with some filecompletion tweaks 2023-08-05 12:19:24 -04:00
dc96a3e79d Fix random number generator
Passing in seed=0 is not equivalent to seed=None. The latter will get a new seed from entropy in the OS, and that's what we should be using.
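
In code, the distinction looks like this (a sketch using numpy's
generator API; the same principle applies to torch generators):

```python
import numpy as np

def make_rng(seed: int | None = None) -> np.random.Generator:
    # seed=None draws fresh entropy from the OS; seed=0 is a fixed, valid
    # seed that always reproduces the same sequence. The bug pattern to
    # avoid is treating 0 as "no seed" via truthiness, e.g. `seed or ...`.
    return np.random.default_rng(seed)
```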
2023-08-06 00:29:08 +10:00
c076f1397e rebuild frontend 2023-08-05 14:40:42 +10:00
2568aafc0b bump version number so that pip updates work 2023-08-05 14:40:42 +10:00
65ed224bfc Merge branch 'main' into refactor/model_manager_instantiate 2023-08-04 21:34:38 -07:00
b6e369c745 chore: black 2023-08-05 12:28:35 +10:00
ecabfc252b devices.py - Update MPS FP16 check to account for upcoming MacOS Sonoma
float16 doesn't seem to work on macOS Sonoma due to further changes in Metal. This will default back to float32 for Sonoma users.
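
A hedged sketch of what such a check might look like; the exact
condition used may differ:

```python
import platform
import torch

def mps_safe_dtype() -> torch.dtype:
    # Assumption for illustration: macOS 14 (Sonoma) and newer need float32
    # on MPS; earlier releases keep float16. mac_ver() returns "" off macOS.
    release = platform.mac_ver()[0]
    major = int(release.split(".")[0]) if release else 0
    return torch.float32 if major >= 14 else torch.float16
```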
2023-08-05 12:28:35 +10:00
da96a41103 Merge branch 'main' into feat/select-vram-in-config 2023-08-05 12:11:50 +10:00
d162b78767 fix broken civitai example link 2023-08-05 12:10:52 +10:00
eb6c317f04 chore: black 2023-08-05 12:05:24 +10:00
6d7223238f fix: fix typo in message 2023-08-05 12:05:24 +10:00
8607d124c5 improve message about the consequences of the --ignore_missing_core_models flag 2023-08-05 12:05:24 +10:00
23497bf759 add --ignore_missing_core_models CLI flag to bypass checking for missing core models 2023-08-05 12:05:24 +10:00
b10cf20eb1 Merge branch 'main' into refactor/model_manager_instantiate
# Conflicts:
#	invokeai/backend/model_management/model_manager.py
2023-08-04 18:28:18 -07:00
3d93851dba Installer should download fp16 models if user has specified 'auto' in config (#4129)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

At install time, when the user's config specified "auto" precision, the
installer was downloading the fp32 models even when an fp16 model would
be appropriate for the OS.
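
With diffusers, fetching the half-precision weights comes down to
requesting the `fp16` variant; a sketch (the model id is just an
example):

```python
import torch
from diffusers import StableDiffusionPipeline

# On a CUDA machine with precision "auto", download the fp16 weight files.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model id
    variant="fp16",
    torch_dtype=torch.float16,
)
```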


## Related Tickets & Documents


- Closes #4127
2023-08-05 01:56:25 +03:00
9bacd77a79 Merge branch 'main' into bugfix/fp16-models 2023-08-05 01:42:43 +03:00
1b158f62c4 resolve vae overrides correctly 2023-08-04 18:24:47 -04:00
6ad565d84c folded in changes from 4099 2023-08-04 18:24:47 -04:00
04229082d6 Provide ti name from model manager, not from ti itself 2023-08-04 18:24:47 -04:00
03c27412f7 [WIP] Add sdxl lora support (#4097)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Add LoRA loading for SDXL.
NOT TESTED - I have only run 2 LoRAs; please check more (including
LyCORIS, if those already exist).

## QA Instructions, Screenshots, Recordings
https://civitai.com/models/118536/voxel-xl

![image](https://github.com/invoke-ai/InvokeAI/assets/7768370/76a6abff-cb0a-43b4-b779-a0b0e5b46e56)


## Added/updated tests?

- [ ] Yes
- [x] No
2023-08-04 16:12:22 -04:00
f0613bb0ef Fix merge conflict resolution - restore full/diff layer support 2023-08-04 19:53:27 +03:00
0e9f92b868 Merge branch 'main' into feat/sdxl_lora 2023-08-04 19:22:13 +03:00
7d0cc6ec3f chore: black 2023-08-05 02:04:22 +10:00
2f8b928486 Add support for diff/full lora layers 2023-08-05 02:04:22 +10:00
0d3c27f46c Fix typo
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-08-04 11:44:56 -04:00
cff91f06d3 Add lora apply in sdxl l2l node 2023-08-04 11:44:56 -04:00
1d5d187ba1 model probe detects sdxl lora models 2023-08-04 11:44:56 -04:00
1ac14a1e43 add sdxl lora support 2023-08-04 11:44:56 -04:00
cfc3a20565 autoAddBoardId should always be defined 2023-08-04 22:19:11 +10:00
05ae4e283c Stop checking for unet/model.onnx when a model_index.json is detected (#4132)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-03 22:10:37 -04:00
f06fee4581 Merge branch 'main' into remove-onnx-model-check-from-pipeline-download 2023-08-03 22:02:05 -04:00
9091e19de8 Add execution stat reporting after each invocation (#4125)
## What type of PR is this? (check all applicable)

- [X] Feature


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No

## Description

This PR adds execution time and VRAM usage reporting to each graph
invocation. The log output will look like this:

```
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> Graph stats: c7764585-9c68-4d9d-a199-55e8186790f3                                                                                              
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> Node                 Calls  Seconds  VRAM Used                                                                                                 
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> main_model_loader        1   0.005s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> clip_skip                1   0.004s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> compel                   2   0.512s     0.26G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> rand_int                 1   0.001s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> range_of_size            1   0.001s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> iterate                  1   0.001s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> metadata_accumulator     1   0.002s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> noise                    1   0.002s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> t2l                      1   3.541s     1.93G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> l2i                      1   0.679s     0.58G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> TOTAL GRAPH EXECUTION TIME:  4.749s                                                                                                            
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> Current VRAM utilization 0.01G                                                                                                                 
```
On systems without CUDA, the VRAM stats are not printed.

The current implementation keeps track of graph ids separately, so it
will not be confused when several graphs are executing in parallel. It
handles exceptions, and it is integrated into the app framework by
defining an abstract base class and storing an implementation instance
in `InvocationServices`.
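
A minimal sketch of that design, with per-graph bookkeeping so parallel
executions don't mix; the method names are illustrative, not the real
service API:

```python
from abc import ABC, abstractmethod
from collections import defaultdict

class InvocationStatsServiceBase(ABC):
    @abstractmethod
    def record(self, graph_id: str, node_type: str, seconds: float) -> None: ...

    @abstractmethod
    def report(self, graph_id: str) -> None: ...

class InvocationStatsService(InvocationStatsServiceBase):
    def __init__(self) -> None:
        # Keyed by graph id so parallel graph executions stay separate.
        self._calls = defaultdict(lambda: defaultdict(int))
        self._seconds = defaultdict(lambda: defaultdict(float))

    def record(self, graph_id: str, node_type: str, seconds: float) -> None:
        self._calls[graph_id][node_type] += 1
        self._seconds[graph_id][node_type] += seconds

    def report(self, graph_id: str) -> None:
        seconds = self._seconds.pop(graph_id, {})
        calls = self._calls.pop(graph_id, {})
        for node, secs in seconds.items():
            print(f"{node:<24}{calls[node]:>5}{secs:>9.3f}s")
        print(f"TOTAL GRAPH EXECUTION TIME: {sum(seconds.values()):.3f}s")
```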
2023-08-03 20:05:21 -04:00
0a0b7141af Merge branch 'main' into feat/execution-stats 2023-08-03 19:49:00 -04:00
1deca89fde Merge branch 'main' into feat/select-vram-in-config 2023-08-03 19:27:58 -04:00
446fb4a438 blackify 2023-08-03 19:24:23 -04:00
ab5d938a1d use variant instead of revision 2023-08-03 19:23:52 -04:00
9942af756a Merge branch 'main' into remove-onnx-model-check-from-pipeline-download 2023-08-03 10:10:51 -04:00
06742faca7 Merge branch 'feat/execution-stats' of github.com:invoke-ai/InvokeAI into feat/execution-stats 2023-08-03 08:48:05 -04:00
d2bddf7f91 tweak formatting to accommodate longer runtimes 2023-08-03 08:47:56 -04:00
91ebf9f76e Merge branch 'main' into refactor/model_manager_instantiate 2023-08-02 19:01:21 -07:00
bf94412d14 feat: add multi-select to gallery
multi-select actions include:
- drag to board to move all to that board
- right click to add all to board or delete all

backend changes:
- add routes for changing board for list of image names, deleting list of images
- change image-specific routes to `images/i/{image_name}` to not clobber other routes (like `images/upload`, `images/delete`)
- subclass pydantic `BaseModel` as `BaseModelExcludeNull`, which excludes null values when calling `dict()` on the model. This fixes inconsistent types caused by JSON parsing null values into `null` instead of `undefined` (sketched at the end of this message)
- remove `board_id` from `remove_image_from_board`

frontend changes:
- multi-selection stuff uses `ImageDTO[]` as payloads, for dnd and other mutations. this gives us access to image `board_id`s when hitting routes, and enables efficient cache updates.
- consolidate change board and delete image modals to handle single and multiples
- board totals are now re-fetched on mutation and not kept in sync manually - it was way too tedious to do this
- fixed warning about nested `<p>` elements
- closes #4088; need to handle the case when `autoAddBoardId` is `"none"`
- add option to show the gallery image delete button on every gallery image

frontend refactors/organisation:
- make typegen script js instead of ts
- enable `noUncheckedIndexedAccess` to help avoid bugs when indexing into arrays, many small changes needed to satisfy TS after this
- move all image-related endpoints into `endpoints/images.ts` - it's a big file now, but this fixes a number of circular dependency issues that otherwise felt impossible to resolve
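
A sketch of the `BaseModelExcludeNull` subclassing mentioned in the
backend changes (pydantic v1 style, matching this era of the codebase;
the exact override may differ):

```python
from typing import Any
from pydantic import BaseModel

class BaseModelExcludeNull(BaseModel):
    def dict(self, **kwargs: Any) -> dict[str, Any]:
        # Drop null fields from serialized output so the frontend sees
        # `undefined` rather than an explicit `null` after JSON parsing.
        kwargs.setdefault("exclude_none", True)
        return super().dict(**kwargs)
```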
2023-08-03 11:46:59 +10:00
e080fd1e08 blackify 2023-08-03 11:25:20 +10:00
eeef1e08f8 restore ability to convert merged inpaint .safetensors files 2023-08-03 11:25:20 +10:00
b3b94b5a8d use correct prop 2023-08-03 11:01:21 +10:00
5c9787c145 add project-id header to requests 2023-08-03 11:01:21 +10:00
cf72eba15c Merge branch 'main' into feat/execution-stats 2023-08-03 10:53:25 +10:00
a6f9396a30 fix(db): retrieve metadata even when no session_id
this was unnecessarily skipped if there was no `session_id`.
2023-08-03 10:43:44 +10:00
118d5b387b deploy: refactor github workflows
Currently we use some workflow trigger conditionals to run either a real test workflow (installing the app and running it) or a fake workflow, disguised as the real one, that just auto-passes.

This change refactors this into a single workflow that can be skipped, using another GitHub action to determine which jobs to run depending on the paths changed.
2023-08-03 10:32:50 +10:00
02d2cc758d Merge branch 'main' into refactor/model_manager_instantiate 2023-08-02 17:11:23 -07:00
db545f8801 chore: move PR template to .github/ dir (#4060)
## What type of PR is this? (check all applicable)

- [x] Refactor

## Have you discussed this change with the InvokeAI team?
- [x] No, because it's pretty minor

      
## Have you updated all relevant documentation?
- [x] No


## Description

This PR just moves the PR template into the `.github/` directory,
leading to a more minimal overall project structure.

## Added/updated tests?

- [x] No : because this change doesn't affect or need a separate test
2023-08-03 10:08:17 +10:00
b0d72b15b3 Merge branch 'main' into patch-1 2023-08-03 10:04:47 +10:00
4e0949fa55 fix .swap() by reverting improperly merged @classmethod change 2023-08-03 10:00:43 +10:00
f028342f5b Merge branch 'main' into patch-1 2023-08-03 10:00:10 +10:00
7021467048 (ci) do not install all dependencies when running static checks (#4036)
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-08-02 23:46:02 +00:00
26ef5249b1 guard board switching in board context menu 2023-08-03 09:18:46 +10:00
87424be95d block auto add board change during generation. Switch condition to isProcessing 2023-08-03 09:18:46 +10:00
366952f810 fix localization 2023-08-03 09:18:46 +10:00
450e95de59 auto change board waiting for isReady 2023-08-03 09:18:46 +10:00
0ba8a0ea6c Board assignment changing on click 2023-08-03 09:18:46 +10:00
f4981f26d5 Merge branch 'main' into bugfix/fp16-models 2023-08-02 19:17:55 -04:00
6bc21984c6 Merge branch 'main' into feat/select-vram-in-config 2023-08-02 19:12:43 -04:00
43d6312587 Merge branch 'main' into feat/execution-stats 2023-08-02 19:12:08 -04:00
0d125bf3e4 chore: delete nonfunctional shell.nix
This was for v2.3 and is very broken. See `flake.nix`, thanks to @zopieux
2023-08-03 09:09:40 +10:00
921ccad04d added stats service to the cli_app startup 2023-08-02 18:41:43 -04:00
05c9207e7b Merge branch 'feat/execution-stats' of github.com:invoke-ai/InvokeAI into feat/execution-stats 2023-08-02 18:31:33 -04:00
3fc789a7ee fix unit tests 2023-08-02 18:31:10 -04:00
008362918e Merge branch 'main' into feat/execution-stats 2023-08-02 18:15:51 -04:00
8fc75a71ee integrate correctly into app API and add features
- Create abstract base class InvocationStatsServiceBase
- Store InvocationStatsService in the InvocationServices object
- Collect and report stats on simultaneous graph execution
  independently for each graph id
- Track VRAM usage for each node
- Handle cancellations and other exceptions gracefully
2023-08-02 18:10:52 -04:00
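A hypothetical sketch of the base class described above (method names and signatures are assumptions, not the actual InvokeAI API):

```
from abc import ABC, abstractmethod


class InvocationStatsServiceBase(ABC):
    """Collects per-node statistics independently for each graph id."""

    @abstractmethod
    def reset_stats(self, graph_id: str) -> None:
        """Forget accumulated stats for a graph, e.g. after a cancellation."""

    @abstractmethod
    def update_invocation_stats(
        self, graph_id: str, node_type: str, seconds: float, vram_gb: float
    ) -> None:
        """Record one node execution: call count, wall time, VRAM used."""

    @abstractmethod
    def log_stats(self, graph_id: str) -> None:
        """Emit the per-node table shown after graph execution."""
```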
82d259f43b Merge branch 'main' into remove-onnx-model-check-from-pipeline-download 2023-08-02 16:35:46 -04:00
ec48779080 blackify 2023-08-02 14:28:19 -04:00
bc20fe4cb5 Merge branch 'main' into feat/select-vram-in-config 2023-08-02 14:27:17 -04:00
5de42be4a6 reduce VRAM cache default; take max RAM from system 2023-08-02 14:27:13 -04:00
818c55cd53 Refactor/cleanup root detection (#4102)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: invisible change


## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

There was a problem in 3.0.1 with root resolution. If INVOKEAI_ROOT were
set to "." (or any relative path), then the location of root would
change if the code did an os.chdir() after config initialization. I
fixed this in a quick and dirty way for 3.0.1.post3.

This PR cleans up the code with a little refactoring.
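The gist of the fix, as a standalone illustration (not the actual refactored code): resolve the root to an absolute path once, at config time, so a later chdir cannot move it.

```
import os
import tempfile
from pathlib import Path

# Resolve INVOKEAI_ROOT eagerly, at config-initialization time
root = Path(os.environ.get("INVOKEAI_ROOT", ".")).expanduser().resolve()

os.chdir(tempfile.gettempdir())  # a later chdir elsewhere in the code...
print(root)                      # ...no longer affects what root points to
```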

## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-02 10:36:12 -04:00
0db1e97119 Merge branch 'main' into refactor/cleanup-root-detection 2023-08-02 09:46:46 -04:00
29ac252501 blackify 2023-08-02 09:44:06 -04:00
880727436c fix default vram cache size calculation 2023-08-02 09:43:52 -04:00
77c5c18542 add slider for VRAM cache 2023-08-02 09:11:24 -04:00
ed76250dba Stop checking for unet/model.onnx when a model_index.json is detected 2023-08-02 07:21:21 -04:00
4d22cafdad Installer should download fp16 models if user has specified 'auto' in config
- Closes #4127
2023-08-01 22:06:27 -04:00
1f9e984b0d Merge branch 'main' into refactor/model_manager_instantiate 2023-08-01 16:49:39 -07:00
8a4e5f73aa reset stats on exception 2023-08-01 19:39:42 -04:00
4599575e65 fix(ui): use const for wsProtocol, lint 2023-08-02 09:26:20 +10:00
242d860a47 fix https/wss behind reverse proxy 2023-08-02 09:26:20 +10:00
0c1a7e72d4 Fix manual installation documentation (#4107)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: obvious problem


## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

The manual installation documentation in both README.md and
020_MANUAL_INSTALL gives an incomplete `invokeai-configure` command which
leaves out the path to the root directory to create. As a result, the
invokeai root directory gets created in the user’s home directory, even
if they intended it to be placed somewhere else.

This is a fairly important issue.
2023-08-01 18:55:53 -04:00
11a44b944d fix installation documentation 2023-08-01 18:52:17 -04:00
fd7b842419 add execution stat reporting after each invocation 2023-08-01 17:44:09 -04:00
5998509888 Merge branch 'main' into refactor/model_manager_instantiate 2023-08-01 11:09:43 -07:00
403a6e88f2 fix: flake: add opencv with CUDA, new patchmatch dependency. 2023-08-01 23:56:41 +10:00
c9d452b9d4 fix: Model Manager Tab Issues (#4087)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [x] Bug Fix
- [?] Optimization


## Have you discussed this change with the InvokeAI team?
- [x] No

     
## Description

- Fixed the filter type select using `images` instead of `all` -- probably
from a merge conflict.
- Added loading state for model lists. Handy when the model list takes
longer than a second for any reason to fetch. Better to show this than
an empty screen.
- Refactored the render model list function so we modify the display
component in one area rather than have repeated code.

### Other Issues

- The list can get a bit laggy on initial load when you have hundreds
of models / LoRAs. This needs to be fixed. Will make another PR for
this.
2023-08-02 01:06:53 +12:00
dcc274a2b9 feat: Make ModelListWrapper instead of rendering conditionally 2023-08-01 22:50:10 +10:00
f404669831 fix: Rename loading vars for consistency 2023-08-01 22:42:05 +10:00
ce687b28ef fix: Model Manager Tab Issues 2023-08-01 22:41:32 +10:00
7292d89108 Merge branch 'main' into refactor/cleanup-root-detection 2023-08-01 22:14:56 +10:00
41d6a38690 Update lint-frontend.yml
The action should run on every PR. We can make this more efficient in the future.
2023-08-01 22:10:56 +10:00
fb8f218901 fix(ui): post-onnx fixes 2023-08-01 07:59:01 -04:00
437f45a97f do not depend on existence of /tmp directory 2023-08-01 00:41:35 -04:00
13ef33ed64 Merge branch 'refactor/cleanup-root-detection' of github.com:invoke-ai/InvokeAI into refactor/cleanup-root-detection 2023-08-01 00:19:55 -04:00
86d8b46fca Merge branch 'main' into refactor/cleanup-root-detection 2023-08-01 00:14:26 -04:00
e86925d424 Add onnxruntime to the main dependencies 2023-08-01 00:03:10 -04:00
df53b62048 get rid of dangling debug statements 2023-07-31 22:39:11 -04:00
55d3f04476 additional refactoring 2023-07-31 22:36:11 -04:00
72ebe2ce68 refactor root directory detection to be cleaner 2023-07-31 22:30:06 -04:00
7cd8b2f207 Refactor root detection code 2023-07-31 21:15:44 -04:00
52437205bb chore(ui): lint 2023-08-01 08:54:03 +10:00
ceebb501a4 try named export 2023-08-01 08:54:03 +10:00
cbe874b964 add chakra as peer dep 2023-08-01 08:54:03 +10:00
e2e5918ee2 export theme and move chakra to peer dep 2023-08-01 08:54:03 +10:00
1b131e328a add optional projectId - unused so far 2023-08-01 08:54:03 +10:00
81654daed7 ONNX Support (#3562)
Note: this branch is based on #3548, not on main.

While finding out what needed to be done to implement ONNX, I found that I could
put together a draft of it pretty quickly, so... here it is)
Supports LoRA and TI.
As an example - a cat with the sadcatmeme LoRA:

![image](https://github.com/invoke-ai/InvokeAI/assets/7768370/dbd1a5df-0629-4741-94b3-8e09f4b4d5a3)

![image](https://github.com/invoke-ai/InvokeAI/assets/7768370/d918836c-fdc7-43c0-aa81-dde9182f2e0f)
2023-07-31 17:34:27 -04:00
746afcd235 Merge branch 'main' into feat/onnx 2023-07-31 16:56:34 -04:00
ae0f4efcca Add missing Optional on a few nullable fields (#4076)
## What type of PR is this? (check all applicable)

- [x] Refactor

## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: trivial

## Description

Adds a few obviously missing `Optional` on fields that default to
`None`.
2023-07-31 16:56:10 -04:00
23647336ce Merge branch 'main' into fix-optional 2023-07-31 16:55:57 -04:00
4ca54dd5fa Added a getting started guide & updated the user landing page flow (#4028)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: Just a documentation update

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Updated documentation with a getting started guide & a glossary of terms
needed to get started.
Updated the landing page flow for users.

<img width="1430" alt="Screenshot 2023-07-27 at 9 53 25 PM"
src="https://github.com/invoke-ai/InvokeAI/assets/7254508/d0006ba7-2ed4-4044-a1bc-ca9a99df9397">

## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-31 16:55:25 -04:00
d3a3067164 Merge branch 'main' into main 2023-07-31 16:54:48 -04:00
aeac557c41 Run python black, point out that onnx is an alpha feature in the installer 2023-07-31 16:47:48 -04:00
af4fd328a6 Merge branch 'main' into feat/onnx 2023-07-31 16:45:12 -04:00
c40c7424b6 Merge branch 'main' into fix-optional 2023-07-31 15:59:12 -04:00
a6b907150b Add python black check to pre-commit (#4094)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-31 15:58:20 -04:00
bacdf985f1 doc(model_manager): docstrings 2023-07-31 09:16:32 -07:00
e3519052ae Merge remote-tracking branch 'origin/main' into refactor/model_manager_instantiate 2023-07-31 08:46:09 -07:00
b0e84c6497 Add python black check to pre-commit 2023-07-31 11:42:08 -04:00
f784e8412c Some cleanup after the merge 2023-07-31 11:23:43 -04:00
1bafbafdd3 Regen schema and rebuild frontend after merging main 2023-07-31 11:02:15 -04:00
f5ac73b091 Merge branch 'main' into feat/onnx 2023-07-31 10:58:40 -04:00
eb642653cb Add Nix Flake for development, which uses Python virtualenv. 2023-07-31 19:14:30 +10:00
2c07f54b6e Merge branch 'main' into fix-optional 2023-07-31 16:31:01 +10:00
0691e0a12a Few modifications to getting started doc 2023-07-31 15:35:20 +10:00
79afcbd07e Merge branch 'main' of https://github.com/invoke-ai/InvokeAI 2023-07-31 14:19:37 +10:00
f4ead5e07f fix keyerror bug that was causing merge script to crash 2023-07-30 19:25:44 -04:00
6d24ca7f52 3.0.1post3 (#4082)
This is a relatively stable release that corrects the urgent windows
install and model manager problems in 3.0.1. It still has two known
bugs:

1. Many inpainting models are not loading correctly.
2. The merge script is failing to start.
2023-07-30 18:03:35 -04:00
2164da8592 blackify 2023-07-30 16:25:06 -04:00
adfd1e52f4 refactor(model_manager): avoid copy/paste logic 2023-07-30 11:53:12 -07:00
0e48c98330 Merge remote-tracking branch 'origin/main' into refactor/model_manager_instantiate
# Conflicts:
#	invokeai/backend/model_management/model_manager.py
2023-07-30 11:33:13 -07:00
4121c261a0 fix missing models when INVOKEAI_ROOT="." 2023-07-30 13:37:18 -04:00
99823d5039 more fixes to update and install 2023-07-30 11:57:06 -04:00
0abceb0e7b Merge branch 'main' of github.com:invoke-ai/InvokeAI 2023-07-30 11:08:27 -04:00
83d3f2347e fix "unrecognized arguments: --yes" bug on unattended upgrade 2023-07-30 11:07:06 -04:00
73e25d8dbe Update communityNodes.md
- Remove FaceMask and add a link to the FaceTools repository, which includes FaceMask, FaceOff, and FacePlace
- Move Ideal Size up from under the template entry
2023-07-30 10:59:56 -04:00
50e00feceb Add missing Optional on a few nullable fields. 2023-07-30 16:25:12 +02:00
03594c949a blackified 2023-07-30 10:18:39 -04:00
adb85036e6 dependency tweaks to avoid installing/uninstalling pkgs 2023-07-30 10:17:04 -04:00
7d7a9273ed Merge branch 'main' of github.com:invoke-ai/InvokeAI 2023-07-30 09:19:14 -04:00
f17ad227cf fix relative model paths to be against config.models_path, not root (#4061)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [x] Yes - bug discovered by jpphoto
- [ ] No, because:


## Have you updated all relevant documentation?
- [ ] Yes
- [x] Not needed

## Description

The user can customize the location of the models directory by setting
the configuration variable `models_dir`. However, the model manager and the
TUI installer were treating relative model paths as relative to the
invokeai root rather than the designated models directory. This has been
fixed by changing path resolution calls from using `config.root_path` to
`config.models_path`.

Unfortunately there were many instances that needed replacement, so this
needs a bit of functional testing - try adding models, removing models,
renaming them, converting checkpoints, etc.
2023-07-30 08:51:35 -04:00
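An illustration of the change (the config class here is a simplified stand-in, not the project's actual code):

```
from pathlib import Path


class Config:
    """Simplified stand-in for the app config."""

    root_path = Path("/home/user/invokeai")
    models_dir = Path("models")  # user-customizable via models_dir

    @property
    def models_path(self) -> Path:
        return self.root_path / self.models_dir


config = Config()
rel = Path("sd-1/main/stable-diffusion-v1-5")
print(config.models_path / rel)  # new: resolved against the models dir
print(config.root_path / rel)    # old, buggy: resolved against the root
```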
f91d01eb38 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-30 08:25:37 -04:00
adfcb610b6 Installer tweaks (#4070)
## What type of PR is this? (check all applicable)


- [x] Optimization

## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

This PR does two things (both sketched in code after this entry):

1. If the environment variable INVOKEAI_ROOT is defined at install time,
the zipfile installer will default to its value when asking the user
where to install the software
2. If the user has more than 72 models of any type installed, then the
list will be truncated in the TUI and the user given a warning. Anything
larger than this number of models causes the vertical space to overflow.
The only effect of truncation is that the user will not be able to see
and delete the models that were truncated. The message advises the user
to go to the Web Model Manager interface in this event.
2023-07-30 08:25:11 -04:00
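A rough sketch of the two behaviors above (names are invented; the actual installer code differs):

```
import os
from pathlib import Path

# 1. Default the install location to $INVOKEAI_ROOT when it is set.
default_root = Path(os.environ.get("INVOKEAI_ROOT", "~/invokeai")).expanduser()

# 2. Cap the TUI model list at 72 entries to avoid vertical overflow.
MAX_TUI_MODELS = 72


def truncate_for_tui(models: list[str]) -> tuple[list[str], bool]:
    """Return the (possibly truncated) list and whether to warn the user."""
    if len(models) <= MAX_TUI_MODELS:
        return models, False
    return models[:MAX_TUI_MODELS], True
```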
cafcd16657 Merge branch 'main' into install/tui-tweaks 2023-07-30 08:19:45 -04:00
2537ff0280 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-30 08:17:36 -04:00
0f5f08e494 Merge branch 'bugfix/model-manager-rel-paths' of github.com:invoke-ai/InvokeAI into bugfix/model-manager-rel-paths 2023-07-30 08:17:21 -04:00
e20c4dc1e8 blackified 2023-07-30 08:17:10 -04:00
6dc4ddef1b Fix various bugs in ckpt to diffusers conversion script (#4065)
## What type of PR is this? (check all applicable)


- [x] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

This PR fixes several issues with the 3.0.0 conversion script:

- Handles checkpoint variants that don't put dots between fields in the
long state dict key names
- Handles ema, non-ema, pruned and non-pruned ckpts
- Does not add safety checker to converted checkpoints
- Respects precision of original checkpoint file
2023-07-30 08:16:37 -04:00
26af5ec341 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-30 08:08:17 -04:00
10b182f316 Merge branch 'main' into bugfix/convert-script 2023-07-30 08:07:51 -04:00
ac84a9f915 reenable display of autoloaded models 2023-07-30 08:05:05 -04:00
844578ab88 fix lora loading crash 2023-07-30 07:57:10 -04:00
ff1c40747e lint: formatting 2023-07-29 20:02:31 -07:00
dbfd1bcb5e Merge branch 'main' into refactor/model_manager_instantiate 2023-07-29 19:53:21 -07:00
444390617f rebuild front end 2023-07-29 22:00:16 -04:00
6cb40d9d7b bump version for hotfix 3.0.1post1 2023-07-29 21:58:57 -04:00
ca895a9cd0 Unpin pydantic and numpy in pyproject.toml (#4062)
## What type of PR is this? (check all applicable)

- [x] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] Not needed

## Description

Windows users have been getting a lot of OSErrors while installing 3.0.1
during the pip dependency installation phase. Generally the errors have
involved just two packages, pydantic and numpy. Looking at the install
logs, I see that both of these packages are first installed under one
version number by a dependency, and then uninstalled and replaced by a
slightly different version specified in invoke's `pyproject.toml`. I
think this is the problem - maybe the earlier package is not completely
closed before it is uninstalled and reinstalled.

This PR relaxes pinning of numpy and pydantic in `pyproject.toml`.
Everything seems to install and run properly. Hopefully it will address
the Windows install bug as well.
2023-07-29 21:57:21 -04:00
7d27c7b1a4 Merge branch 'main' into lstein/no-pydantic-in-pyproject 2023-07-29 21:47:16 -04:00
6c82229910 fix recovery recipe 2023-07-29 20:00:06 -04:00
43b1eb8e84 wording changes 2023-07-29 19:49:58 -04:00
b10b07220e blackify code 2023-07-29 19:20:20 -04:00
c2eb50d1cd make installer use initial INVOKEAI_ROOT as default install location 2023-07-29 19:19:42 -04:00
73f3b7f84b remove dangling comment 2023-07-29 17:32:33 -04:00
bb18251fad Merge branch 'bugfix/convert-script' of github.com:invoke-ai/InvokeAI into bugfix/convert-script 2023-07-29 17:31:02 -04:00
348bee8981 blackified 2023-07-29 17:30:54 -04:00
078b33bda2 Merge branch 'main' into bugfix/convert-script 2023-07-29 17:30:40 -04:00
e82eb0b9fc add correct optional annotation to precision arg 2023-07-29 17:30:21 -04:00
ad976e5198 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-29 17:27:16 -04:00
0e28961e69 Merge branch 'main' into lstein/no-pydantic-in-pyproject 2023-07-29 17:27:02 -04:00
6ce059f063 blackified again 2023-07-29 17:26:40 -04:00
1de783b1ce fix mistake in indexing flat_ema_key 2023-07-29 17:20:26 -04:00
3f9105be50 make convert script respect setting of use_ema in config file 2023-07-29 17:17:45 -04:00
781322a647 installer respects INVOKEAI_ROOT for default root location 2023-07-29 16:16:44 -04:00
9a1cfadd8b fix: SDXL Metadata not being retrieved (#4057)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Description

- SDXL Metadata was not being retrieved. This PR fixes it.
2023-07-29 15:37:02 -04:00
2a2d988928 convert script handles more ckpt variants 2023-07-29 15:28:39 -04:00
ccceb32a85 lint: formatting 2023-07-29 11:50:04 -07:00
72c519c6ad fix incorrect key construction 2023-07-29 13:51:47 -04:00
af12f67948 Merge branch 'lstein/no-pydantic-in-pyproject' of github.com:invoke-ai/InvokeAI into lstein/no-pydantic-in-pyproject 2023-07-29 13:28:38 -04:00
60f5606c2d downgrade torchmetrics to fix model import problem 2023-07-29 13:28:29 -04:00
24b19166dd further refactoring 2023-07-29 13:13:22 -04:00
0396bce4f9 Merge branch 'main' into lstein/no-pydantic-in-pyproject 2023-07-29 13:06:30 -04:00
71768f5988 restore unpinned versions of pydantic and numpy 2023-07-29 13:04:34 -04:00
0fb7328022 blackify code 2023-07-29 13:00:43 -04:00
99daa97978 more refactoring; fixed place where rel conversion missed 2023-07-29 13:00:07 -04:00
21617e60e1 Merge remote-tracking branch 'origin/main' into refactor/model_manager_instantiate 2023-07-29 08:21:26 -07:00
982a568349 blackify pr 2023-07-29 10:47:55 -04:00
d79d5a4ff7 modest refactoring 2023-07-29 10:45:26 -04:00
9968ff2893 fix relative model paths to be against config.models_path, not root 2023-07-29 10:30:27 -04:00
35dd58e273 chore: move PR template to .github/ dir 2023-07-29 12:59:56 +05:30
6d82a1019a fix: Black linting 2023-07-29 17:34:43 +12:00
6ed1bf7084 Merge branch 'main' into metadata-fix 2023-07-29 17:33:30 +12:00
974175be45 fix: Prompt Node using incorrect output type (#4058)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-29 17:32:10 +12:00
86b8b69e88 internal(ModelManager): add instantiate method 2023-07-28 22:30:25 -07:00
bc9a5038fd refactor(ModelManager): factor out get_model_path 2023-07-28 22:29:36 -07:00
bee678fdd1 fix: Prompt Node using incorrect output type 2023-07-29 17:12:25 +12:00
c5caf1e8fe fix: SDXL Metadata not being retrieved 2023-07-29 17:03:19 +12:00
b163ae6a4d refactor(ModelManager): factor out get_model_config 2023-07-28 21:30:20 -07:00
dca685ac25 refactor(ModelManager): refactor rescan-on-miss to exists() method 2023-07-28 21:11:00 -07:00
72708eb53c Feat/Nodes: Change Input to Textbox (#3853)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:
not yet, making pr to show
      
## Have you updated relevant documentation?
- [ ] Yes
- [ ] No


## Description
Temporary change of the node String input to a textbox, to allow easier input of
prompts and larger strings. It works for me, but please tell me if I did
it wrong and if the size is OK.

## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-29 16:10:32 +12:00
aae1670080 fix: Incorrect Prompt Node output type 2023-07-29 16:04:19 +12:00
e70bedba7d refactor(ModelManager): factor out _get_implementation method 2023-07-28 21:03:27 -07:00
1e776d2523 chore: Regen types 2023-07-29 15:59:52 +12:00
8e06e6abbc feat: Update 'style' string input to also display text area 2023-07-29 15:52:59 +12:00
8a0e1b6cfc feat: Create Prompt Input Node 2023-07-29 15:52:37 +12:00
2d9bc79ca4 Merge branch 'main' into nodepromptsize 2023-07-29 12:43:29 +10:00
6886eb094d Make more Simple 2023-07-29 12:40:17 +10:00
6ca0c38ee3 Merge branch 'main' into feat/onnx 2023-07-28 22:06:28 -04:00
d633eb1612 remove pydantic and numpy from pyproject.toml 2023-07-28 21:56:22 -04:00
1bbf2f269d Update installer 2023-07-28 21:02:48 -04:00
ac22652686 rebuild front end 2023-07-28 18:21:05 -04:00
77cfec5cc8 Release 3.0.1 release candidate 3 (#4025)
Branch used to rebuild front end and make minor doc changes during
release process. Merge before next release.
2023-07-28 18:03:44 -04:00
3e4420c1ae bugfix: Float64 error for mps devices on set_timesteps (#4040)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: minor fix, let me know your thoughts

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No

## Description


## Related Tickets & Documents

- Related Issue # https://github.com/invoke-ai/InvokeAI/issues/4017
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [x] No : Requires mps device

## [optional] Are there any post deployment tasks we need to perform?

Please test on an MPS (M1/M2) device. 

Relevant code causing the error in #4017 


01b6ec21fa/src/diffusers/schedulers/scheduling_euler_discrete.py (L263C3-L268C75)

```
        self.sigmas = torch.from_numpy(sigmas).to(device=device)
        if str(device).startswith("mps"):
            # mps does not support float64
            self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32)
        else:
            self.timesteps = torch.from_numpy(timesteps).to(device=device)
```
2023-07-28 18:02:39 -04:00
f8181ab1b3 fix: Concat Link Styling (#4048)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Description

- Fix SDXL Concat Link animation not considering the fact that prompt
boxes can be resized.
- Also fixed a minor issue where the overlaying animation box would
block the prompt input resize slightly. Should be good now.
2023-07-28 18:02:22 -04:00
3dfeead9b8 Update troubleshooting guide with ~ydantic and SDXL unet issue advice (#4054)
## What type of PR is this? (check all applicable)


- [x] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes


## Have you updated all relevant documentation?
- [x] Yes

## Description

Added solutions for installation issues related to large SDXL files and
Windows dependency glitches.
2023-07-28 18:02:04 -04:00
d3f6c7f983 Remove onnxruntime 2023-07-28 16:58:06 -04:00
390ce9f249 Fix onnx installer 2023-07-28 16:54:03 -04:00
3da0be7eb9 update troubleshooting guide with ~ydantic and SDXL unet issue workarounds 2023-07-28 16:42:57 -04:00
8935ae0ea3 Fix issues caused by merge 2023-07-28 14:00:32 -04:00
31e5f4bb0e Merge branch 'main' into set-timestep-mps-fix 2023-07-28 08:58:12 -07:00
2164674b01 Black format 2023-07-28 07:49:29 -07:00
8f2a646286 fix: Lint errors 2023-07-29 02:37:59 +12:00
5ff4dd26bb fix: Concat Link Styling 2023-07-29 02:30:10 +12:00
e342ca872f fix to work on non-MPS systems 2023-07-28 10:27:49 -04:00
a2aa66f43a Run Python black 2023-07-28 10:00:09 -04:00
da751da3dd Merge branch 'main' into feat/onnx 2023-07-28 09:59:35 -04:00
2b7b3dd4ba Run python black 2023-07-28 09:46:44 -04:00
dc1148106d Just install onnxruntime by default 2023-07-28 09:32:43 -04:00
062a369044 feat: Unify Promp Area Styling (#4033)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

Making the prompt area styling match across all tabs / models and moving
all prompt-related components into a parent component for quick add.

Cherry picked stuff from the Styles PR coz we ain't gonna merge that.


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-28 22:10:08 +12:00
e4a2f56ad1 feat(ui): tweak link colors
- make the `SDXLConcatLink` icon match existing colors in light mode
- make the link toggle button accent color when active (it's not super obvious but at least there is *some* visual difference for the button)
2023-07-28 19:57:05 +10:00
1df30f7260 feat: Pulse Animate SDXL Concat Link 2023-07-28 20:45:39 +12:00
514722d67a Update definitions to be more accurate 2023-07-28 18:35:05 +10:00
5dbde2116f Merge branch 'invoke-ai:main' into main 2023-07-28 18:34:33 +10:00
14c4650801 fix: Lint Errors (returning possible null component) 2023-07-28 19:03:29 +12:00
f155b03eee feat: New animation for Concat Link 2023-07-28 18:55:59 +12:00
ddaf753f7b Merge branch 'set-timestep-mps-fix' of ssh://github.com/ZachNagengast/InvokeAI into set-timestep-mps-fix 2023-07-27 23:40:44 -07:00
e6d14c708c Fix variable name 2023-07-27 23:40:23 -07:00
7f81a95b20 Merge branch 'main' into set-timestep-mps-fix 2023-07-28 16:12:07 +10:00
6a49eec606 feat: Add Concat Link Animation
Might remove the lines. Just pushing to save changes for now.
2023-07-28 15:01:40 +12:00
fd67b18c9a Merge branch 'main' into unify-prompt 2023-07-28 14:48:13 +12:00
9affdbbaad chore: black 2023-07-28 11:38:52 +10:00
8d300bddd0 feat(ui): alias existing type for UpdateLoRAModelResponse 2023-07-28 11:38:52 +10:00
aa2c94be9e make LoRAs editable 2023-07-28 11:38:52 +10:00
4c79350300 persist LoRA model info in models.yaml 2023-07-28 11:38:52 +10:00
10e1d623c3 Add LoRAs to the model manager. 2023-07-28 11:38:52 +10:00
aa1f827271 Fix unet_info location, can have no device prop 2023-07-27 14:47:09 -07:00
fb113b9077 Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 16:24:29 -04:00
bb9460d278 Unify uvicorn and backend logging (#4034)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes


## Have you updated all relevant documentation?
- [x] Yes - this makes invokeai behave the way it is described in
LOGGING.md

## Description

Prior to this PR, the uvicorn embedded web server did all its logging
independently of the InvokeAI logging infrastructure, and none of the
command-line or `invokeai.yaml` configuration directives, such as
`log_level` had any effect on its output. This PR replaces the uvicorn
logger with InvokeAI's, simultaneously creating a more uniform output
experience, as well as a unified way to control the amount and
destination of the logs.

Here's what we used to see at startup:
```
[2023-07-27 07:29:48,027]::[InvokeAI]::INFO --> InvokeAI version 3.0.1rc2                                                                                                                               
[2023-07-27 07:29:48,027]::[InvokeAI]::INFO --> Root directory = /home/lstein/invokeai-main                                                                                                             
[2023-07-27 07:29:48,028]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 4070                                                                                                               
[2023-07-27 07:29:48,040]::[InvokeAI]::INFO --> Scanning /home/lstein/invokeai-main/models for new models                                                                                               
[2023-07-27 07:29:49,263]::[InvokeAI]::INFO --> Scanned 22 files and directories, imported 10 models                                                                                                    
[2023-07-27 07:29:49,271]::[InvokeAI]::INFO --> Model manager service initialized                                                                                                                       
INFO:     Application startup complete.                                                                                                                                                                 
INFO:     Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)                                                                                                                               
INFO:     127.0.0.1:44404 - "GET /socket.io/?EIO=4&transport=polling&t=OcN7Pvd HTTP/1.1" 200 OK                                                                                                         
INFO:     127.0.0.1:44404 - "POST /socket.io/?EIO=4&transport=polling&t=OcN7Pw6&sid=SB-NsBKLSrW7YtM0AAAA HTTP/1.1" 200 OK                                                                               
INFO:     ('127.0.0.1', 44418) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=SB-NsBKLSrW7YtM0AAAA" [accepted]                                                                                  
INFO:     connection open                                                                                                                                                                               
INFO:     127.0.0.1:44430 - "GET /socket.io/?EIO=4&transport=polling&t=OcN7Pw9&sid=SB-NsBKLSrW7YtM0AAAA HTTP/1.1" 200 OK                                                                                
INFO:     127.0.0.1:44404 - "GET /socket.io/?EIO=4&transport=polling&t=OcN7PwU&sid=SB-NsBKLSrW7YtM0AAAA HTTP/1.1" 200 OK                                                                                
INFO:     127.0.0.1:44404 - "GET /api/v1/images/?is_intermediate=true HTTP/1.1" 200 OK                                                                                                                  
INFO:     127.0.0.1:43448 - "GET / HTTP/1.1" 200 OK                                                                                                                                                     
INFO:     connection closed                                                                                                                                                                             
INFO:     127.0.0.1:43448 - "GET /assets/index-5a784cdd.js HTTP/1.1" 200 OK                                                                                                                             
INFO:     127.0.0.1:43458 - "GET /assets/favicon-0d253ced.ico HTTP/1.1" 304 Not Modified                                                                                                                
INFO:     127.0.0.1:43448 - "GET /locales/en.json HTTP/1.1" 200 OK                                                                                                                                      
```

And here's what we see with the new implementation:
```
[2023-07-27 12:05:28,810]::[uvicorn.error]::INFO --> Started server process [875161]
[2023-07-27 12:05:28,810]::[uvicorn.error]::INFO --> Waiting for application startup.
[2023-07-27 12:05:28,810]::[InvokeAI]::INFO --> InvokeAI version 3.0.1rc2
[2023-07-27 12:05:28,810]::[InvokeAI]::INFO --> Root directory = /home/lstein/invokeai-main
[2023-07-27 12:05:28,811]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 4070
[2023-07-27 12:05:28,823]::[InvokeAI]::INFO --> Scanning /home/lstein/invokeai-main/models for new models
[2023-07-27 12:05:29,970]::[InvokeAI]::INFO --> Scanned 22 files and directories, imported 10 models
[2023-07-27 12:05:29,977]::[InvokeAI]::INFO --> Model manager service initialized
[2023-07-27 12:05:29,980]::[uvicorn.error]::INFO --> Application startup complete.
[2023-07-27 12:05:29,981]::[uvicorn.error]::INFO --> Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2023-07-27 12:05:32,140]::[uvicorn.access]::INFO --> 127.0.0.1:45236 - "GET /socket.io/?EIO=4&transport=polling&t=OcO6ILb HTTP/1.1" 200
[2023-07-27 12:05:32,142]::[uvicorn.access]::INFO --> 127.0.0.1:45248 - "GET /socket.io/?EIO=4&transport=polling&t=OcO6ILb HTTP/1.1" 200
[2023-07-27 12:05:32,154]::[uvicorn.access]::INFO --> 127.0.0.1:45236 - "POST /socket.io/?EIO=4&transport=polling&t=OcO6ILr&sid=13O4-5uLx5eP-NuqAAAA HTTP/1.1" 200
[2023-07-27 12:05:32,168]::[uvicorn.access]::INFO --> 127.0.0.1:45252 - "POST /socket.io/?EIO=4&transport=polling&t=OcO6IM0&sid=0KRqxmh7JLc8t7wZAAAB HTTP/1.1" 200
[2023-07-27 12:05:32,171]::[uvicorn.error]::INFO --> ('127.0.0.1', 45264) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=0KRqxmh7JLc8t7wZAAAB" [accepted]
[2023-07-27 12:05:32,172]::[uvicorn.error]::INFO --> connection open
[2023-07-27 12:05:32,174]::[uvicorn.access]::INFO --> 127.0.0.1:45276 - "GET /socket.io/?EIO=4&transport=polling&t=OcO6IM3&sid=0KRqxmh7JLc8t7wZAAAB HTTP/1.1" 200

```

You can also divert everything to a file with an `invokeai.yaml` entry
like this:
```
  Logging:
    log_handlers:
    - file=/home/lstein/invokeai/logs/access_log
    log_format: plain
    log_level: info
```

This works with syslog and other log handlers as well.
2023-07-27 15:56:42 -04:00
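The general mechanism, sketched with the standard logging module (the actual wiring in the PR may differ):

```
import logging

# Replace uvicorn's handlers with InvokeAI's so that log_level,
# log_format and log_handlers from invokeai.yaml apply uniformly.
invokeai_logger = logging.getLogger("InvokeAI")
for name in ("uvicorn.error", "uvicorn.access"):
    uv_logger = logging.getLogger(name)
    uv_logger.handlers.clear()
    uv_logger.handlers.extend(invokeai_logger.handlers)
    uv_logger.setLevel(invokeai_logger.level)
```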
6edeb4e072 Pass device to set_timestep to avoid float64 error 2023-07-27 12:52:18 -07:00
2bb4e6d5aa Merge branch 'main' into feat/unify-logging 2023-07-27 15:48:06 -04:00
53028feb83 feat: Upgrade Diffusers to 0.19.0 (#4011)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

https://github.com/huggingface/diffusers/releases/tag/v0.19.0


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-27 15:39:02 -04:00
d983dd371c Merge branch 'diffusers-upgrade' of github.com:blessedcoolant/InvokeAI into diffusers-upgrade 2023-07-27 15:30:01 -04:00
17ee17a789 merge with main;resolve conflicts 2023-07-27 15:29:34 -04:00
6b3ec29480 Merge branch 'main' into diffusers-upgrade 2023-07-27 15:27:55 -04:00
4a30773d09 Merge branch 'main' into feat/unify-logging 2023-07-27 15:25:56 -04:00
006075483d Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 15:21:08 -04:00
1ea9ba84f5 Release session if applying ti or lora 2023-07-27 15:20:38 -04:00
64bd11541a Merge branch 'main' into feat/unify-logging 2023-07-27 15:20:07 -04:00
52bd29d484 Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 15:19:05 -04:00
41b13e83a5 Support Python 3.11 (#3966)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Have you updated all relevant documentation?
- [x] Yes
- [ ] No

## Description

This updates InvokeAI's pyproject.toml to the minimum library versions
needed to support Python 3.11. It updates the installer to find and
allow for 3.11, and the documentation.

Between 3.10 and 3.11 there was a change to the handling of `enum`
interpolation into strings that caused the model manager to break. I
think I have fixed the places where this was a problem, but there may be
other instances in which this will cause problems. Please be alert for
errors involving `ModelType` or `BaseModelType`.

I also took the opportunity to add a `SilenceWarnings()` context to the
t2i and i2i invocations. This quenches nags from diffusers about the
HuggingFace NSFW library.

I have tested basic functionality (t2i, i2i, inpaint, lora, controlnet,
nodes) on 3.10 and 3.11 and all seems good. Please test more
extensively!

## Added/updated tests?

- [x] Yes - existing tests run to completion
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?

Should be a drop-in replacement.
2023-07-27 15:18:21 -04:00
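The enum interpolation change in question, illustrated with a stand-in enum (the member shown here is simplified):

```
from enum import Enum


class ModelType(str, Enum):
    MAIN = "main"


# Python 3.10: f"{ModelType.MAIN}" == "main"
# Python 3.11: f"{ModelType.MAIN}" == "ModelType.MAIN"
# Interpolating .value is explicit and behaves the same on both:
print(f"{ModelType.MAIN.value}")  # -> main
```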
0d8f9cbe55 resolved conflicts with main 2023-07-27 15:11:25 -04:00
fd75a1dd10 reformat with black 2023-07-27 15:01:00 -04:00
bfdc8c80f3 Testing caching onnx sessions 2023-07-27 14:13:29 -04:00
3bb81bedbd Merge branch 'main' into unify-prompt 2023-07-28 05:36:04 +12:00
e191f6d4b2 prevent resize error (#4031)
* add upper bound for minWidth to prevent crash with cypress

* add fallback so UI doesn't crash when backend isn't running

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-27 17:30:31 +00:00
00988e4972 (installer) check that the found Python executable is actually operational
when multiple python versions are installed with `pyenv`, the executable
(shim) exists, but returns an error when trying to run it
unless activated with `pyenv`. This commit ensures the python
executable is usable.
2023-07-27 13:28:00 -04:00
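Conceptually (a sketch, not the installer's exact code): run the candidate interpreter rather than merely checking that it exists on PATH.

```
import shutil
import subprocess


def python_is_usable(python: str) -> bool:
    """A pyenv shim can exist on PATH yet fail when executed; try it."""
    try:
        proc = subprocess.run(
            [python, "-c", "print('ok')"],
            capture_output=True, text=True, timeout=15,
        )
    except OSError:
        return False
    return proc.returncode == 0 and proc.stdout.strip() == "ok"


candidate = shutil.which("python3")
print(candidate, python_is_usable(candidate) if candidate else False)
```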
7d458eb1ac Dev/black (#3840)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature (dev feature and reformatting)
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Description
Introducing black to the code base as a first step towards this:
https://github.com/invoke-ai/InvokeAI/discussions/3721

## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [x] No : Not applicable

## [optional] Are there any post deployment tasks we need to perform?
All active branches will be affected by this and will need to be
updated.

This PR adds a new github workflow for black as well as config for
pre-commit hooks to those who wish to use it
2023-07-27 12:59:47 -04:00
b8b46aec09 Revert "fix: Lint Errors"
This reverts commit f057d5c85b.
2023-07-28 04:34:41 +12:00
4d2b87ea01 fix(ui): fix types for controlnet models
`ControlNetModelConfig` was split into `ControlNetModelCheckpointConfig` and `ControlNetModelDiffusersConfig`, need to update the UI types
2023-07-28 04:34:29 +12:00
8023a23cec beat uvicorn access log into submission 2023-07-27 12:05:17 -04:00
e4c0102b3c unified uvicorn access log entries too 2023-07-27 11:59:29 -04:00
16d044336f (meta) hide the 'black' formatting commit from git blame
also remove lib/ from gitignore as it is hiding the installer code
from 'black'
2023-07-27 11:29:22 -04:00
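Git's standard mechanism for this (git 2.23+; GitHub's blame view reads the same file) looks roughly like:

```
# record the reformatting commit's hash in a dedicated file
echo <sha-of-black-commit> >> .git-blame-ignore-revs
# tell local blame to skip it
git config blame.ignoreRevsFile .git-blame-ignore-revs
```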
c4a2808a4b use same logging infrastructure for uvicorn and backend 2023-07-27 11:24:07 -04:00
59716938bf Remove TensorRT support at the current time until we validate it works, remove time step recorder 2023-07-27 11:18:50 -04:00
611f31c057 fix: Adjust embedding button on PositivePrompt for new changes 2023-07-28 03:07:50 +12:00
b60adc31d0 feat: Unify Prompt Area Design Between SDXL and Regular Models 2023-07-28 03:07:50 +12:00
a98ed3a5ba fix: TextArea Resizer styling when disabled 2023-07-28 03:06:31 +12:00
f057d5c85b fix: Lint Errors 2023-07-28 03:06:31 +12:00
918a0dedc0 Always install onnx 2023-07-27 11:00:40 -04:00
218b6d0546 Apply black 2023-07-27 10:54:01 -04:00
2183dba5c5 Remove whitespace and commented out pre-commit hooks 2023-07-27 10:53:27 -04:00
a491e326c5 This is no longer needed 2023-07-27 10:52:36 -04:00
f7bb4c3f05 Remove more files no longer needed in main 2023-07-27 10:49:43 -04:00
57271ad125 Move onnx to optional dependencies 2023-07-27 10:28:26 -04:00
33245b37ad Removed things no longer needed in main 2023-07-27 10:23:55 -04:00
81d8fb8762 Removed things no longer needed in main 2023-07-27 10:14:55 -04:00
fc9dacd082 Black/flake8 line length 100->120 2023-07-27 10:12:25 -04:00
8b4af69d87 Black config, pre-commit and GHA 2023-07-27 10:09:04 -04:00
989d3d7f3c Remove onnx changes from canvas img2img, inpaint, and linear image2image 2023-07-27 10:08:45 -04:00
d2a46b4308 Fix dist and schema after merge 2023-07-27 09:55:28 -04:00
eb1ba8d74b Merge branch 'main' into feat/onnx 2023-07-27 09:54:30 -04:00
4ebde013ea Allow deleting onnx models in model manager ui 2023-07-27 09:50:20 -04:00
024f92f9a9 Add onnx models to the model manager UI 2023-07-27 09:37:37 -04:00
562c937a14 Updated new user flow 2023-07-27 21:46:39 +10:00
5300e353d8 updated community nodes doc 2023-07-27 18:58:44 +10:00
d78c97f8a8 Updated getting started guide and links 2023-07-27 18:51:48 +10:00
52f61698e9 added getting started with Invoke guide 2023-07-27 18:29:12 +10:00
6f54fe9003 fix(ui): fix types for controlnet models
`ControlNetModelConfig` was split into `ControlNetModelCheckpointConfig` and `ControlNetModelDiffusersConfig`, need to update the UI types
2023-07-27 15:46:50 +10:00
895917c3ab Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 01:02:38 -04:00
be00a837cc hotfix to remove duplicate key in INITIAL_MODELS 2023-07-27 00:38:18 -04:00
dcb85b0097 rebuild frontend; bump version 2023-07-27 00:37:23 -04:00
5956c601f7 Restore ability to convert SDXL checkpoints to diffusers (#4021)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Have you updated all relevant documentation?
- [ ] Yes
- [x] Not needed


## Description

This bugfix enables InvokeAI to convert sd-1, sd-2 and sdxl base model
checkpoints (.safetensors) to diffusers.
2023-07-27 00:29:13 -04:00
b67041dd29 Merge branch 'main' into bugfix/convert-sdxl-models 2023-07-27 00:24:37 -04:00
5b62d97a47 install SDXL "fixed" VAE (#4020)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

This PR causes the installer to install, by default, the fine-tuned
SDXL-1.0 VAE located at
https://huggingface.co/madebyollin/sdxl-vae-fp16-fix.

Although this VAE is supposed to run at fp16 precision, currently it
only works in InvokeAI at fp32. However, because it is a fine tune, it
may have fewer of the watermark-related artifacts that we see with the
SDXL-1.0 VAE.
2023-07-27 00:14:58 -04:00
c02b9db064 Merge branch 'main' into bugfix/convert-sdxl-models 2023-07-27 00:08:15 -04:00
2e19b23eed Merge branch 'main' into feat/install-finetune-sdxl-vae 2023-07-27 00:06:00 -04:00
f7f20fdfe4 Configure script should not overwrite models.yaml if it is well formed (#4019)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Have you updated all relevant documentation?
- [ ] Yes
- [x] Not necessary


## Description

When adding the new core models needed to support SDXL to a 3.0.0 root
directory, the configure script was (under some conditions) overwriting
models.yaml. This PR corrects the problem.
2023-07-27 00:03:51 -04:00
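Schematically, the guard might look like this (the well-formedness criterion here is an assumption):

```
from pathlib import Path

import yaml


def models_yaml_is_well_formed(path: Path) -> bool:
    """Only regenerate models.yaml when it is missing or unparseable."""
    try:
        data = yaml.safe_load(path.read_text())
    except (OSError, yaml.YAMLError):
        return False
    return isinstance(data, dict) and len(data) > 0
```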
61aff8540c fix refiner conversion 2023-07-27 00:02:10 -04:00
2b7807e6a0 Merge branch 'main' into fix/yaml-file-delete 2023-07-26 23:45:43 -04:00
fc19624bd8 Rework configure/install TUI to require less space (#3989)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

I have reworked the console TUIs for the configure and model install
scripts to require much less vertical space. In the event that the
"NEXT" button is still missing and "page 1/2" is displayed, scrolling
beyond the last checkbox will now automatically move to page 2 where the
buttons are displayed. This is not ideal, but will no longer block the user
completely.

If users continue to have problems after this, I'll get rid of the TUI
altogether and replace it with a web form.

## Added/updated tests?

- [ ] Yes
- [x] No : not needed

## [optional] Are there any post deployment tasks we need to perform?
2023-07-26 23:44:50 -04:00
77946bfea5 restore ability to convert SDXL checkpoints to diffusers 2023-07-26 23:28:58 -04:00
d4d4d749f2 Merge branch 'release/invokeai-3-0-1' 2023-07-26 23:15:26 -04:00
43fe8b1dda Merge branch 'main' into fix/reduce-configure-vertical 2023-07-26 23:12:25 -04:00
3e441f773f Documentation updates for SDXL license terms, invisible watermark (#4012)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because they trust me


## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

* Added the RAIL++ license for SDXL
* Updated configure script with URLs for both the original RAIL-M and
RAIL++ licenses
* Added invisible watermark documentation and renamed doc file
* Updated documentation for installation
* Updated documentation on settings in invokeai.yaml
2023-07-26 23:11:58 -04:00
9c4acb9d3f install SDXL "fixed" VAE 2023-07-26 23:06:27 -04:00
451b8c96e5 do not overwrite models.yaml if it is well formed 2023-07-26 22:29:39 -04:00
b8376a4932 Merge branch 'main' into fix/reduce-configure-vertical 2023-07-26 22:16:38 -04:00
0d344872f1 fix: Metadata Not Being Saved (#4009)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

Metadata was not getting saved coz the accumulator was not plugged in if
watermark or nsfw nodes were turned off.

## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-26 22:15:32 -04:00
4bfbdb0d97 chore(ui): lint 2023-07-27 11:58:59 +10:00
049e666412 fix(ui): revise metadata edges in linear graphs
- always add metadata to l2i nodes
- no metadata handling for inpaint, removed
2023-07-27 09:43:45 +10:00
83a981b585 merge with main; fix SDXL repo_ids 2023-07-26 17:38:06 -04:00
049645d66e updated LICENSE files and added information about watermarking 2023-07-26 17:27:33 -04:00
4d732e06de Remove onnx models from img2img and unified canvas 2023-07-26 16:30:02 -04:00
c90c4a32ee Merge branch 'main' into metadata-fix 2023-07-27 08:08:11 +12:00
3ff8c87c09 feat: Upgrade Diffusers to 0.19.0 2023-07-27 08:00:12 +12:00
f26a423e95 Fix merge issue 2023-07-26 15:32:28 -04:00
0100ac8f2d Merge branch 'main' into release/invokeai-3-0-1 2023-07-26 15:27:06 -04:00
6a3a776f4e Bugfix/checkpoint conversion (#4010)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because there was no time!


## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

Hotfix for issue of SD1 and SD2 legacy safetensors models not converting
in 3.0.1rc1.
2023-07-26 15:21:16 -04:00
020031f376 add all legacy model .yaml files to configs directory unconditionally 2023-07-26 15:17:00 -04:00
7053347559 fix: Metadata Not Being Saved 2023-07-27 07:09:51 +12:00
bf1f6619df fix conversion for sd1 and sd2 models 2023-07-26 15:02:32 -04:00
6bdcc32414 rebuild frontend for rc1 release (again) 2023-07-26 13:36:42 -04:00
4f39c81dec Merge branch 'main' into release/invokeai-3-0-1 2023-07-26 13:33:15 -04:00
3376968cbb fix: Prompt Drawer Unpinned not having SDXL UI 2023-07-26 13:30:43 -04:00
0420d75d2b fix: Improve Styling of SDXL Prompt Area 2023-07-26 13:30:43 -04:00
3bd9c27a79 feat: Add SDXL Style Prompt Concat Toggle 2023-07-26 13:30:43 -04:00
b6522cf2cf fix: SDXL - Concat Prompt and Style for Style Prompt 2023-07-26 13:30:43 -04:00
861c0fe76b Correct issues caused by merging main 2023-07-26 12:25:46 -04:00
13ac5c6899 enable hide localization toggle (#4004)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-27 03:01:52 +12:00
05070304ff Merge branch 'release/invokeai-3-0-1' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-1
- fix log message
2023-07-26 11:00:57 -04:00
af8fc6ff82 final polish before release candidate
- Fix issue that prevented web ui from starting if
  ROOT/databases/invokeai.db not found.

- Rebuild front end
2023-07-26 10:59:23 -04:00
f86d0d1b69 hide localization toggle 2023-07-26 10:55:38 -04:00
e6741cee75 rebuild front end 2023-07-26 10:47:37 -04:00
c16da75ac7 Merge branch 'main' into feat/onnx 2023-07-26 10:42:31 -04:00
575ebaeb75 Merge PR #3944 2023-07-26 10:25:59 -04:00
385483ff8e Download all model types. (#3944) 2023-07-26 10:24:37 -04:00
c7f883d22a Merge branch 'main' into patch 2023-07-26 10:19:02 -04:00
58ff5d3f5b Merge branch 'main' into release/invokeai-3-0-1
- this includes the final set of PRs going into 3.0.1
2023-07-26 10:17:32 -04:00
f060e321eb NSFW checker and watermark nodes (#3923)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No

## Description

This PR adds NSFW checker and invisible watermark fields. The NSFW
checker takes an image input and produces an image output. If NSFW
content is detected, the output image will be blurred and a "caution"
icon pasted into its upper left corner. A boolean `active` field
controls whether the checker is active. If turned off, it simply returns
a copy of the image.

The invisible watermark node adds invisible text to the image,
defaulting to "InvokeAI". To decode the watermark, use the
`invisible-watermark` command, which is part of the
`invisible-watermark` library:

```
$ invisible-watermark -v -a decode -t bytes -m dwtDct -l 64 ./bluebird-watermark.png 
decode time ms: 14.129877090454102
InvokeAI
```

Note that the `-l` (length) argument is mandatory. It is set to 64 here
because the watermark `InvokeAI` is 8 bytes/64 bits long. The length
must match in order for the watermark to be decoded correctly.
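
For reference, the same embed/decode round trip can be done from Python via the `imwatermark` package that ships with the `invisible-watermark` library; a minimal sketch, assuming an `input.png` in the working directory:

```py
import cv2
from imwatermark import WatermarkDecoder, WatermarkEncoder

bgr = cv2.imread("input.png")

# Embed the 8-byte (64-bit) watermark with the same dwtDct method.
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", "InvokeAI".encode("utf-8"))
cv2.imwrite("output.png", encoder.encode(bgr, "dwtDct"))

# Decode: the length (64 bits) must match what was embedded.
decoder = WatermarkDecoder("bytes", 64)
print(decoder.decode(cv2.imread("output.png"), "dwtDct").decode("utf-8"))  # InvokeAI
```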

Both nodes are now incorporated into the linear Text2Image and
Image2Image UIs, including the canvas. They are not implemented for
inpaint currently.

The nodes can be disabled with configuration options:
```
invisible_watermark: false
nsfw_checker: false
```
or at launch time with `--no-invisible_watermark` and
`--no-nsfw_checker`.
2023-07-26 10:14:10 -04:00
dc8c3d8073 feat(ui): tweak menu style, increase icon size
feat(ui): use `as` for menuitem links

I had requested this be done with the chakra `Link` component, but actually using `as` is correct according to the docs. For other components, you are supposed to use `Link`, but it looks like `MenuItem` has this built in.

Fixed in all places where we use it.

Also:
- fix github icon
- give menu hamburger button padding
- add menu motion props so it animates the same as other menus

feat(ui): restore ColorModeButton

@maryhipp

chore(ui): lint

feat(ui): remove colormodebutton again

sry
2023-07-27 00:12:23 +10:00
819136c345 chore(ui): bump chakra versions
exposes more menu theming config
2023-07-27 00:12:23 +10:00
989b68c772 fix: Remove menu tooltip and fix incorrect issues page link 2023-07-27 00:12:23 +10:00
a6347a1d3c revert: Translation strings
These need to be done through Weblate. Only en.json needs to be updated via the repo.
2023-07-27 00:12:23 +10:00
a00d1e87e4 fix: Update Links to Links from Menu Items 2023-07-27 00:12:23 +10:00
c7d24081e2 fix: Scheduler list in Settings not displaying labels 2023-07-27 00:12:23 +10:00
17900e5140 fix: Fix Settings dropdown menu icons being too small 2023-07-27 00:12:23 +10:00
6fa42cb10c feat: consolidated app nav to settings & dropdown 2023-07-27 00:12:23 +10:00
4bea846199 Merge branch 'main' into feat/safety-checker-node 2023-07-26 10:04:23 -04:00
3dccc4d61e Add support for controlnet & sdxl checkpoint conversion (#3905)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No, not yet (WIP)


## Description

This PR adds support for loading and converting checkpoint-format
ControlNet and SDXL models. The SDXL and SDXL-refiner model conversions
are working; however, saving the UNet in safetensors format leads to
corrupted model files, so it currently saves in .bin format (after
scanning the input model).

ControlNet conversion seems to be working but needs further testing.

To use this PR, you will need to copy the files
`invokeai/configs/stable-diffusion/sd_xl_base.yaml` and
`invokeai/configs/stable-diffusion/sd_xl_refiner.yaml` into
`INVOKEAI/configs/stable-diffusion`. You will also need to run
`invokeai-configure --yes --skip-sd` in order to install additional core
model files needed by the converter.
2023-07-27 01:50:38 +12:00
bf0587da5f set defaults for watermark and NSFW checker to FALSE 2023-07-26 09:09:46 -04:00
58c0bee325 improved error message for running configure 2023-07-26 08:30:01 -04:00
b8f43f444a implemented startup sanity checks on core models 2023-07-26 08:26:29 -04:00
da76f6fee4 compress height needed by configure script 2023-07-26 08:00:19 -04:00
c4f064bbf3 Merge branch 'main' into feat/controlnet-and-sdxl-convert 2023-07-26 07:30:22 -04:00
0ce8472562 adjust unit test to account for nsfw always being true now 2023-07-26 07:29:33 -04:00
3e206d4d6a removed nsfw/watermark from invokeai.yaml 2023-07-26 06:53:35 -04:00
ce7fa96dbc Merge branch 'main' into feat/safety-checker-node 2023-07-26 06:39:46 -04:00
a705461c04 merge with recent main changes 2023-07-26 06:39:21 -04:00
fda7e0a71a 3.0.1 - Pre-Release UI Fixes (#4001)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes

      
## Description

- Update the Aspect Ratio tags to show the aspect ratio values rather
than Wide / Square, etc.
- Updated the LoRA input to take values between -50 and 50, because I found
some LoRAs that are actually trained to work at -25 and +15. These input
caps should mostly suffice; if there's ever a LoRA that goes bonkers
beyond that, we can change it.
- Fixed LoRAs being sorted the wrong way in Lora Select.
- Fixed Embeddings being sorted the wrong way in Embedding Select.


2023-07-26 21:22:33 +12:00
36455f6cac Merge branch 'main' into nodepromptsize 2023-07-26 18:54:54 +10:00
513b223ef6 fix(test): fix test_graph_subgraph_t2i
needed to be updated after adding the nsfw checker node to the graph
2023-07-26 18:49:29 +10:00
db05445103 fix(tests): fix test_path
- assets path has changed
2023-07-26 18:48:43 +10:00
30c3b7a6fc fix(ui): fix invoke button being disabled 2023-07-26 18:40:17 +10:00
2d0f932737 Lint Code 2023-07-26 18:35:04 +10:00
9e9dce44b4 fix: Embeddings not being sorted alphabetically 2023-07-26 20:34:14 +12:00
6fd8543e69 fix: LoRA's not being sorted alphabetically 2023-07-26 20:33:59 +12:00
db48f3230b feat(ui): add nsfw & watermark to linear ui
- add `addNSFWCheckerToGraph` and `addWatermarkerToGraph` functions
- use them in all linear graph creation
- add state & toggles to settings modal to enable these
- trigger queries for app config on socket connect
- disable the nsfw/watermark booleans if we get the app config and they are not available
2023-07-26 18:20:20 +10:00
397604a094 feat: Allow LoRA weights to be more than sliders via input
Found some LoRAs that need it.
2023-07-26 19:20:42 +12:00
f5139b174a fix(ui): Rename Aspect Ratio labels to their aspect ratios 2023-07-26 18:56:52 +12:00
050e5091db feat: Enable the Conversion button for SDXL Models 2023-07-26 17:32:50 +12:00
2c5b539d3a esrgan and its models are now nested in app config route 2023-07-26 15:27:04 +10:00
85ad5ef204 refactored code; added watermark and nsfw facilities to app config route 2023-07-26 15:27:04 +10:00
5beb11f4e2 tweaks in response to psychedelicious review of PR 2023-07-26 15:27:04 +10:00
844d37c642 rebuild schema 2023-07-26 15:27:04 +10:00
b3723d1ccf update documentation 2023-07-26 15:27:04 +10:00
bd43751323 update linear graphs to perform safety checking and watermarking 2023-07-26 15:27:04 +10:00
e32cd794f7 add safetychecker and watermark nodes 2023-07-26 15:26:45 +10:00
761fc4beb8 Temp fix for is intermediate switch for l2i 2023-07-26 15:17:59 +10:00
531bc40d3f feat: Add SDXL To Linear UI (#3973)
## What type of PR is this? (check all applicable)

- [x] Feature


## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Description

This PR adds support for SDXL Models in the Linear UI

### DONE

- SDXL Base Text To Image Support
- SDXL Base Image To Image Support
- SDXL Refiner Support
- SDXL Relevant UI


## [optional] Are there any post deployment tasks we need to perform?

Double-check to ensure nothing major changed with 1.0 models; in any case,
those changes would mostly be backend related. If the Refiner is scrapped
for 1.0 models, then we simply disable the Refiner graph.
2023-07-26 17:05:39 +12:00
676051edb9 fix(ui): fix missing args for model queries 2023-07-26 14:56:51 +10:00
de65b82569 chore: Fix lint errors 2023-07-26 16:51:58 +12:00
934f9afd7e feat(ui): Do not show SDXL Models in Canvas 2023-07-26 14:46:38 +10:00
1c01a31ee8 feat(ui): setActiveTab only works with tab names 2023-07-26 14:46:38 +10:00
c5389b3298 fix(ui): fix refiner steps math again 2023-07-26 14:46:38 +10:00
fdbab5ffa9 feat(ui): hide sync models button if feature is disabled 2023-07-26 14:46:38 +10:00
a6e544ebd5 fix(ui): fix refiner steps calculation for edge case of start = 1 2023-07-26 14:46:38 +10:00
75b0507434 feat(nodes): change denoising start/end min/max to 0/1 2023-07-26 14:46:38 +10:00
59c2556e6b feat: Move SDXL Image Denoising to own component 2023-07-26 14:46:38 +10:00
4fe889bbf8 fix: Possible fix to image to image / refiner setting sync
The main goal is to avoid noisy output no matter what the slider values are.
2023-07-26 14:46:38 +10:00
cbcd416b70 fix(ui): fix refiner missing from model manager
Rolled back the earlier split of the refiner model query.

Now, when you use `useGetMainModelsQuery()`, you must provide it an array of base model types.

They are provided as constants for simplicity:
- ALL_BASE_MODELS
- NON_REFINER_BASE_MODELS
- REFINER_BASE_MODELS

Opted to just use args for the hook instead of wrapping the hook in another hook; we can tidy this up later if desired.
2023-07-26 14:46:38 +10:00
6fa244a343 feat(ui): add vae precision select 2023-07-26 14:46:38 +10:00
e5a660930c feat(ui): add zod schemas for precision parameters 2023-07-26 14:46:38 +10:00
61291ea105 feat: sdxl metadata
- update `CoreMetadata` class & `MetadataAccumulator` with fields for SDXL-specific metadata
- update the linear UI graphs to populate this metadata
2023-07-26 14:46:38 +10:00
840205496a feat(nodes): fix model load events on sdxl nodes
they need the `context` to be provided to emit socket events
2023-07-26 14:46:38 +10:00
016797c890 feat(ui): add vaePrecision setting
no UI element for it yet
2023-07-26 14:46:38 +10:00
00e69d5d12 feat(ui): adjust seed param styling 2023-07-26 14:46:38 +10:00
8e90f9024d feat(ui): remove isRefinerAvailable state, update refiner node
We can derive `isRefinerAvailable` from the query result (e.g. are any refiner models installed). This is a piece of server state, so by using the list models response directly, we can avoid needing to manually keep the client in sync with the server.

Created a `useIsRefinerAvailable()` hook to return this boolean wherever it is needed.

Also updated the main models & refiner models endpoints to only return the appropriate models. Now we don't need to filter the data on these endpoints.
2023-07-26 14:46:38 +10:00
751c4407e4 feat(ui): add node type to invocation started 2023-07-26 14:46:38 +10:00
6c46304eb8 fix: Replug Image To Latents VAE back in the Refiner graph for img2img 2023-07-26 14:46:38 +10:00
0eb31c5710 fix: Cyclic push in the graph 2023-07-26 14:46:38 +10:00
6295e56d96 feat: Add SDXL Refiner to Linear UI 2023-07-26 14:46:38 +10:00
5202610160 feat: Move SDXL Refiner to own route & set appropriate disabled statuses 2023-07-26 14:46:38 +10:00
8d1b8179af feat: Create UI for SDXL Refiner Options 2023-07-26 14:46:38 +10:00
3bdb059eb7 wip: SDXL Refiner UI Data 2023-07-26 14:46:38 +10:00
b0ebd148fa feat: Add Style Prompts to Linear UI 2023-07-26 14:46:38 +10:00
9f94d0e52a feat: Create SDXL Slice 2023-07-26 14:46:38 +10:00
9c180da58a feat: Add SDXL Image To Image to Linear UI 2023-07-26 14:46:38 +10:00
57d833035d feat: Add SDXL Base To Linear Text To Image 2023-07-26 14:46:38 +10:00
c145681488 bump version number; add SDXL-1.0 to installer 2023-07-26 00:17:00 -04:00
3eaf8c3b2f Update stale issues action (#3960)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Updated the script to close stale issues with the newest version of
actions/stale.


## Added/updated tests?

- [x] No

## [optional] Are there any post deployment tasks we need to perform?
Not sure how this script gets kicked off
2023-07-26 14:08:22 +10:00
d9527bf445 Merge branch 'main' into main 2023-07-26 14:08:00 +10:00
032e9c8165 Merge branch 'main' into patch 2023-07-25 22:24:36 -04:00
dbc3d42afc install all recommended models with --yes; don't alter starter model screen 2023-07-25 22:24:03 -04:00
d5998ad3ef update images to link from docs/assets/nodes/ 2023-07-25 21:48:48 -04:00
a4c8d86faa add NODES.md image assets to docs/assets/nodes/ 2023-07-25 21:48:48 -04:00
f4da66aa0f Update NODES.md 2023-07-25 21:48:48 -04:00
7f5a89f567 add option to disable model syncing in UI 2023-07-26 11:18:38 +10:00
2db9b3b2ae Merge branch 'main' into patch 2023-07-25 16:27:10 -04:00
77107dfcbc Merge branch 'main' into main 2023-07-25 16:26:37 -04:00
e43e198102 rework configure/install TUI to require less space 2023-07-25 11:25:26 -04:00
2aefa921fe fix "unknown model type" error when rebasing a model with API
- Add command-line model probing script for dev use
- Minor documentation tweak
2023-07-25 08:36:57 -04:00
11e6ecc1bf Merge branch 'main' into feat/controlnet-and-sdxl-convert 2023-07-25 08:05:17 -04:00
7d337dccc2 docs generation: fix typo and remove trailing white space (#3972)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because this is a minor fix that I happened upon while reading.

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Within the `mkdocs.yml` file, there's a typo where `Model Merging` is
spelled as `Model Mergeing`. I also found some unnecessary white space
that I removed.


## Added/updated tests?

- [ ] Yes
- [x] No : Not big enough of a change to require tests (unless it is)

## [optional] Are there any post deployment tasks we need to perform?
Might need to re-run the yml file for docs to regenerate, but I'm hardly
familiar with the codebase so 🤷
2023-07-24 23:11:37 -04:00
91e903c8ab esrgan and its models are now nested in app config route 2023-07-24 22:17:22 -04:00
efa615a8fd refactored code; added watermark and nsfw facilities to app config route 2023-07-24 22:02:57 -04:00
cf10852ee3 uses v8 actions/stale@v8 2023-07-25 11:23:00 +10:00
437532f2f9 fix: ✏️ fix docs generation typo and remove trailing white space 2023-07-24 17:42:01 -06:00
8c449c4756 update documentation and installer to accept 3.11 2023-07-24 17:21:56 -04:00
fc4e104c61 tested on 3.11 and 3.10 2023-07-24 17:13:32 -04:00
4194a0ed99 tweaks in response to psychedelicious review of PR 2023-07-24 09:23:51 -04:00
7ce5b6504f rebuild schema 2023-07-24 08:25:39 -04:00
aea8ad5670 Update close-inactive-issues.yml with latest stale version 2023-07-24 20:52:34 +10:00
97f4475fdf Update close-inactive-issues.yml 2023-07-24 20:50:33 +10:00
4f9c728db0 feat(ui): display canvas generation mode in status text (#3915)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: n/a

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No n/a


## Description

Add a generation mode indicator to canvas.

- use the existing logic to determine if generation is txt2img, img2img,
inpaint or outpaint
- technically `outpaint` and `inpaint` are the same; just display
"Inpaint" if it's either
- debounce this by 1s to prevent jank

I was going to disable controlnet conditionally when the mode is inpaint
but that involves a lot of fiddly changes to the controlnet UI
components. Instead, I'm hoping we can get inpaint moved over to latents
by next release, at which point controlnet will work.

## QA Instructions, Screenshots, Recordings


https://github.com/invoke-ai/InvokeAI/assets/4822129/87464ae9-4136-4367-b992-e243ff0d05b4

## Added/updated tests?

- [ ] Yes
- [x] No : n/a

## [optional] Are there any post deployment tasks we need to perform?

n/a
2023-07-24 20:37:45 +12:00
7ea477abef Merge branch 'main' into feat/canvas-generation-mode 2023-07-24 20:34:25 +12:00
d42c394ab7 feat(nodes,ui): fix soft locks on session/invocation retrieval (#3910)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No, n/a


## Description

When a queue item is popped for processing, we need to retrieve its
session from the DB. Pydantic serializes the graph at this stage.

It's possible for a graph to have been made invalid during the graph
preparation stage (e.g. an ancestor node executes, and its output is not
valid for its successor node's input field).

When this occurs, the session in the DB will fail validation, but we
don't have a chance to find out until it is retrieved and parsed by
pydantic.

This logic was previously not wrapped in any exception handling.

Just after retrieving a session, we retrieve the specific invocation to
execute from the session. It's possible that this could also have some
sort of error, though it should be impossible for it to be a pydantic
validation error (that would have been caught during session
validation). There was also no exception handling here.

When either of these processes fail, the processor gets soft-locked
because the processor's cleanup logic is never run. (I didn't dig deeper
into exactly what cleanup is not happening, because the fix is to just
handle the exceptions.)

This PR adds exception handling to both the session retrieval and node
retrieval and events for each: `session_retrieval_error` and
`invocation_retrieval_error`.

These events are caught and displayed in the UI as toasts, along with
the type of the python exception (e.g. `Validation Error`). The events
are also logged to the browser console.
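
The shape of the fix, roughly (a sketch of the session-retrieval half; the
event name follows the PR, but the emitter method and the surrounding
processor loop are simplified assumptions):

```py
import traceback

# Inside the processor's queue loop:
try:
    graph_execution_state = self.__invoker.services.graph_execution_manager.get(
        queue_item.graph_execution_state_id
    )
except Exception as e:
    self.__invoker.services.logger.error("Exception while retrieving session:\n%s" % e)
    self.__invoker.services.events.emit_session_retrieval_error(
        graph_execution_state_id=queue_item.graph_execution_state_id,
        error_type=e.__class__.__name__,
        error=traceback.format_exc(),
    )
    continue  # skip this item instead of soft-locking the processor
```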


## Related Tickets & Documents


Closes #3860, #3412

## QA Instructions, Screenshots, Recordings


Create a valid graph that will become invalid during execution. Here's
an example:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/50aa824c-fb0c-4bd9-82f4-38a4c89436f9)

This is valid before execution, but the `width` field of the `Noise`
node will end up with an invalid value (`0`). Previously, this would
soft-lock the app and you'd have to restart it.

Now, with this graph, you will get an error toast, and the app will not
get locked up.

## Added/updated tests?

- [x] Yes (ish)
- [ ] No

@Kyle0654  @brandonrising 
It seems because the processor runs in its own thread, `pytest` cannot
catch exceptions raised in the processor.

I added a test that does work, insofar as it does recreate the issue.
But, because the exception occurs in a separate thread, the test doesn't
see it. The result is that the test passes even without the fix.

So when running the test, we see the exception:
```py
Exception in thread invoker_processor:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/processor.py", line 50, in __process
    self.__invoker.services.graph_execution_manager.get(
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/sqlite.py", line 79, in get
    return self._parse_item(result[0])
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/sqlite.py", line 52, in _parse_item
    return parse_raw_as(item_type, item)
  File "pydantic/tools.py", line 82, in pydantic.tools.parse_raw_as
  File "pydantic/tools.py", line 38, in pydantic.tools.parse_obj_as
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
```

But `pytest` doesn't actually see it as an exception. Not sure how to
fix this, it's a bit beyond me.

## [optional] Are there any post deployment tasks we need to perform?

nope don't think so
2023-07-24 20:17:39 +12:00
61fa960a18 feat(ui): make generation mode calculation more granular 2023-07-24 18:16:15 +10:00
1969afd038 Merge branch 'main' into feat/fix-soft-locks 2023-07-24 20:12:10 +12:00
2b65e40896 Fix incorrect use of a singleton list (#3914)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
      
## Description

`search_for_models` is explicitly typed as taking a singular `Path` but
was given a list because some later function in the stack expects a
list. Fixed that to be compatible with the paths. This is the only use
of that function.

The `list()` call is unrelated but removes a type warning since it's
supposed to return a list, not a set. I can revert it if requested.

This was found through pylance type errors. Go types!
2023-07-24 20:08:21 +12:00
d6bf6513ef Merge branch 'main' into fix-types-2 2023-07-24 20:01:48 +12:00
14659277e7 Add missing import (#3917)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

This import is missing and used later in the file.
2023-07-24 20:01:12 +12:00
cbb90cbdbb Download all model types. 2023-07-24 10:59:59 +03:00
9c59083406 Merge branch 'main' into fix-types-1 2023-07-24 19:52:46 +12:00
86b62cfccc fix: Generate random seed using the generator instead of RandomState (#3940) 2023-07-24 19:52:04 +12:00
e766ddbcf4 fix: Generate random seed using the generator instead of RandomState 2023-07-24 19:38:21 +12:00
374b4a1b12 Merge branch 'main' into pr/3917 2023-07-24 18:58:34 +12:00
0cf7a10c5c fix: Other lora missing type 2023-07-24 18:58:24 +12:00
1c44a0feba feat: increase seed from int32 to uint32 (#3933)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No: n/a


## Description

At some point I typo'd this and set the max seed to signed int32 max. It
should be *un*signed int32 max.

This restores the seed range to what it was in v2.3.

Also fixed a bug in the Noise node which resulted in the max valid seed
being one less than intended.

## Related Tickets & Documents


- Related Issues: #2843 is against v2.3 and increases the range of valid
seeds substantially. Maybe we can explore this in the future, but as of
v3.0 we use numpy for an RNG in a few places, and it maxes out at the
`uint32` max. I will close that PR, as this one supersedes it.
- Closes #3866
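
The `uint32` ceiling comes straight from numpy; for example:

```py
import numpy as np

np.random.RandomState(2**32 - 1)  # ok: 4294967295 is the largest accepted seed

try:
    np.random.RandomState(2**32)  # one past the max
except ValueError as e:
    print(e)  # numpy rejects seeds outside 0 .. 2**32 - 1
```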

## QA Instructions, Screenshots, Recordings


You should be able to use seeds up to and including `4294967295`.

## Added/updated tests?

- [ ] Yes
- [x] No : don't think we have any relevant tests

## [optional] Are there any post deployment tasks we need to perform?

nope!
2023-07-24 18:55:35 +12:00
66cdeba8a1 fix(nodes): fix seed modulus operation
This was incorrect and resulted in the max seed being one less than intended.
2023-07-24 16:44:32 +10:00
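
The off-by-one in question is the classic modulus mistake; a sketch with a
hypothetical constant name:

```py
SEED_MAX = 2**32 - 1  # hypothetical name for the app's max seed

def clamp_seed_buggy(value: int) -> int:
    # value % SEED_MAX lands in 0 .. SEED_MAX - 1, so the max seed
    # (4294967295) could never be produced.
    return value % SEED_MAX

def clamp_seed_fixed(value: int) -> int:
    # Modulo by SEED_MAX + 1 covers the full range 0 .. SEED_MAX.
    return value % (SEED_MAX + 1)

assert clamp_seed_fixed(SEED_MAX) == SEED_MAX
assert clamp_seed_buggy(SEED_MAX) == 0  # the bug
```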
d5a75eb833 feat: increase seed from int32 to uint32
2023-07-24 16:34:50 +10:00
8eab96c441 update documentation 2023-07-23 23:41:44 -04:00
4754a94102 update linear graphs to perform safety checking and watermarking 2023-07-23 23:32:08 -04:00
5c6f417471 add safetychecker and watermark nodes 2023-07-23 16:24:34 -04:00
0beec08d38 Add missing import. 2023-07-23 16:40:05 +02:00
02618a701d fix: Fix app crashing when you upload an incorrect JSON to node editor (#3911)
## What type of PR is this? (check all applicable)

- [x] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [x] Yes, we feel very passionate about this.     

## Description

Uploading an incorrect JSON file to the Node Editor would crash the app.

While this is a much larger problem that we will tackle while refining
the Node Editor, this is a fix that should address 99% of the cases out
there.

When saving an InvokeAI node graph, there are three primary keys.

1. `nodes` - which has all the node related data.
2. `edges` - which has all the edges related data
3. `viewport` - which has all the viewport related data.

So when we load back the JSON, we now check that all three of these keys
exist in the retrieved JSON object. While `viewport` itself is not a
mandatory key for repopulating the graph, checking for it acts as an
additional signal that the graph was saved from InvokeAI.
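
A sketch of that three-key check (the real validation lives in the TypeScript
frontend; the function name here is illustrative):

```py
import json

REQUIRED_KEYS = {"nodes", "edges", "viewport"}

def looks_like_invokeai_graph(raw: str) -> bool:
    """Reject uploads that are not JSON objects or lack the keys a saved graph has."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and REQUIRED_KEYS.issubset(data.keys())
```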

As a result ...

- If you upload an invalid JSON file, the app now warns you that the
JSON is invalid.
- If you upload a JSON of a graph editor that is not InvokeAI, it simply
warns you that you are uploading a non InvokeAI graph.

So effectively, you should not be able to load any graph that is not
generated by ReactFlow.

Here are the edge cases:

- What happens if a user maintains the above key structure but tampers
with the data inside it? I tested this. It turns out that because we
validate and build the graph from the JSON data, if you tamper with any
data needed to rebuild a node, we simply skip that node and load the rest
of the graph from the valid data.
- What happens if a user uploads a graph that was made by some other
random ReactFlow app? Same as above: because we cannot parse it in our
setup, we simply skip it and only display what we are set up to parse.

I think that just about covers 99% of the cases where this could go
wrong. If there are any other edge cases, we can add checks as needed, but
I can't think of any at the moment.

## Related Tickets & Documents

### Closes
- #3893 
- #3881

## [optional] Are there any post deployment tasks we need to perform?

Yes. Making @psychedelicious a little bit happier. :P
2023-07-24 02:15:46 +12:00
f2a6f0cf21 SDXL & SDXL-refiner models convert correctly 2023-07-23 09:31:14 -04:00
07a90c0198 Fix incorrect use of a singleton list.
This was found through pylance type errors. Go types!
2023-07-23 15:28:05 +02:00
28031ead70 feat(ui): display canvas generation mode in status text
2023-07-23 23:22:59 +10:00
4b334be7d0 feat(nodes,ui): fix soft locks on session/invocation retrieval
2023-07-23 21:41:01 +10:00
de73e4f5b9 Merge branch 'main' into nodepromptsize 2023-07-23 18:28:25 +10:00
af4579b4d4 feat: Add more sanity checks for graph loading 2023-07-23 18:12:25 +12:00
35acb5de76 Merge branch 'main' into json-crash-fix 2023-07-23 16:50:36 +12:00
225f608556 fix: Add more sanity checks & rename buttons to Graphs 2023-07-23 16:49:52 +12:00
00d3cd4aed Fix 'Del' hotkey to delete current image. 2023-07-23 14:16:32 +10:00
5e59edfaf1 SDXL checkpoint models now convert and load; needs refactor 2023-07-23 00:00:31 -04:00
fdc444ed61 fix: Fix app crashing when you upload an incorrect JSON to node editor 2023-07-23 15:24:04 +12:00
075f9b3a7a ui: pay back tech debt (#3896)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: n/a

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No n/a


## Description

Big cleanup:
- improve & simplify the app logging
- resolve all TS issues
- resolve all circular dependencies
- fix all lint/format issues

## QA Instructions, Screenshots, Recordings

`yarn lint` passes:


![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/7b763922-f00c-4b17-be23-2432da50f816)

## Added/updated tests?

- [ ] Yes
- [x] No : n/a

## [optional] Are there any post deployment tasks we need to perform?

bask in the glory of what *should* be a fully-passing frontend lint on
this PR
2023-07-23 13:57:43 +12:00
b1d7c9b306 save text_encoder_2 config, not whole model 2023-07-22 21:33:40 -04:00
5607794dbb add support for controlnet & sdxl conversion - not fully working 2023-07-22 20:12:16 -04:00
c5147d0f57 fix(ui): fix all eslint & prettier issues 2023-07-22 23:45:24 +10:00
6452d0fc28 fix(ui): fix all circular dependencies 2023-07-22 22:48:39 +10:00
5468d9a9fc fix(ui): resolve all typescript issues 2023-07-22 21:38:50 +10:00
75863e7181 feat(ui): logging cleanup
- simplify access to app logger
- spruce up and make consistent log format
- improve messaging
2023-07-22 21:12:51 +10:00
0689e36390 Merge branch 'main' into nodepromptsize 2023-07-22 07:20:28 +10:00
78750042f5 Pass in dim overrides 2023-07-21 12:16:24 -04:00
13e7614508 add text so string node uses textarea 2023-07-21 19:36:27 +10:00
4e1786d9ae Remove Resize: none 2023-07-21 13:55:40 +10:00
585520d8d2 Only apply Textarea to Prompt 2023-07-21 13:17:27 +10:00
98b2734240 Merge branch 'main' into nodepromptsize 2023-07-21 08:07:55 +10:00
7b428b5240 Make height smaller and allow width to change with node 2023-07-21 08:03:01 +10:00
ce08aa350c Allow controlnet passthrough for now 2023-07-20 14:14:04 -04:00
ba1a934297 Fix Lora typings 2023-07-20 14:02:23 -04:00
4e90376d11 Allow passing in of precision, use available providers if none provided 2023-07-20 13:15:45 -04:00
f73b45bcb5 Feat: Change Input to Textbox 2023-07-20 19:11:18 +10:00
23f4a4ea1a Fix dist 2023-07-19 18:27:51 -04:00
6aab8f16ce Fix issue from merge 2023-07-19 18:27:15 -04:00
8f61413865 Setup dist folder 2023-07-19 17:49:27 -04:00
43b6a077fb io binding seems to be massively resource intensive compared to session.run 2023-07-19 17:42:28 -04:00
e8299d0abb Comment out erroneously removed del statement, comment out opt tests 2023-07-18 23:23:34 -04:00
a28ab654ef Setup dist folder 2023-07-18 23:18:46 -04:00
8699fd7050 Fix invoke UI graphs for onnx 2023-07-18 23:16:51 -04:00
9e65470ada Setup dist 2023-07-18 23:07:31 -04:00
f4e52fafac Fix as part of merging main in 2023-07-18 23:05:33 -04:00
ee7b36cea5 Merge branch 'main' into onnx-testing 2023-07-18 22:56:41 -04:00
487455ef2e Add model_type to the model state object 2023-07-18 22:40:27 -04:00
e201ad2f51 Switch to io_binding for run, testing different session options 2023-07-18 21:54:54 -04:00
869f418b03 Setup onnx on linear text2image 2023-07-18 14:27:54 -04:00
35d5ef9118 Emit step completions 2023-07-18 12:35:07 -04:00
bcce70fca6 Testing different session opts, added timings for testing 2023-07-17 16:27:33 -04:00
932112b640 testing being super wasteful with data 2023-07-16 00:17:33 -04:00
91112167b1 Fix syntax err 2023-07-15 23:56:48 -04:00
bd7b59910d Testing onnx in new ui updates 2023-07-14 14:24:15 -04:00
524888bf3b Merge branch 'main' into feat/onnx 2023-07-13 14:23:57 -04:00
0327eae509 chore: Regen API 2023-06-23 05:21:06 +12:00
bb85608890 Merge branch 'main' into feat/onnx 2023-06-23 05:18:41 +12:00
6c7668aaca Update onnx model structure, change code according 2023-06-22 20:03:17 +03:00
7759b3f75a Small refactor 2023-06-21 04:24:25 +03:00
4d337f6abc ONNX Model/runtime first implementation 2023-06-21 02:12:21 +03:00
92c86fd0b8 Set model type to const value in openapi schema, add model format enums to model schema (as they are not referenced in the case of a Literal definition) 2023-06-20 03:44:58 +03:00
46dc751139 Update model format field to use enums 2023-06-20 03:30:09 +03:00
4cefe37723 Rename format to model_format(still named format when work with config) 2023-06-20 03:25:08 +03:00
82b73c50a0 Remove default model logic 2023-06-20 03:13:10 +03:00
7df7a95299 Merge branch 'main' into model-manager-ui-30 2023-06-19 23:26:11 +12:00
85b4b359c2 tweak: UI colors 2023-06-19 23:16:14 +12:00
cfe81b5e00 fix: Adjust the Scheduler select width
So the long names do not get cut off.
2023-06-19 23:05:32 +12:00
b0c4451324 Merge branch 'main' into model-manager-ui-30 2023-06-19 23:02:59 +12:00
d4931522d4 Merge branch 'main' into model-manager-ui-30 2023-06-19 22:53:13 +12:00
17e2a35228 fix: merge conflicts 2023-06-18 22:25:48 +12:00
91016d8b29 Merge branch 'main' into model-manager-ui-30 2023-06-18 22:23:18 +12:00
9fda21cf40 Revert "feat: Port Schedulers to Mantine"
This reverts commit e0c105f413.
2023-06-18 22:22:56 +12:00
809ec7163e fix: Remove type from Model type name 2023-06-18 19:41:30 +12:00
7c9a939b47 fix: Unserialization key issue 2023-06-18 19:38:15 +12:00
9634c96020 revert: getModels to receivedModels 2023-06-18 19:35:46 +12:00
e0c105f413 feat: Port Schedulers to Mantine 2023-06-18 19:31:53 +12:00
f0bf32c476 Merge branch 'main' into model-manager-ui-30 2023-06-18 17:37:34 +12:00
28373dbb98 cleanup: Updated model slice names to be more descriptive
Basically updated all slices to have more descriptive names, in order to make sure there's a good naming scheme available for secondary models.
2023-06-18 17:36:23 +12:00
4133d77772 wip: Move Model Selector to own file 2023-06-18 09:19:13 +12:00
61c426f502 feat: Enable 2.x Model Generation in Linear UI 2023-06-18 08:27:13 +12:00
bf0577c882 fix: 2.1 models breaking generation
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-06-18 08:26:25 +12:00
24673fd859 chore: Rebuild API - base_model and type added 2023-06-18 07:50:28 +12:00
dc669d1447 Add name, base_model, type fields to model info 2023-06-17 22:48:44 +03:00
ce4110b9f4 wip: Add 2.x Models to the Model List 2023-06-18 07:01:44 +12:00
0f3b7d2b3d chore: Rebuild API with new Model API names 2023-06-18 03:00:16 +12:00
16dc78f6c6 Generate config names for openapi 2023-06-17 17:15:36 +03:00
7a66856785 wip: Update Linear UI Txt2Img and Img2Img Graphs
Update the text-to-image and image-to-image graphs to work with the new model loader. Currently only supports 1.x models; will update this soon to make it work with all models.
2023-06-18 01:38:01 +12:00
c8dfa49d86 fix: Update missing name types to new names 2023-06-17 22:04:28 +12:00
76dd749b1e chore: Rebuild API 2023-06-17 21:29:32 +12:00
67d05d2066 chore: Update model config type names 2023-06-17 21:28:43 +12:00
724 changed files with 38335 additions and 25213 deletions


@@ -20,13 +20,13 @@ def calc_images_mean_L1(image1_path, image2_path):

 def parse_args():
     parser = argparse.ArgumentParser()
-    parser.add_argument('image1_path')
-    parser.add_argument('image2_path')
+    parser.add_argument("image1_path")
+    parser.add_argument("image2_path")
     args = parser.parse_args()
     return args


-if __name__ == '__main__':
+if __name__ == "__main__":
     args = parse_args()
     mean_L1 = calc_images_mean_L1(args.image1_path, args.image2_path)
     print(mean_L1)


@@ -1 +1,2 @@
 b3dccfaeb636599c02effc377cdd8a87d658256c
+218b6d0546b990fc449c876fb99f44b50c4daa35


@@ -1,11 +1,11 @@
 name: Close inactive issues
 on:
   schedule:
-    - cron: "00 6 * * *"
+    - cron: "00 4 * * *"

 env:
-  DAYS_BEFORE_ISSUE_STALE: 14
-  DAYS_BEFORE_ISSUE_CLOSE: 28
+  DAYS_BEFORE_ISSUE_STALE: 30
+  DAYS_BEFORE_ISSUE_CLOSE: 14

 jobs:
   close-issues:
@@ -14,7 +14,7 @@ jobs:
       issues: write
       pull-requests: write
     steps:
-      - uses: actions/stale@v5
+      - uses: actions/stale@v8
         with:
           days-before-issue-stale: ${{ env.DAYS_BEFORE_ISSUE_STALE }}
           days-before-issue-close: ${{ env.DAYS_BEFORE_ISSUE_CLOSE }}
@@ -23,5 +23,6 @@ jobs:
           close-issue-message: "Due to inactivity, this issue was automatically closed. If you are still experiencing the issue, please recreate the issue."
           days-before-pr-stale: -1
           days-before-pr-close: -1
+          exempt-issue-labels: "Active Issue"
           repo-token: ${{ secrets.GITHUB_TOKEN }}
           operations-per-run: 500


@@ -2,8 +2,6 @@ name: Lint frontend
 on:
   pull_request:
-    paths:
-      - 'invokeai/frontend/web/**'
     types:
       - 'ready_for_review'
       - 'opened'
@@ -11,8 +9,6 @@ on:
   push:
     branches:
       - 'main'
-    paths:
-      - 'invokeai/frontend/web/**'
   merge_group:
   workflow_dispatch:

.github/workflows/style-checks.yml (new file, +27 lines)

@@ -0,0 +1,27 @@
+name: style checks
+# just formatting for now
+# TODO: add isort and flake8 later
+
+on:
+  pull_request:
+  push:
+    branches: main
+
+jobs:
+  black:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+
+      - name: Setup Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: '3.10'
+
+      - name: Install dependencies with pip
+        run: |
+          pip install black
+
+      # - run: isort --check-only .
+      - run: black --check .
+      # - run: flake8


@@ -1,50 +0,0 @@
-name: Test invoke.py pip
-# This is a dummy stand-in for the actual tests
-# we don't need to run python tests on non-Python changes
-# But PRs require passing tests to be mergeable
-
-on:
-  pull_request:
-    paths:
-      - '**'
-      - '!pyproject.toml'
-      - '!invokeai/**'
-      - '!tests/**'
-      - 'invokeai/frontend/web/**'
-  merge_group:
-  workflow_dispatch:
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
-  cancel-in-progress: true
-
-jobs:
-  matrix:
-    if: github.event.pull_request.draft == false
-    strategy:
-      matrix:
-        python-version:
-          - '3.10'
-        pytorch:
-          - linux-cuda-11_7
-          - linux-rocm-5_2
-          - linux-cpu
-          - macos-default
-          - windows-cpu
-        include:
-          - pytorch: linux-cuda-11_7
-            os: ubuntu-22.04
-          - pytorch: linux-rocm-5_2
-            os: ubuntu-22.04
-          - pytorch: linux-cpu
-            os: ubuntu-22.04
-          - pytorch: macos-default
-            os: macOS-12
-          - pytorch: windows-cpu
-            os: windows-2022
-    name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
-    runs-on: ${{ matrix.os }}
-
-    steps:
-      - name: skip
-        run: echo "no build required"


@@ -3,16 +3,7 @@ on:
   push:
     branches:
      - 'main'
-    paths:
-      - 'pyproject.toml'
-      - 'invokeai/**'
-      - '!invokeai/frontend/web/**'
   pull_request:
-    paths:
-      - 'pyproject.toml'
-      - 'invokeai/**'
-      - 'tests/**'
-      - '!invokeai/frontend/web/**'
     types:
       - 'ready_for_review'
       - 'opened'
@@ -65,10 +56,23 @@ jobs:
         id: checkout-sources
         uses: actions/checkout@v3

+      - name: Check for changed python files
+        id: changed-files
+        uses: tj-actions/changed-files@v37
+        with:
+          files_yaml: |
+            python:
+              - 'pyproject.toml'
+              - 'invokeai/**'
+              - '!invokeai/frontend/web/**'
+              - 'tests/**'
+
       - name: set test prompt to main branch validation
+        if: steps.changed-files.outputs.python_any_changed == 'true'
         run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}

       - name: setup python
+        if: steps.changed-files.outputs.python_any_changed == 'true'
         uses: actions/setup-python@v4
         with:
           python-version: ${{ matrix.python-version }}
@@ -76,6 +80,7 @@
           cache-dependency-path: pyproject.toml
       - name: install invokeai
+        if: steps.changed-files.outputs.python_any_changed == 'true'
         env:
           PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
         run: >
@@ -83,6 +88,7 @@
           --editable=".[test]"
       - name: run pytest
+        if: steps.changed-files.outputs.python_any_changed == 'true'
         id: run-pytest
         run: pytest

.gitignore

@@ -38,7 +38,6 @@ develop-eggs/
 downloads/
 eggs/
 .eggs/
-lib/
 lib64/
 parts/
 sdist/

.pre-commit-config.yaml (new file, +10 lines)

@@ -0,0 +1,10 @@
+# See https://pre-commit.com/ for usage and config
+repos:
+  - repo: local
+    hooks:
+      - id: black
+        name: black
+        stages: [commit]
+        language: system
+        entry: black
+        types: [python]

LICENSE-SDXL.txt (new file, 290 lines)

Copyright (c) 2023 Stability AI
CreativeML Open RAIL++-M License dated July 26, 2023
Section I: PREAMBLE
Multimodal generative models are being widely adopted and used, and
have the potential to transform the way artists, among other
individuals, conceive and benefit from AI or ML technologies as a tool
for content creation.
Notwithstanding the current and potential benefits that these
artifacts can bring to society at large, there are also concerns about
potential misuses of them, either due to their technical limitations
or ethical considerations.
In short, this license strives for both the open and responsible
downstream use of the accompanying model. When it comes to the open
character, we took inspiration from open source permissive licenses
regarding the grant of IP rights. Referring to the downstream
responsible use, we added use-based restrictions not permitting the
use of the model in very specific scenarios, in order for the licensor
to be able to enforce the license in case potential misuses of the
Model may occur. At the same time, we strive to promote open and
responsible research on generative models for art and content
generation.
Even though downstream derivative versions of the model could be
released under different licensing terms, the latter will always have
to include - at minimum - the same use-based restrictions as the ones
in the original license (this license). We believe in the intersection
between open and responsible AI development; thus, this agreement aims
to strike a balance between both in order to enable responsible
open-science in the field of AI.
This CreativeML Open RAIL++-M License governs the use of the model
(and its derivatives) and is informed by the model card associated
with the model.
NOW THEREFORE, You and Licensor agree as follows:
Definitions
"License" means the terms and conditions for use, reproduction, and
Distribution as defined in this document.
"Data" means a collection of information and/or content extracted from
the dataset used with the Model, including to train, pretrain, or
otherwise evaluate the Model. The Data is not licensed under this
License.
"Output" means the results of operating a Model as embodied in
informational content resulting therefrom.
"Model" means any accompanying machine-learning based assemblies
(including checkpoints), consisting of learnt weights, parameters
(including optimizer states), corresponding to the model architecture
as embodied in the Complementary Material, that have been trained or
tuned, in whole or in part on the Data, using the Complementary
Material.
"Derivatives of the Model" means all modifications to the Model, works
based on the Model, or any other model which is created or initialized
by transfer of patterns of the weights, parameters, activations or
output of the Model, to the other model, in order to cause the other
model to perform similarly to the Model, including - but not limited
to - distillation methods entailing the use of intermediate data
representations or methods based on the generation of synthetic data
by the Model for training the other model.
"Complementary Material" means the accompanying source code and
scripts used to define, run, load, benchmark or evaluate the Model,
and used to prepare data for training or evaluation, if any. This
includes any accompanying documentation, tutorials, examples, etc, if
any.
"Distribution" means any transmission, reproduction, publication or
other sharing of the Model or Derivatives of the Model to a third
party, including providing the Model as a hosted service made
available by electronic or other remote means - e.g. API-based or web
access.
"Licensor" means the copyright owner or entity authorized by the
copyright owner that is granting the License, including the persons or
entities that may have rights in the Model and/or distributing the
Model.
"You" (or "Your") means an individual or Legal Entity exercising
permissions granted by this License and/or making use of the Model for
whichever purpose and in any field of use, including usage of the
Model in an end-use application - e.g. chatbot, translator, image
generator.
"Third Parties" means individuals or legal entities that are not under
common control with Licensor or You.
"Contribution" means any work of authorship, including the original
version of the Model and any modifications or additions to that Model
or Derivatives of the Model thereof, that is intentionally submitted
to Licensor for inclusion in the Model by the copyright owner or by an
individual or Legal Entity authorized to submit on behalf of the
copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent to
the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control
systems, and issue tracking systems that are managed by, or on behalf
of, the Licensor for the purpose of discussing and improving the
Model, but excluding communication that is conspicuously marked or
otherwise designated in writing by the copyright owner as "Not a
Contribution."
"Contributor" means Licensor and any individual or Legal Entity on
behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Model.
Section II: INTELLECTUAL PROPERTY RIGHTS
Both copyright and patent grants apply to the Model, Derivatives of
the Model and Complementary Material. The Model and Derivatives of the
Model are subject to additional terms as described in
Section III.
Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare, publicly display, publicly
perform, sublicense, and distribute the Complementary Material, the
Model, and Derivatives of the Model.
Grant of Patent License. Subject to the terms and conditions of this
License and where and as applicable, each Contributor hereby grants to
You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this paragraph) patent license to
make, have made, use, offer to sell, sell, import, and otherwise
transfer the Model and the Complementary Material, where such license
applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by
combination of their Contribution(s) with the Model to which such
Contribution(s) was submitted. If You institute patent litigation
against any entity (including a cross-claim or counterclaim in a
lawsuit) alleging that the Model and/or Complementary Material or a
Contribution incorporated within the Model and/or Complementary
Material constitutes direct or contributory patent infringement, then
any patent licenses granted to You under this License for the Model
and/or Work shall terminate as of the date such litigation is asserted
or filed.
Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION
Distribution and Redistribution. You may host for Third Party remote
access purposes (e.g. software-as-a-service), reproduce and distribute
copies of the Model or Derivatives of the Model thereof in any medium,
with or without modifications, provided that You meet the following
conditions: Use-based restrictions as referenced in paragraph 5 MUST
be included as an enforceable provision by You in any type of legal
agreement (e.g. a license) governing the use and/or distribution of
the Model or Derivatives of the Model, and You shall give notice to
subsequent users You Distribute to, that the Model or Derivatives of
the Model are subject to paragraph 5. This provision does not apply to
the use of Complementary Material. You must give any Third Party
recipients of the Model or Derivatives of the Model a copy of this
License; You must cause any modified files to carry prominent notices
stating that You changed the files; You must retain all copyright,
patent, trademark, and attribution notices excluding those notices
that do not pertain to any part of the Model, Derivatives of the
Model. You may add Your own copyright statement to Your modifications
and may provide additional or different license terms and conditions -
respecting paragraph 4.a. - for use, reproduction, or Distribution of
Your modifications, or for any such Derivatives of the Model as a
whole, provided Your use, reproduction, and Distribution of the Model
otherwise complies with the conditions stated in this License.
Use-based restrictions. The restrictions set forth in Attachment A are
considered Use-based restrictions. Therefore You cannot use the Model
and the Derivatives of the Model for the specified restricted
uses. You may use the Model subject to this License, including only
for lawful purposes and in accordance with the License. Use may
include creating any content with, finetuning, updating, running,
training, evaluating and/or reparametrizing the Model. You shall
require all of Your users who use the Model or a Derivative of the
Model to comply with the terms of this paragraph (paragraph 5).
The Output You Generate. Except as set forth herein, Licensor claims
no rights in the Output You generate using the Model. You are
accountable for the Output you generate and its subsequent uses. No
use of the output can contravene any provision as stated in the
License.
Section IV: OTHER PROVISIONS
Updates and Runtime Restrictions. To the maximum extent permitted by
law, Licensor reserves the right to restrict (remotely or otherwise)
usage of the Model in violation of this License.
8. Trademarks and related. Nothing in this License permits You to make
use of Licensor's trademarks, trade names, logos or to otherwise
suggest endorsement or misrepresent the relationship between the
parties; and any rights not expressly granted herein are reserved by
the Licensors.
9. Disclaimer of Warranty. Unless required by applicable law or agreed to
in writing, Licensor provides the Model and the Complementary Material
(and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Model, Derivatives of
the Model, and the Complementary Material and assume any risks
associated with Your exercise of permissions under this License.
10. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise, unless
required by applicable law (such as deliberate and grossly negligent
acts) or agreed to in writing, shall any Contributor be liable to You
for damages, including any direct, indirect, special, incidental, or
consequential damages of any character arising as a result of this
License or out of the use or inability to use the Model and the
Complementary Material (including but not limited to damages for loss
of goodwill, work stoppage, computer failure or malfunction, or any
and all other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
11. Accepting Warranty or Additional Liability. While redistributing the
Model, Derivatives of the Model and the Complementary Material
thereof, You may choose to offer, and charge a fee for, acceptance of
support, warranty, indemnity, or other liability obligations and/or
rights consistent with this License. However, in accepting such
obligations, You may act only on Your own behalf and on Your sole
responsibility, not on behalf of any other Contributor, and only if
You agree to indemnify, defend, and hold each Contributor harmless for
any liability incurred by, or claims asserted against, such
Contributor by reason of your accepting any such warranty or
additional liability.
12. If any provision of this License is held to be invalid, illegal or
unenforceable, the remaining provisions shall be unaffected thereby
and remain valid as if such provision had not been set forth herein.
END OF TERMS AND CONDITIONS
Attachment A
Use Restrictions
You agree not to use the Model or Derivatives of the Model:
* In any way that violates any applicable national, federal, state,
local or international law or regulation;
* For the purpose of exploiting, harming or attempting to exploit or
harm minors in any way;
* To generate or disseminate verifiably false information and/or
content with the purpose of harming others;
* To generate or disseminate personal identifiable information that
can be used to harm an individual;
* To defame, disparage or otherwise harass others;
* For fully automated decision making that adversely impacts an
individual's legal rights or otherwise creates or modifies a
binding, enforceable obligation;
* For any use intended to or which has the effect of discriminating
against or harming individuals or groups based on online or offline
social behavior or known or predicted personal or personality
characteristics;
* To exploit any of the vulnerabilities of a specific group of persons
based on their age, social, physical or mental characteristics, in
order to materially distort the behavior of a person pertaining to
that group in a manner that causes or is likely to cause that person
or another person physical or psychological harm;
* For any use intended to or which has the effect of discriminating
against individuals or groups based on legally protected
characteristics or categories;
* To provide medical advice and medical results interpretation;
* To generate or disseminate information for the purpose to be used
for administration of justice, law enforcement, immigration or
asylum processes, such as predicting an individual will commit
fraud/crime commitment (e.g. by text profiling, drawing causal
relationships between assertions made in documents, indiscriminate
and arbitrarily-targeted use).

View File

@@ -123,7 +123,7 @@ and go to http://localhost:9090.
### Command-Line Installation (for developers and users familiar with Terminals)
You must have Python 3.9 or 3.10 installed on your machine. Earlier or
You must have Python 3.9 through 3.11 installed on your machine. Earlier or
later versions are not supported.
Node.js also needs to be installed along with yarn (can be installed with
the command `npm install -g yarn` if needed)
@@ -161,7 +161,7 @@ the command `npm install -g yarn` if needed)
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
```
_For Linux with an AMD GPU:_
@@ -184,8 +184,9 @@ the command `npm install -g yarn` if needed)
6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
```terminal
invokeai-configure
invokeai-configure --root .
```
Don't miss the dot at the end!
7. Launch the web server (do it every time you run InvokeAI):
@@ -193,15 +194,9 @@ the command `npm install -g yarn` if needed)
invokeai-web
```
8. Build Node.js assets
8. Point your browser to http://localhost:9090 to bring up the web interface.
```terminal
cd invokeai/frontend/web/
yarn vite build
```
9. Point your browser to http://localhost:9090 to bring up the web interface.
10. Type `banana sushi` in the box on the top left and click `Invoke`.
9. Type `banana sushi` in the box on the top left and click `Invoke`.
Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
@@ -311,13 +306,30 @@ InvokeAI. The second will prepare the 2.3 directory for use with 3.0.
You may now launch the WebUI in the usual way, by selecting option [1]
from the launcher script
#### Migration Caveats
#### Migrating Images
The migration script will migrate your invokeai settings and models,
including textual inversion models, LoRAs and merges that you may have
installed previously. However it does **not** migrate the generated
images stored in your 2.3-format outputs directory. You will need to
manually import selected images into the 3.0 gallery via drag-and-drop.
images stored in your 2.3-format outputs directory. To do this, you
need to run an additional step:
1. From a working InvokeAI 3.0 root directory, start the launcher and
enter menu option [8] to open the "developer's console".
2. At the developer's console command line, type the command:
```bash
invokeai-import-images
```
3. This will lead you through the process of confirming the desired
source and destination for the imported images. The images will
appear in the gallery board of your choice, and contain the
original prompt, model name, and other parameters used to generate
the image.
(Many kudos to **techjedi** for contributing this script.)
## Hardware Requirements

View File

@@ -29,8 +29,8 @@ configure() {
echo "To reconfigure InvokeAI, delete the above file."
echo "======================================================================"
else
mkdir -p ${INVOKEAI_ROOT}
chown --recursive ${USER} ${INVOKEAI_ROOT}
mkdir -p "${INVOKEAI_ROOT}"
chown --recursive ${USER} "${INVOKEAI_ROOT}"
gosu ${USER} invokeai-configure --yes --default_only
fi
}
@@ -50,16 +50,16 @@ fi
if [[ -v "PUBLIC_KEY" ]] && [[ ! -d "${HOME}/.ssh" ]]; then
apt-get update
apt-get install -y openssh-server
pushd $HOME
pushd "$HOME"
mkdir -p .ssh
echo ${PUBLIC_KEY} > .ssh/authorized_keys
echo "${PUBLIC_KEY}" > .ssh/authorized_keys
chmod -R 700 .ssh
popd
service ssh start
fi
cd ${INVOKEAI_ROOT}
cd "${INVOKEAI_ROOT}"
# Run the CMD as the Container User (not root).
exec gosu ${USER} "$@"

(Binary image files changed; previews not shown: one image modified, 310 KiB → 297 KiB, and thirteen images added, ranging from 131 KiB to 948 KiB.)

View File

@@ -16,7 +16,7 @@ If you don't feel ready to make a code contribution yet, no problem! You can als
There are two paths to making a development contribution:
1. Choosing an open issue to address. Open issues can be found in the [Issues](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen) section of the InvokeAI repository. These are tagged by the issue type (bug, enhancement, etc.) along with the “good first issues” tag denoting if they are suitable for first time contributors.
1. Additional items can be found on our roadmap <******************************link to roadmap>******************************. The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item you'd like to help with, reach out to the contributor assigned to the item to see how you can help.
1. Additional items can be found on our [roadmap](https://github.com/orgs/invoke-ai/projects/7). The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item you'd like to help with, reach out to the contributor assigned to the item to see how you can help.
2. Opening a new issue or feature to add. **Please make sure you have searched through existing issues before creating new ones.**
*Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors' time and effort and want to ensure that no one's time is being misspent.*

View File

@@ -65,7 +65,6 @@ InvokeAI:
esrgan: true
internet_available: true
log_tokenization: false
nsfw_checker: false
patchmatch: true
restore: true
...
@@ -136,19 +135,16 @@ command-line options by giving the `--help` argument:
```
(.venv) > invokeai-web --help
usage: InvokeAI [-h] [--host HOST] [--port PORT] [--allow_origins [ALLOW_ORIGINS ...]] [--allow_credentials | --no-allow_credentials]
[--allow_methods [ALLOW_METHODS ...]] [--allow_headers [ALLOW_HEADERS ...]] [--esrgan | --no-esrgan]
[--internet_available | --no-internet_available] [--log_tokenization | --no-log_tokenization]
[--nsfw_checker | --no-nsfw_checker] [--patchmatch | --no-patchmatch] [--restore | --no-restore]
[--always_use_cpu | --no-always_use_cpu] [--free_gpu_mem | --no-free_gpu_mem] [--max_cache_size MAX_CACHE_SIZE]
[--max_vram_cache_size MAX_VRAM_CACHE_SIZE] [--precision {auto,float16,float32,autocast}]
[--sequential_guidance | --no-sequential_guidance] [--xformers_enabled | --no-xformers_enabled]
[--tiled_decode | --no-tiled_decode] [--root ROOT] [--autoimport_dir AUTOIMPORT_DIR] [--lora_dir LORA_DIR]
[--embedding_dir EMBEDDING_DIR] [--controlnet_dir CONTROLNET_DIR] [--conf_path CONF_PATH] [--models_dir MODELS_DIR]
[--legacy_conf_dir LEGACY_CONF_DIR] [--db_dir DB_DIR] [--outdir OUTDIR] [--from_file FROM_FILE]
[--use_memory_db | --no-use_memory_db] [--model MODEL] [--log_handlers [LOG_HANDLERS ...]]
[--log_format {plain,color,syslog,legacy}] [--log_level {debug,info,warning,error,critical}]
...
usage: InvokeAI [-h] [--host HOST] [--port PORT] [--allow_origins [ALLOW_ORIGINS ...]] [--allow_credentials | --no-allow_credentials] [--allow_methods [ALLOW_METHODS ...]]
[--allow_headers [ALLOW_HEADERS ...]] [--esrgan | --no-esrgan] [--internet_available | --no-internet_available] [--log_tokenization | --no-log_tokenization]
[--patchmatch | --no-patchmatch] [--restore | --no-restore]
[--always_use_cpu | --no-always_use_cpu] [--free_gpu_mem | --no-free_gpu_mem] [--max_loaded_models MAX_LOADED_MODELS] [--max_cache_size MAX_CACHE_SIZE]
[--max_vram_cache_size MAX_VRAM_CACHE_SIZE] [--gpu_mem_reserved GPU_MEM_RESERVED] [--precision {auto,float16,float32,autocast}]
[--sequential_guidance | --no-sequential_guidance] [--xformers_enabled | --no-xformers_enabled] [--tiled_decode | --no-tiled_decode] [--root ROOT]
[--autoimport_dir AUTOIMPORT_DIR] [--lora_dir LORA_DIR] [--embedding_dir EMBEDDING_DIR] [--controlnet_dir CONTROLNET_DIR] [--conf_path CONF_PATH]
[--models_dir MODELS_DIR] [--legacy_conf_dir LEGACY_CONF_DIR] [--db_dir DB_DIR] [--outdir OUTDIR] [--from_file FROM_FILE]
[--use_memory_db | --no-use_memory_db] [--model MODEL] [--log_handlers [LOG_HANDLERS ...]] [--log_format {plain,color,syslog,legacy}]
[--log_level {debug,info,warning,error,critical}] [--version | --no-version]
```
## The Configuration Settings
@@ -178,7 +174,6 @@ These configuration settings allow you to enable and disable various InvokeAI fe
| `esrgan` | `true` | Activate the ESRGAN upscaling options|
| `internet_available` | `true` | When a resource is not available locally, try to fetch it via the internet |
| `log_tokenization` | `false` | Before each text2image generation, print a color-coded representation of the prompt to the console; this can help understand why a prompt is not working as expected |
| `nsfw_checker` | `true` | Activate the NSFW checker to blur out risque images |
| `patchmatch` | `true` | Activate the "patchmatch" algorithm for improved inpainting |
| `restore` | `true` | Activate the facial restoration features (DEPRECATED; restoration features will be removed in 3.0.0) |

View File

@@ -61,11 +61,13 @@ A noise scheduler (eg. DPM++ 2M Karras) schedules the subtraction of noise from
| ImageInverseLerp | Inverse linear interpolation of all pixels of an image |
| ImageLerp | Linear interpolation of all pixels of an image |
| ImageMultiply | Multiplies two images together using `PIL.ImageChops.Multiply()` |
| ImageNSFWBlurInvocation | Detects and blurs images that may contain sexually explicit content |
| ImagePaste | Pastes an image into another image |
| ImageProcessor | Base class for invocations that reprocess images for ControlNet |
| ImageResize | Resizes an image to specific dimensions |
| ImageScale | Scales an image by a factor |
| ImageToLatents | Encodes an image into latents |
| ImageWatermarkInvocation | Adds an invisible watermark to images |
| InfillColor | Infills transparent areas of an image with a solid color |
| InfillPatchMatch | Infills transparent areas of an image using the PatchMatch algorithm |
| InfillTile | Infills transparent areas of an image with tiles of the image |
@@ -116,49 +118,49 @@ There are several node grouping concepts that can be examined with a narrow focu
As described, an initial noise tensor is necessary for the latent diffusion process. As a result, all non-image *ToLatents nodes require a noise node input.
<img width="654" alt="groupsnoise" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/2e8d297e-ad55-4d27-bc93-c119dad2a2c5">
![groupsnoise](../assets/nodes/groupsnoise.png)
### Conditioning
As described, conditioning is necessary for the latent diffusion process, whether empty or not. As a result, all non-image *ToLatents nodes require positive and negative conditioning inputs. Conditioning is reliant on a CLIP tokenizer provided by the Model Loader node.
<img width="1024" alt="groupsconditioning" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/f8f7ad8a-8d9c-418e-b5ad-1437b774b27e">
![groupsconditioning](../assets/nodes/groupsconditioning.png)
### Image Space & VAE
The ImageToLatents node doesn't require a noise node input, but requires a VAE input to convert the image from image space into latent space. In reverse, the LatentsToImage node requires a VAE input to convert from latent space back into image space.
<img width="637" alt="groupsimgvae" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/dd99969c-e0a8-4f78-9b17-3ffe179cef9a">
![groupsimgvae](../assets/nodes/groupsimgvae.png)
### Defined & Random Seeds
It is common to want to use both the same seed (for continuity) and random seeds (for variance). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.
<img width="922" alt="groupsrandseed" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/af55bc20-60f6-438e-aba5-3ec871443710">
![groupsrandseed](../assets/nodes/groupsrandseed.png)
### Control
Control means to guide the diffusion process to adhere to a defined input or structure. Control can be provided as input to non-image *ToLatents nodes from ControlNet nodes. ControlNet nodes usually require an image processor which converts an input image for use with ControlNet.
<img width="805" alt="groupscontrol" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/cc9c5de7-23a7-46c8-bbad-1f3609d999a6">
![groupscontrol](../assets/nodes/groupscontrol.png)
### LoRA
The Lora Loader node lets you load a LoRA (say that ten times fast) and pass it as output to both the Prompt (Compel) and non-image *ToLatents nodes. A model's CLIP tokenizer is passed through the LoRA into Prompt (Compel), where it affects conditioning. A model's U-Net is also passed through the LoRA into a non-image *ToLatents node, where it affects noise prediction.
<img width="993" alt="groupslora" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/630962b0-d914-4505-b3ea-ccae9b0269da">
![groupslora](../assets/nodes/groupslora.png)
### Scaling
Use the ImageScale, ScaleLatents, and Upscale nodes to upscale images and/or latent images. The chosen method differs across contexts. However, be aware that latents are already noisy and compressed at their original resolution; scaling an image could produce more detailed results.
<img width="644" alt="groupsallscale" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/99314f05-dd9f-4b6d-b378-31de55346a13">
![groupsallscale](../assets/nodes/groupsallscale.png)
### Iteration + Multiple Images as Input
Iteration is a common concept in any processing, and means to repeat a process with given input. In nodes, you're able to use the Iterate node to iterate through collections usually gathered by the Collect node. The Iterate node has many potential uses, from processing a collection of images one after another, to varying seeds across multiple image generations and more. This screenshot demonstrates how to collect several images and pass them out one at a time.
<img width="788" alt="groupsiterate" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/4af5ca27-82c9-4018-8c5b-024d3ee0a121">
![groupsiterate](../assets/nodes/groupsiterate.png)
### Multiple Image Generation + Random Seeds
@@ -166,7 +168,7 @@ Multiple image generation in the node editor is done using the RandomRange node.
To control seeds across generations takes some care. The first row in the screenshot will generate multiple images with different seeds, but using the same RandomRange parameters across invocations will result in the same group of random seeds being used across the images, producing repeatable results. In the second row, adding the RandomInt node as input to RandomRange's 'Seed' edge point will ensure that seeds are varied across all images across invocations, producing varied results.
<img width="1027" alt="groupsmultigenseeding" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/518d1b2b-fed1-416b-a052-ab06552521b3">
![groupsmultigenseeding](../assets/nodes/groupsmultigenseeding.png)
## Examples
@@ -174,7 +176,7 @@ With our knowledge of node grouping and the diffusion process, let's break dow
### Basic text-to-image Node Graph
<img width="875" alt="nodest2i" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/17c67720-c376-4db8-94f0-5e00381a61ee">
![nodest2i](../assets/nodes/nodest2i.png)
- Model Loader: A necessity for generating images (as we've read above). We choose our model from the dropdown. It outputs a U-Net, CLIP tokenizer, and VAE.
- Prompt (Compel): Another necessity. Two prompt nodes are created. One will output positive conditioning (what you want, dog), one will output negative (what you don't want, cat). They both input the CLIP tokenizer that the Model Loader node outputs.
@@ -184,7 +186,7 @@ With our knowledge of node grouping and the diffusion process, let's break dow
### Basic image-to-image Node Graph
<img width="998" alt="nodesi2i" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/3f2c95d5-cee7-4415-9b79-b46ee60a92fe">
![nodesi2i](../assets/nodes/nodesi2i.png)
- Model Loader: Choose a model from the dropdown.
- Prompt (Compel): Two prompt nodes. One positive (dog), one negative (dog). Same CLIP inputs from the Model Loader node as before.
@@ -195,7 +197,7 @@ With our knowledge of node grouping and the diffusion process, let's break dow
### Basic ControlNet Node Graph
<img width="703" alt="nodescontrol" src="https://github.com/ymgenesis/InvokeAI/assets/25252829/b02ded86-ceb4-44a2-9910-e19ad184d471">
![nodescontrol](../assets/nodes/nodescontrol.png)
- Model Loader
- Prompt (Compel)

View File

@@ -16,21 +16,24 @@ Output Example:
---
## **Seamless Tiling**
## **Invisible Watermark**
The seamless tiling mode causes generated images to seamlessly tile
with itself creating repetitive wallpaper-like patterns. To use it,
activate the Seamless Tiling option in the Web GUI and then select
whether to tile on the X (horizontal) and/or Y (vertical) axes. Tiling
will then be active for the next set of generations.
In keeping with the principles for responsible AI generation, and to
help AI researchers avoid synthetic images contaminating their
training sets, InvokeAI adds an invisible watermark to each of the
final images it generates. The watermark consists of the text
"InvokeAI" and can be viewed using the
[invisible-watermarks](https://github.com/ShieldMnt/invisible-watermark)
tool.
A nice prompt to test seamless tiling with is:
Watermarking is controlled using the `invisible_watermark` setting in
`invokeai.yaml`. To turn it off, add the following line under the `Features`
category.
```
pond garden with lotus by claude monet"
invisible_watermark: false
```
---
## **Weighted Prompts**
@@ -39,34 +42,10 @@ priority to them, by adding `:<percent>` to the end of the section you wish to u
example consider this prompt:
```bash
tabby cat:0.25 white duck:0.75 hybrid
(tabby cat):0.25 (white duck):0.75 hybrid
```
This will tell the sampler to invest 25% of its effort on the tabby cat aspect of the image and 75%
on the white duck aspect (surprisingly, this example actually works). The prompt weights can use any
combination of integers and floating point numbers, and they do not need to add up to 1.
## **Thresholding and Perlin Noise Initialization Options**
Under the Noise section of the Web UI, you will find two options named
Perlin Noise and Noise Threshold. [Perlin
noise](https://en.wikipedia.org/wiki/Perlin_noise) is a type of
structured noise used to simulate terrain and other natural
textures. The slider controls the percentage of perlin noise that will
be mixed into the image at the beginning of generation. Adding a little
perlin noise to a generation will alter the image substantially.
The noise threshold limits the range of the latent values during
sampling and helps combat the oversharpening seen with higher CFG
scale values.
For better intuition into what these options do in practice:
![here is a graphic demonstrating them both](../assets/truncation_comparison.jpg)
In generating this graphic, perlin noise at initialization was
programmatically varied going across on the diagram by values 0.0,
0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0; and the threshold was varied
going down from 0, 1, 2, 3, 4, 5, 10, 20, 100. The other options are
fixed using the prompt "a portrait of a beautiful young lady" a CFG of
20, 100 steps, and a seed of 1950357039.

View File

@@ -4,35 +4,13 @@ title: Postprocessing
# :material-image-edit: Postprocessing
## Intro
This extension provides the ability to restore faces and upscale images.
This section details the ability to improve faces and upscale images.
## Face Fixing
The default face restoration module is GFPGAN. The default upscale is
Real-ESRGAN. For an alternative face restoration module, see
[CodeFormer Support](#codeformer-support) below.
As of InvokeAI 3.0, the easiest way to improve faces created during image generation is through the Inpainting functionality of the Unified Canvas. Simply add the image containing the faces that you would like to improve to the canvas, mask the face to be improved and run the invocation. For best results, make sure to use an inpainting specific model; these are usually identified by the "-inpainting" term in the model name.
As of version 1.14, environment.yaml will install the Real-ESRGAN package into
the standard install location for python packages, and will put GFPGAN into a
subdirectory of "src" in the InvokeAI directory. Upscaling with Real-ESRGAN
should "just work" without further intervention. Simply indicate the desired scale on
the popup in the Web GUI.
**GFPGAN** requires a series of downloadable model files to work. These are
loaded when you run `invokeai-configure`. If GFPGAN is failing with an
error, please run the following from the InvokeAI directory:
```bash
invokeai-configure
```
If you do not run this script in advance, the GFPGAN module will attempt to
download the model files the first time you try to perform facial
reconstruction.
### Upscaling
## Upscaling
Open the upscaling dialog by clicking on the "expand" icon located
above the image display area in the Web UI:
@@ -41,82 +19,23 @@ above the image display area in the Web UI:
![upscale1](../assets/features/upscale-dialog.png)
</figure>
There are three different upscaling parameters that you can
adjust. The first is the scale itself, either 2x or 4x.
The default upscaling option is Real-ESRGAN x2 Plus, which will scale your image by a factor of two. This means upscaling a 512x512 image will result in a new 1024x1024 image.
The second is the "Denoising Strength." Higher values will smooth out
the image and remove digital chatter, but may lose fine detail at
higher values.
Other options are the x4 upscalers, which will scale your image by a factor of 4.
Third, "Upscale Strength" controls the intensity of the scaling; you can
set it between `0` and `1.0`. AI upscalers generally tend to smooth out
texture details. If you wish to retain some of those for natural-looking
results, we recommend using values between `0.5` and `0.8`.
[This figure](../assets/features/upscaling-montage.png) illustrates
the effects of denoising and strength. The original image was 512x512,
4x scaled to 2048x2048. The "original" version on the upper left was
scaled using simple pixel averaging. The remainder use the ESRGAN
upscaling algorithm at different levels of denoising and strength.
<figure markdown>
![upscaling](../assets/features/upscaling-montage.png){ width=720 }
</figure>
Both denoising and strength default to 0.75.
### Face Restoration
InvokeAI offers two alternative face restoration algorithms,
[GFPGAN](https://github.com/TencentARC/GFPGAN) and
[CodeFormer](https://huggingface.co/spaces/sczhou/CodeFormer). These
algorithms improve the appearance of faces, particularly eyes and
mouths. Issues with faces are less common with the latest set of
Stable Diffusion models than with the original 1.4 release, but the
restoration algorithms can still make a noticeable improvement in
certain cases. You can also apply restoration to old photographs you
upload.
To access face restoration, click the "smiley face" icon in the
toolbar above the InvokeAI image panel. You will be presented with a
dialog that offers a choice between the two algorithms and sliders that
allow you to adjust their parameters. Alternatively, you may open the
left-hand accordion panel labeled "Face Restoration" and have the
restoration algorithm of your choice applied to generated images
automatically.
Like upscaling, there are a number of parameters that adjust the face
restoration output. GFPGAN has a single parameter, `strength`, which
controls how much the algorithm is allowed to adjust the
image. CodeFormer has two parameters, `strength`, and `fidelity`,
which together control the quality of the output image as described in
the [CodeFormer project
page](https://shangchenzhou.com/projects/CodeFormer/). Default values
are 0.75 for both parameters, which achieves a reasonable balance
between changing the image too much and not enough.
[This figure](../assets/features/restoration-montage.png) illustrates
the effects of adjusting GFPGAN and CodeFormer parameters.
<figure markdown>
![upscaling](../assets/features/restoration-montage.png){ width=720 }
</figure>
!!! note
GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid crashes and memory overloads
Real-ESRGAN is memory intensive. In order to avoid crashes and memory overloads
during the Stable Diffusion process, these effects are applied after Stable Diffusion has completed
its work.
In single image generations, you will see the output right away but when you are using multiple
iterations, the images will first be generated and then upscaled and face restored after that
iterations, the images will first be generated and then upscaled after that
process is complete. While the image generation is taking place, you will still be able to preview
the base images.
## How to disable
If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the invoke.py command line with the `--no_restore` and
`--no_esrgan` options, respectively.
If, for some reason, you do not wish to load the ESRGAN libraries,
you can disable them on the invoke.py command line with the `--no_esrgan` option.

View File

@@ -1,12 +1,40 @@
---
title: The NSFW Checker
title: Watermarking, NSFW Image Checking
---
# :material-image-off: NSFW Checker
# :material-image-off: Invisible Watermark and the NSFW Checker
## Watermarking
InvokeAI does not apply watermarking to images by default. However,
many computer scientists working in the field of generative AI worry
that a flood of computer-generated imagery will contaminate the image
data sets needed to train future generations of generative models.
InvokeAI offers an optional watermarking mode that writes a small bit
of text, **InvokeAI**, into each image that it generates using an
"invisible" watermarking library that spreads the information
throughout the image in a way that is not perceptible to the human
eye. If you are planning to share your generated images on
internet-accessible services, we encourage you to activate the
invisible watermark mode in order to help preserve the digital image
environment.
The downside of watermarking is that it increases the size of the
image moderately, and has been reported by some individuals to degrade
image quality. Your mileage may vary.
To read the watermark in an image, activate the InvokeAI virtual
environment (called the "developer's console" in the launcher) and run
the command:
```
invisible-watermark -a decode -t bytes -m dwtDct -l 64 /path/to/image.png
```
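For scripted checks, the same decode can be run from Python via the `imwatermark` module that the `invisible-watermark` package provides. This is a minimal sketch, not InvokeAI's own code, and the image path is a placeholder:

```python
# Minimal sketch: decode a 64-bit "InvokeAI" payload with invisible-watermark.
import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread("/path/to/image.png")     # placeholder path; pixels in BGR order
decoder = WatermarkDecoder("bytes", 64)    # expect a 64-bit payload, as in the CLI
payload = decoder.decode(bgr, "dwtDct")    # same dwtDct method the CLI uses
print(payload.decode("utf-8"))             # prints "InvokeAI" if watermarked
```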
## The NSFW ("Safety") Checker
The Stable Diffusion image generation models will produce sexual
Stable Diffusion 1.5-based image generation models will produce sexual
imagery if deliberately prompted, and will occasionally produce such
images when this is not intended. Such images are colloquially known
as "Not Safe for Work" (NSFW). This behavior is due to the nature of
@@ -18,35 +46,17 @@ jurisdictions it may be illegal to publicly distribute such imagery,
including mounting a publicly-available server that provides
unfiltered images to the public. Furthermore, the [Stable Diffusion
weights
License](https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE-ModelWeights.txt)
forbids the model from being used to "exploit any of the
License](https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE-SD1+SD2.txt),
and the [Stable Diffusion XL
License](https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE-SDXL.txt)
both forbid the models from being used to "exploit any of the
vulnerabilities of a specific group of persons."
For these reasons Stable Diffusion offers a "safety checker," a
machine learning model trained to recognize potentially disturbing
imagery. When a potentially NSFW image is detected, the checker will
blur the image and paste a warning icon on top. The checker can be
turned on and off on the command line using `--nsfw_checker` and
`--no-nsfw_checker`.
At installation time, InvokeAI will ask whether the checker should be
activated by default (neither argument given on the command line). The
response is stored in the InvokeAI initialization file
(`invokeai.yaml` in the InvokeAI root directory). You can change the
default at any time by opening this file in a text editor and
changing the line `nsfw_checker:` from true to false or vice-versa:
```
...
Features:
esrgan: true
internet_available: true
log_tokenization: false
nsfw_checker: true
patchmatch: true
restore: true
```
turned on and off in the Web interface under Settings.
## Caveats
@@ -84,10 +94,3 @@ are encouraged to turn **off** intermediate image rendering when you
are using the checker. Future versions of InvokeAI will apply
additional blurring to intermediate images when the checker is active.
### Watermarking
InvokeAI does not apply any sort of watermark to images it
generates. However, it does write metadata into the PNG data area,
including the prompt used to generate the image and relevant parameter
settings. These fields can be examined using the `sd-metadata.py`
script that comes with the InvokeAI package.

View File

@@ -4,6 +4,9 @@ title: Overview
Here you can find the documentation for InvokeAI's various features.
## The [Getting Started Guide](../help/gettingStartedWithAI)
A getting started guide for those new to AI image generation.
## The Basics
### * The [Web User Interface](WEB.md)
Guide to the Web interface. Also see the [WebUI Hotkeys Reference Guide](WEBUIHOTKEYS.md)
@@ -46,7 +49,7 @@ Personalize models by adding your own style or subjects.
## Other Features
### * [The NSFW Checker](NSFW.md)
### * [The NSFW Checker](WATERMARK+NSFW.md)
Prevent InvokeAI from displaying unwanted racy images.
### * [Controlling Logging](LOGGING.md)

View File

@@ -0,0 +1,95 @@
# Getting Started with AI Image Generation
New to image generation with AI? You're in the right place!
This is a high-level walkthrough of some of the concepts and terms you'll see as you start using InvokeAI. Please note, this is not an exhaustive guide and may be out of date due to the rapidly changing nature of the space.
## Using InvokeAI
### **Prompt Crafting**
- Prompts are the basis of using InvokeAI, providing the model with directions on what to generate. As a general rule of thumb, the more detailed your prompt is, the better your result will be.
*To get started, here's an easy template to use for structuring your prompts:*
- Subject, Style, Quality, Aesthetic
- **Subject:** What your image will be about. E.g. “a futuristic city with trains”, “penguins floating on icebergs”, “friends sharing beers”
- **Style:** The style or medium in which your image will be in. E.g. “photograph”, “pencil sketch”, “oil paints”, or “pop art”, “cubism”, “abstract”
- **Quality:** A particular aspect or trait that you would like to see emphasized in your image. E.g. "award-winning", "featured in {relevant set of high quality works}", "professionally acclaimed". Many people often use "masterpiece".
- **Aesthetics:** The visual impact and design of the artwork. This can be colors, mood, lighting, setting, etc.
- There are two prompt boxes: *Positive Prompt* & *Negative Prompt*.
- A **Positive** Prompt includes words you want the model to reference when creating an image.
- A **Negative** Prompt is for anything you want the model to eliminate when creating an image. It doesn't always interpret things exactly the way you would, but helps control the generation process. Always try to include a few terms - you can typically use lower quality image terms like “blurry” or “distorted” with good success.
- Some example prompts you can try on your own:
- A detailed oil painting of a tranquil forest at sunset with vibrant+ colors and soft, golden light filtering through the trees
- friends sharing beers in a busy city, realistic colored pencil sketch, twilight, masterpiece, bright, lively
### Generation Workflows
- Invoke offers a number of different workflows for interacting with models to produce images. Each is extremely powerful on its own, but together they provide an unparalleled way of producing high-quality creative outputs that align with your vision.
- **Text to Image:** The text to image tab focuses on the key workflow of using a prompt to generate a new image. It includes other features that help control the generation process as well.
- **Image to Image:** With image to image, you provide an image as a reference (called the “initial image”), which provides more guidance around color and structure to the AI as it generates a new image. This is provided alongside the same features as Text to Image.
- **Unified Canvas:** The Unified Canvas is an advanced AI-first image editing tool that is easy to use, but hard to master. Drag an image onto the canvas from your gallery in order to regenerate certain elements, edit content or colors (known as inpainting), or extend the image with an exceptional degree of consistency and clarity (called outpainting).
### Improving Image Quality
- Fine tuning your prompt - the more specific you are, the closer the image will turn out to what is in your head! Adding more details in the Positive Prompt or Negative Prompt can help add / remove pieces of your image to improve it - You can also use advanced techniques like upweighting and downweighting to control the influence of certain words. [Learn more here](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#prompt-syntax-features).
- **Tip: If you're seeing poor results, adding the things you don't like about the image to your negative prompt may help. E.g. distorted, low quality, unrealistic, etc.**
- Explore different models - Other models can produce different results due to the data they've been trained on. Each model has specific language and settings it works best with; a model's documentation is your friend here. Play around with some and see what works best for you!
- Increasing Steps - The number of steps used controls how much time the model is given to produce an image, and depends on the “Scheduler” used. The schedule controls how each step is processed by the model. More steps tends to mean better results, but will take longer - We recommend at least 30 steps for most
- Tweak and Iterate - Remember, it's best to change one thing at a time so you know what is working and what isn't. Sometimes you just need to try a new image, and other times using a new prompt might be the ticket. For testing, consider turning off the “random” Seed - Using the same seed with the same settings will produce the same image, which makes it the perfect way to learn exactly what your changes are doing.
- Explore Advanced Settings - InvokeAI has a full suite of tools available to allow you complete control over your image creation process - Check out our [docs if you want to learn more](https://invoke-ai.github.io/InvokeAI/features/).
## Terms & Concepts
If you're interested in learning more, check out [this presentation](https://docs.google.com/presentation/d/1IO78i8oEXFTZ5peuHHYkVF-Y3e2M6iM5tCnc-YBfcCM/edit?usp=sharing) from one of our maintainers (@lstein).
### Stable Diffusion
Stable Diffusion is a deep-learning, text-to-image model that is the foundation of the capabilities found in InvokeAI. Since the release of Stable Diffusion, many subsequent models have been created based on it that are designed to generate specific types of images.
### Prompts
Prompts provide the model with directions on what to generate. As a general rule of thumb, the more detailed your prompt is, the better your result will be.
### Models
Models are the magic that power InvokeAI. These files represent the output of training a machine on understanding massive amounts of images - providing them with the capability to generate new images using just a text description of what you'd like to see. (Like Stable Diffusion!)
Invoke offers a simple way to download several different models upon installation, but many more can be discovered online, including at ****. Each model can produce a unique style of output, based on the images it was trained on - Try out different models to see which best fits your creative vision!
- *Models that contain “inpainting” in the name are designed for use with the inpainting feature of the Unified Canvas*
### Scheduler
Schedulers guide the process of removing noise (de-noising) from data. They determine:
1. The number of steps to take to remove the noise.
2. Whether the steps are random (stochastic) or predictable (deterministic).
3. The specific method (algorithm) used for de-noising.
Experimenting with different schedulers is recommended as each will produce different outputs!
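To make the scheduler's role concrete, here is a hedged sketch using the Hugging Face `diffusers` library (not InvokeAI's internal API): swapping the scheduler changes only the de-noising procedure, while the model weights stay the same.

```python
# Sketch: swap the default scheduler for DPM++ 2M Karras in diffusers.
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Reuse the pipeline's scheduler config so the timestep settings stay consistent.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe("a tranquil forest at sunset", num_inference_steps=30).images[0]
```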
### Steps
The number of de-noising steps each generation runs through.
Schedulers can be intricate and there's often a balance to strike between how quickly they can de-noise data and how well they can do it. It's typically advised to experiment with different schedulers to see which one gives the best results. There has been a lot written on the internet about different schedulers, as well as exploring what the right level of "steps" are for each. You can save generation time by reducing the number of steps used, but you'll want to make sure that you are satisfied with the quality of images produced!
### Low-Rank Adaptations / LoRAs
Low-Rank Adaptations (LoRAs) are like smaller, more focused versions of models, trained to better capture how a specific character, style, or concept looks.
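As an illustration (again a `diffusers` sketch rather than InvokeAI's own loader, and the LoRA repo id is hypothetical), a LoRA is layered on top of a base model at load time:

```python
# Sketch: layer a LoRA over a base model; the LoRA id below is hypothetical.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("some-user/some-style-lora")  # hypothetical repo id
image = pipe("a portrait in the adapted style").images[0]
```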
### Textual Inversion Embeddings
Textual Inversion Embeddings, like LoRAs, assist with more easily prompting for certain characters, styles, or concepts. However, embeddings are trained to update the relationship between a specific word (known as the “trigger”) and the intended output.
### ControlNet
ControlNets are neural network models that are able to extract key features from an existing image and use these features to guide the output of the image generation model.
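A rough sketch of that flow, using the `diffusers` ControlNet classes rather than InvokeAI's nodes; the control image is assumed to be already pre-processed into Canny edges, and its path is a placeholder:

```python
# Sketch: condition generation on a pre-processed (Canny edge) control image.
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
)
canny_edges = load_image("/path/to/canny-edges.png")   # placeholder path
image = pipe("a futuristic city with trains", image=canny_edges).images[0]
```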
### VAE
Variational auto-encoder (VAE) is an encode/decode model that translates the "latents" image produced during the image generation process into the large pixel image that we see.
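A minimal sketch of the decode half of that round trip, using `diffusers`' `AutoencoderKL` with a stand-in latent tensor in place of a real de-noising result:

```python
# Sketch: decode a 64x64 latent back into a 512x512 pixel image.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
)
latents = torch.randn(1, 4, 64, 64)  # stand-in for de-noised latents
with torch.no_grad():
    pixels = vae.decode(latents / vae.config.scaling_factor).sample
print(pixels.shape)  # torch.Size([1, 3, 512, 512]); values roughly in [-1, 1]
```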

View File

@@ -11,6 +11,33 @@ title: Home
```
-->
<!-- CSS styling -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@fortawesome/fontawesome-free@6.2.1/css/fontawesome.min.css">
<style>
.button {
width: 300px;
height: 50px;
background-color: #448AFF;
color: #fff;
font-size: 16px;
border: none;
cursor: pointer;
border-radius: 0.2rem;
}
.button-container {
display: grid;
grid-template-columns: repeat(3, 300px);
gap: 20px;
}
.button:hover {
background-color: #526CFE;
}
</style>
<div align="center" markdown>
@@ -70,61 +97,23 @@ image-to-image generator. It provides a streamlined process with various new
features and options to aid the image generation process. It runs on Windows,
Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
**Quick links**: [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>]
[<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas &
Q&A</a>]
<div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>
!!! note
!!! Note
This fork is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates. They will help diagnose issues faster.
This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates as it will help improve response time.
## :octicons-package-dependencies-24: Installation
## :octicons-link-24: Quick Links
This fork is supported across Linux, Windows and Macintosh. Linux users can use
either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
### [Installation Getting Started Guide](installation)
#### **[Automated Installer](installation/010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
#### [Manual Installation](installation/020_INSTALL_MANUAL.md)
This method is recommended for experienced users and developers
#### [Docker Installation](installation/040_INSTALL_DOCKER.md)
This method is recommended for those familiar with running Docker containers
### Other Installation Guides
- [PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md)
- [XFormers](installation/070_INSTALL_XFORMERS.md)
- [CUDA and ROCm Drivers](installation/030_INSTALL_CUDA_AND_ROCM.md)
- [Installing New Models](installation/050_INSTALLING_MODELS.md)
## :fontawesome-solid-computer: Hardware Requirements
### :octicons-cpu-24: System
You will need one of the following:
- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux
only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.
We do **not recommend** the following video cards due to issues with their
running in half-precision mode and having insufficient VRAM to render 512x512
images in full-precision mode:
- NVIDIA 10xx series cards such as the 1080ti
- GTX 1650 series cards
- GTX 1660 series cards
### :fontawesome-solid-memory: Memory and Disk
- At least 12 GB Main Memory RAM.
- At least 18 GB of free disk space for the machine learning model, Python, and
all its dependencies.
<div class="button-container">
<a href="installation/INSTALLATION"> <button class="button">Installation</button> </a>
<a href="features/"> <button class="button">Features</button> </a>
<a href="help/gettingStartedWithAI/"> <button class="button">Getting Started</button> </a>
<a href="contributing/CONTRIBUTING/"> <button class="button">Contributing</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/"> <button class="button">Code and Downloads</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/issues"> <button class="button">Bug Reports </button> </a>
<a href="https://discord.gg/ZmtBAhwWhy"> <button class="button"> Join the Discord Server!</button> </a>
</div>
## :octicons-gift-24: InvokeAI Features
@@ -148,7 +137,7 @@ images in full-precision mode:
- [Model Merging](features/MODEL_MERGING.md)
- [ControlNet Models](features/CONTROLNET.md)
- [Style/Subject Concepts and Embeddings](features/CONCEPTS.md)
- [Not Safe for Work (NSFW) Checker](features/NSFW.md)
- [Watermarking and the Not Safe for Work (NSFW) Checker](features/WATERMARK+NSFW.md)
<!-- seperator -->
### Prompt Engineering
- [Prompt Syntax](features/PROMPTS.md)
@@ -230,7 +219,7 @@ encouraged to do so.
## :octicons-person-24: Contributors
This fork is a combined effort of various people from across the world.
This software is a combined effort of various people from across the world.
[Check out the list of all these amazing people](other/CONTRIBUTORS.md). We
thank them for their time, hard work and effort.

View File

@@ -40,10 +40,8 @@ experimental versions later.
this, open up a command-line window ("Terminal" on Linux and
Macintosh, "Command" or "Powershell" on Windows) and type `python
--version`. If Python is installed, it will print out the version
number. If it is version `3.9.*` or `3.10.*`, you meet
requirements. We do not recommend using Python 3.11 or higher,
as not all the libraries that InvokeAI depends on work properly
with this version.
number. If it is version `3.9.*`, `3.10.*` or `3.11.*` you meet
requirements.
!!! warning "What to do if you have an unsupported version"
@@ -215,17 +213,6 @@ experimental versions later.
Generally the defaults are fine, and you can come back to this screen at
any time to tweak your system. Here are the options you can adjust:
- ***Output directory for images***
This is the path to a directory in which InvokeAI will store all its
generated images.
- ***NSFW checker***
If checked, InvokeAI will test images for potential sexual content
and blur them out if found. Note that the NSFW checker consumes
an additional 0.6 GB of VRAM on top of the 2-3 GB of VRAM used
by most image models. If you have a low VRAM GPU (4-6 GB), you
can reduce out of memory errors by disabling the checker.
- ***HuggingFace Access Token***
InvokeAI has the ability to download embedded styles and subjects
from the HuggingFace Concept Library on-demand. However, some of
@@ -257,20 +244,30 @@ experimental versions later.
and graphics cards. The "autocast" option is deprecated and
shouldn't be used unless you are asked to by a member of the team.
- ***Number of models to cache in CPU memory***
- ***Size of the RAM cache used for fast model switching***
This allows you to keep models in memory and switch rapidly among
them rather than having them load from disk each time. This slider
controls how many models to keep loaded at once. Each
model will use 2-4 GB of RAM, so use this cautiously
controls how many models to keep loaded at once. A typical SD-1 or SD-2 model
uses 2-3 GB of memory. A typical SDXL model uses 6-7 GB. Providing more
RAM will allow more models to be co-resident.
- ***Directory containing embedding/textual inversion files***
This is the directory in which you can place custom embedding
files (.pt or .bin). During startup, this directory will be
scanned and InvokeAI will print out the text terms that
are available to trigger the embeddings.
- ***Output directory for images***
This is the path to a directory in which InvokeAI will store all its
generated images.
- ***Autoimport Folder***
This is the directory in which you can place models you have
downloaded and wish to load into InvokeAI. You can place a variety
of models in this directory, including diffusers folders, .ckpt files,
.safetensors files, as well as LoRAs, ControlNet and Textual Inversion
files (both folder and file versions). To help organize this folder,
you can create several levels of subfolders and drop your models into
whichever ones you want.
- ***LICENSE***
At the bottom of the screen you will see a checkbox for accepting
the CreativeML Responsible AI License. You need to accept the license
the CreativeML Responsible AI Licenses. You need to accept the license
in order to download Stable Diffusion models from the next screen.
_You can come back to the startup options form_ as many times as you like.
@@ -375,8 +372,71 @@ experimental versions later.
Once InvokeAI is installed, do not move or remove this directory."
<a name="troubleshooting"></a>
## Troubleshooting
### _OSErrors on Windows while installing dependencies_
During a zip file installation or an online update, installation stops
with an error like this:
![broken-dependency-screenshot](../assets/troubleshooting/broken-dependency.png){:width="800px"}
This seems to happen particularly often with the `pydantic` and
`numpy` packages. The most reliable solution requires several manual
steps to complete installation.
Open up a Powershell window and navigate to the `invokeai` directory
created by the installer. Then give the following series of commands:
```cmd
rm .\.venv -r -force
python -mvenv .venv
.\.venv\Scripts\activate
pip install invokeai
invokeai-configure --yes --root .
```
If you see anything marked as an error during this process please stop
and seek help on the Discord [installation support
channel](https://discord.com/channels/1020123559063990373/1041391462190956654). A
few warning messages are OK.
If you are updating from a previous version, this should restore your
system to a working state. If you are installing from scratch, there
is one additional command to give:
```cmd
wget -O invoke.bat https://raw.githubusercontent.com/invoke-ai/InvokeAI/main/installer/templates/invoke.bat.in
```
This will create the `invoke.bat` script needed to launch InvokeAI and
its related programs.
### _Stable Diffusion XL Generation Fails after Trying to Load unet_
InvokeAI is working in other respects, but when trying to generate
images with Stable Diffusion XL you get a "Server Error". The text log
in the launch window contains this log line above several more lines of
error messages:
```INFO --> Loading model:D:\LONG\PATH\TO\MODEL, type sdxl:main:unet```
This failure mode occurs when there is a network glitch while
downloading the very large SDXL model.
To address this, first go to the Web Model Manager and delete the
Stable-Diffusion-XL-base-1.X model. Then navigate to HuggingFace and
manually download the .safetensors version of the model. The 1.0
version is located at
https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main
and the file is named `sd_xl_base_1.0.safetensors`.
Save this file to disk and then reenter the Model Manager. Navigate to
Import Models->Add Model, then type (or drag-and-drop) the path to the
.safetensors file. Press "Add Model".
### _Package dependency conflicts_
If you have previously installed InvokeAI or another Stable Diffusion
@@ -411,7 +471,7 @@ Then type the following commands:
=== "NVIDIA System"
```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu117
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118
pip install xformers
```

View File

@@ -32,7 +32,7 @@ gaming):
* **Python**
version 3.9 or 3.10 (3.11 is not recommended).
version 3.9 through 3.11
* **CUDA Tools**
@@ -65,7 +65,7 @@ gaming):
To install InvokeAI with virtual environments and the PIP package
manager, please follow these steps:
1. Please make sure you are using Python 3.9 or 3.10. The rest of the install
1. Please make sure you are using Python 3.9 through 3.11. The rest of the install
procedure depends on this and will not work with other versions:
```bash
@@ -148,7 +148,7 @@ manager, please follow these steps:
=== "CUDA (NVidia)"
```bash
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
```
=== "ROCm (AMD)"
@@ -192,8 +192,10 @@ manager, please follow these steps:
your outputs.
```terminal
invokeai-configure
invokeai-configure --root .
```
Don't miss the dot at the end of the command!
The script `invokeai-configure` will interactively guide you through the
process of downloading and installing the weights files needed for InvokeAI.
@@ -225,12 +227,6 @@ manager, please follow these steps:
!!! warning "Make sure that the virtual environment is activated, which should create `(.venv)` in front of your prompt!"
=== "CLI"
```bash
invokeai
```
=== "local Webserver"
```bash
@@ -243,6 +239,12 @@ manager, please follow these steps:
invokeai --web --host 0.0.0.0
```
=== "CLI"
```bash
invokeai
```
If you choose to run the web interface, point your browser at
http://localhost:9090 in order to load the GUI.
@@ -310,7 +312,7 @@ installation protocol (important!)
=== "CUDA (NVidia)"
```bash
pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
```
=== "ROCm (AMD)"
@ -354,7 +356,7 @@ you can do so using this unsupported recipe:
mkdir ~/invokeai
conda create -n invokeai python=3.10
conda activate invokeai
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
invokeai-configure --root ~/invokeai
invokeai --root ~/invokeai --web
```


@ -34,11 +34,11 @@ directly from NVIDIA. **Do not try to install Ubuntu's
nvidia-cuda-toolkit package. It is out of date and will cause
conflicts among the NVIDIA driver and binaries.**
Go to [CUDA Toolkit 11.7
Downloads](https://developer.nvidia.com/cuda-11-7-0-download-archive),
and use the target selection wizard to choose your operating system,
hardware platform, and preferred installation method (e.g. "local"
versus "network").
Go to [CUDA Toolkit
Downloads](https://developer.nvidia.com/cuda-downloads), and use the
target selection wizard to choose your operating system, hardware
platform, and preferred installation method (e.g. "local" versus
"network").
This will provide you with a downloadable install file or, depending
on your choices, a recipe for downloading and running an install shell
@ -61,7 +61,7 @@ Runtime Site](https://developer.nvidia.com/nvidia-container-runtime)
When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url
https://download.pytorch.org/whl/cu117` as described in the [Manual
https://download.pytorch.org/whl/cu118` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md).
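After reinstalling, you can confirm that the CUDA build of torch is the one actually in use with a quick check (a sketch; run it inside the activated virtual environment):
```python
# Sanity check: confirm the CUDA-enabled torch wheel is installed.
import torch

print(torch.__version__)          # expect something like "2.0.1+cu118"
print(torch.cuda.is_available())  # True when the driver and wheel match
```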
## :simple-amd: ROCm


@ -124,7 +124,7 @@ installation. Examples:
invokeai-model-install --list controlnet
# (install the model at the indicated URL)
invokeai-model-install --add http://civitai.com/2860
invokeai-model-install --add https://civitai.com/api/download/models/128713
# (delete the named model)
invokeai-model-install --delete sd-1/main/analog-diffusion
@ -170,4 +170,4 @@ elsewhere on disk and they will be autoimported. You can also create
subfolders and organize them as you wish.
The location of the autoimport directories are controlled by settings
in `invokeai.yaml`. See [Configuration](../features/CONFIGURATION.md).


@ -28,18 +28,21 @@ command line, then just be sure to activate its virtual environment.
Then run the following three commands:
```sh
pip install xformers==0.0.16rc425
pip install triton
pip install xformers~=0.0.19
pip install triton # WON'T WORK ON WINDOWS
python -m xformers.info output
```
The first command installs `xformers`, the second installs the
`triton` training accelerator, and the third prints out the `xformers`
installation status. If all goes well, you'll see a report like the
installation status. On Windows, please omit the `triton` package,
which is not available on that platform.
If all goes well, you'll see a report like the
following:
```sh
xFormers 0.0.16rc425
xFormers 0.0.20
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
@ -48,22 +51,28 @@ memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: available
memory_efficient_attention.tritonflashattB: available
indexing.scaled_index_addF: available
indexing.scaled_index_addB: available
indexing.index_select: available
swiglu.dual_gemm_silu: available
swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: True
is_functorch_available: False
pytorch.version: 1.13.1+cu117
pytorch.version: 2.0.1+cu118
pytorch.cuda: available
gpu.compute_capability: 8.6
gpu.name: NVIDIA RTX A2000 12GB
gpu.compute_capability: 8.9
gpu.name: NVIDIA GeForce RTX 4070
build.info: available
build.cuda_version: 1107
build.python_version: 3.10.9
build.torch_version: 1.13.1+cu117
build.cuda_version: 1108
build.python_version: 3.10.11
build.torch_version: 2.0.1+cu118
build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.16rc425
build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.20
build.nvcc_version: 11.8.89
source.privacy: open source
```
@ -83,14 +92,14 @@ installed from source. These instructions were written for a system
running Ubuntu 22.04, but other Linux distributions should be able to
adapt this recipe.
#### 1. Install CUDA Toolkit 11.7
#### 1. Install CUDA Toolkit 11.8
You will need the CUDA developer's toolkit in order to compile and
install xFormers. **Do not try to install Ubuntu's nvidia-cuda-toolkit
package.** It is out of date and will cause conflicts among the NVIDIA
driver and binaries. Instead install the CUDA Toolkit package provided
by NVIDIA itself. Go to [CUDA Toolkit 11.7
Downloads](https://developer.nvidia.com/cuda-11-7-0-download-archive)
by NVIDIA itself. Go to [CUDA Toolkit 11.8
Downloads](https://developer.nvidia.com/cuda-11-8-0-download-archive)
and use the target selection wizard to choose your platform and Linux
distribution. Select an installer type of "runfile (local)" at the
last step.
@ -101,17 +110,17 @@ example, the install script recipe for Ubuntu 22.04 running on a
x86_64 system is:
```
wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda_11.7.0_515.43.04_linux.run
sudo sh cuda_11.7.0_515.43.04_linux.run
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
sudo sh cuda_11.8.0_520.61.05_linux.run
```
Rather than cut and paste this example, we recommend that you walk
through the toolkit wizard to get the most up-to-date installer for
your system.
#### 2. Confirm/Install pyTorch 1.13 with CUDA 11.7 support
#### 2. Confirm/Install PyTorch 2.0.1 with CUDA 11.8 support
If you are using InvokeAI 2.3 or higher, these will already be
If you are using InvokeAI 3.0.2 or higher, these will already be
installed. If not, you can check whether you have the needed libraries
using a quick command. Activate the invokeai virtual environment,
either by entering the "developer's console", or manually with a
@ -124,7 +133,7 @@ Then run the command:
python -c 'exec("import torch\nprint(torch.__version__)")'
```
If it prints __1.13.1+cu117__ you're good. If not, you can install the
If it prints __2.0.1+cu118__ you're good. If not, you can install the
most up to date libraries with this command:
```sh


@ -1,6 +1,4 @@
---
title: Overview
---
# Overview
We offer several ways to install InvokeAI, each one suited to your
experience and preferences. We suggest that everyone start by
@ -15,6 +13,56 @@ See the [troubleshooting
section](010_INSTALL_AUTOMATED.md#troubleshooting) of the automated
install guide for frequently-encountered installation issues.
This fork is supported across Linux, Windows and Macintosh. Linux users can use
either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
### [Installation Getting Started Guide](installation)
#### **[Automated Installer](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
#### [Manual Installation](020_INSTALL_MANUAL.md)
This method is recommended for experienced users and developers.
#### [Docker Installation](040_INSTALL_DOCKER.md)
This method is recommended for those familiar with running Docker containers
### Other Installation Guides
- [PyPatchMatch](060_INSTALL_PATCHMATCH.md)
- [XFormers](070_INSTALL_XFORMERS.md)
- [CUDA and ROCm Drivers](030_INSTALL_CUDA_AND_ROCM.md)
- [Installing New Models](050_INSTALLING_MODELS.md)
## :fontawesome-solid-computer: Hardware Requirements
### :octicons-cpu-24: System
You will need one of the following:
- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux
only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.
**SDXL 1.0 Requirements**
To use SDXL, users must have one of the following:
- :simple-nvidia: An NVIDIA-based graphics card with 8 GB or more VRAM memory.
- :simple-amd: An AMD-based graphics card with 16 GB or more VRAM memory (Linux
only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.
### :fontawesome-solid-memory: Memory and Disk
- At least 12 GB of main memory (RAM).
- At least 18 GB of free disk space for the machine learning model, Python, and
all its dependencies.
We do **not recommend** the following video cards due to issues with their
running in half-precision mode and having insufficient VRAM to render 512x512
images in full-precision mode:
- NVIDIA 10xx series cards such as the 1080ti
- GTX 1650 series cards
- GTX 1660 series cards
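If you are unsure how much VRAM your card provides, torch can report it directly (a sketch; assumes a CUDA build of torch is already installed):
```python
# Sketch: report the name and total VRAM of the first CUDA device.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gib = props.total_memory / (1024**3)
    print(f"{props.name}: {vram_gib:.1f} GiB")  # SDXL wants 8 GiB or more
else:
    print("No CUDA device detected")
```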
## Installation options
1. [Automated Installer](010_INSTALL_AUTOMATED.md)


@ -14,23 +14,28 @@ The nodes linked below have been developed and contributed by members of the Inv
## List of Nodes
### Face Mask
### FaceTools
**Description:** This node autodetects a face in the image using MediaPipe and masks it by making it transparent. Via outpainting you can swap faces with other faces, or invert the mask and swap things around the face with other things. Additionally, you can supply X and Y offset values to scale/change the shape of the mask for finer control. The node also outputs an all-white mask in the same dimensions as the input image. This is needed by the inpaint node (and unified canvas) for outpainting.
**Description:** FaceTools is a collection of nodes created to manipulate faces as you would in Unified Canvas. It includes FaceMask, FaceOff, and FacePlace. FaceMask autodetects a face in the image using MediaPipe and creates a mask from it. FaceOff similarly detects a face, then takes the face off of the image by adding a square bounding box around it and cropping/scaling it. FacePlace puts the bounded face image from FaceOff back onto the original image. Using these nodes with other inpainting node(s), you can put new faces on existing things, put new things around existing faces, and work closer with a face as a bounded image. Additionally, you can supply X and Y offset values to scale/change the shape of the mask for finer control on FaceMask and FaceOff. See GitHub repository below for usage examples.
**Node Link:** https://github.com/ymgenesis/InvokeAI/blob/facemaskmediapipe/invokeai/app/invocations/facemask.py
**Node Link:** https://github.com/ymgenesis/FaceTools/
**Example Node Graph:** https://www.mediafire.com/file/gohn5sb1bfp8use/21-July_2023-FaceMask.json/file
**FaceMask Output Examples**
**Output Examples**
![5cc8abce-53b0-487a-b891-3bf94dcc8960](https://github.com/invoke-ai/InvokeAI/assets/25252829/43f36d24-1429-4ab1-bd06-a4bedfe0955e)
![b920b710-1882-49a0-8d02-82dff2cca907](https://github.com/invoke-ai/InvokeAI/assets/25252829/7660c1ed-bf7d-4d0a-947f-1fc1679557ba)
![71a91805-fda5-481c-b380-264665703133](https://github.com/invoke-ai/InvokeAI/assets/25252829/f8f6a2ee-2b68-4482-87da-b90221d5c3e2)
![2e3168cb-af6a-475d-bfac-c7b7fd67b4c2](https://github.com/ymgenesis/InvokeAI/assets/25252829/a5ad7d44-2ada-4b3c-a56e-a21f8244a1ac)
![2_annotated](https://github.com/ymgenesis/InvokeAI/assets/25252829/53416c8a-a23b-4d76-bb6d-3cfd776e0096)
![2fe2150c-fd08-4e26-8c36-f0610bf441bb](https://github.com/ymgenesis/InvokeAI/assets/25252829/b0f7ecfe-f093-4147-a904-b9f131b41dc9)
![831b6b98-4f0f-4360-93c8-69a9c1338cbe](https://github.com/ymgenesis/InvokeAI/assets/25252829/fc7b0622-e361-4155-8a76-082894d084f0)
<hr>
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
**Node Link:** https://github.com/JPPhoto/ideal-size-node
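The general idea (a rough sketch for illustration, not the node's actual code) is to scale the requested size down toward the model's native training area while preserving aspect ratio and snapping to multiples of 8:
```python
# Sketch: pick a first-pass size near the model's native area (assumed 512).
import math

def ideal_first_pass(width: int, height: int, native: int = 512) -> tuple[int, int]:
    scale = math.sqrt((native * native) / (width * height))
    if scale >= 1.0:  # already at or below the native area: keep as-is
        return width, height
    snap = lambda v: max(8, int(v * scale) // 8 * 8)  # round down to multiple of 8
    return snap(width), snap(height)

print(ideal_first_pass(1536, 1024))  # -> (624, 416), close to 512x512 in area
```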
--------------------------------
### Super Cool Node Template
### Example Node Template
**Description:** This node allows you to do super cool things with InvokeAI.
@ -40,13 +45,9 @@ The nodes linked below have been developed and contributed by members of the Inv
**Output Examples**
![Invoke AI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png)
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
**Node Link:** https://github.com/JPPhoto/ideal-size-node
![Example Image](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png){: style="height:115px;width:240px"}
## Help
If you run into any issues with a node, please post in the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).

flake.lock (new generated file, 25 lines)

@ -0,0 +1,25 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1690630721,
"narHash": "sha256-Y04onHyBQT4Erfr2fc82dbJTfXGYrf4V0ysLUYnPOP8=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "d2b52322f35597c62abf56de91b0236746b2a03d",
"type": "github"
},
"original": {
"id": "nixpkgs",
"type": "indirect"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

flake.nix (new file, 91 lines)

@ -0,0 +1,91 @@
# Important note: this flake does not attempt to create a fully isolated, 'pure'
# Python environment for InvokeAI. Instead, it depends on local invocations of
# virtualenv/pip to install the required (binary) packages, most importantly the
# prebuilt binary pytorch packages with CUDA support.
# ML Python packages with CUDA support, like pytorch, are notoriously expensive
# to compile, so that is purposefully not what this flake does.
{
description = "An (impure) flake to develop on InvokeAI.";
outputs = { self, nixpkgs }:
let
system = "x86_64-linux";
pkgs = import nixpkgs {
inherit system;
config.allowUnfree = true;
};
python = pkgs.python310;
mkShell = { dir, install }:
let
setupScript = pkgs.writeScript "setup-invokai" ''
# This must be sourced using 'source', not executed.
${python}/bin/python -m venv ${dir}
${dir}/bin/python -m pip install ${install}
# ${dir}/bin/python -c 'import torch; assert(torch.cuda.is_available())'
source ${dir}/bin/activate
'';
in
pkgs.mkShell rec {
buildInputs = with pkgs; [
# Backend: graphics, CUDA.
cudaPackages.cudnn
cudaPackages.cuda_nvrtc
cudatoolkit
pkgconfig
libconfig
cmake
blas
freeglut
glib
gperf
procps
libGL
libGLU
linuxPackages.nvidia_x11
python
(opencv4.override {
enableGtk3 = true;
enableFfmpeg = true;
enableCuda = true;
enableUnfree = true;
})
stdenv.cc
stdenv.cc.cc.lib
xorg.libX11
xorg.libXext
xorg.libXi
xorg.libXmu
xorg.libXrandr
xorg.libXv
zlib
# Pre-commit hooks.
black
# Frontend.
yarn
nodejs
];
LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath buildInputs;
CUDA_PATH = pkgs.cudatoolkit;
EXTRA_LDFLAGS = "-L${pkgs.linuxPackages.nvidia_x11}/lib";
shellHook = ''
if [[ -f "${dir}/bin/activate" ]]; then
source "${dir}/bin/activate"
echo "Using Python: $(which python)"
else
echo "Use 'source ${setupScript}' to set up the environment."
fi
'';
};
in
{
devShells.${system} = rec {
develop = mkShell { dir = "venv"; install = "-e '.[xformers]' --extra-index-url https://download.pytorch.org/whl/cu118"; };
default = develop;
};
};
}


@ -9,16 +9,20 @@ cd $scriptdir
function version { echo "$@" | awk -F. '{ printf("%d%03d%03d%03d\n", $1,$2,$3,$4); }'; }
MINIMUM_PYTHON_VERSION=3.9.0
MAXIMUM_PYTHON_VERSION=3.11.0
MAXIMUM_PYTHON_VERSION=3.11.100
PYTHON=""
for candidate in python3.10 python3.9 python3 python ; do
for candidate in python3.11 python3.10 python3.9 python3 python ; do
if ppath=`which $candidate`; then
# when using `pyenv`, the executable for an inactive Python version will exist but will not be operational
# we check that this found executable can actually run
if [ $($candidate --version &>/dev/null; echo ${PIPESTATUS}) -gt 0 ]; then continue; fi
python_version=$($ppath -V | awk '{ print $2 }')
if [ $(version $python_version) -ge $(version "$MINIMUM_PYTHON_VERSION") ]; then
if [ $(version $python_version) -lt $(version "$MAXIMUM_PYTHON_VERSION") ]; then
PYTHON=$ppath
break
fi
if [ $(version $python_version) -le $(version "$MAXIMUM_PYTHON_VERSION") ]; then
PYTHON=$ppath
break
fi
fi
fi
done
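For reference, the `version` helper above zero-pads each dotted component so that version strings compare as plain integers; the same idea as a Python sketch:
```python
# Python sketch of the awk `version` helper used by the launcher script.
def version_key(v: str) -> int:
    """Map '3.10.2' to 3010002000 so versions compare as plain integers."""
    parts = (v.split(".") + ["0", "0", "0"])[:4]
    return int(parts[0] + "".join(f"{int(p):03d}" for p in parts[1:]))

assert version_key("3.9.0") < version_key("3.10.4") <= version_key("3.11.100")
```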


@ -13,7 +13,7 @@ from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Union
SUPPORTED_PYTHON = ">=3.9.0,<3.11"
SUPPORTED_PYTHON = ">=3.9.0,<=3.11.100"
INSTALLER_REQS = ["rich", "semver", "requests", "plumbum", "prompt-toolkit"]
BOOTSTRAP_VENV_PREFIX = "invokeai-installer-tmp"
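The widened specifier now admits 3.11 patch releases. One way to see what it accepts (a sketch using the `packaging` library, which is not necessarily part of the installer's bootstrap environment):
```python
# Sketch: what the widened Python version specifier accepts.
from packaging.specifiers import SpecifierSet

supported = SpecifierSet(">=3.9.0,<=3.11.100")
for candidate in ("3.8.10", "3.9.0", "3.11.4", "3.12.0"):
    print(candidate, candidate in supported)
# 3.8.10 False, 3.9.0 True, 3.11.4 True, 3.12.0 False
```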
@ -141,15 +141,16 @@ class Installer:
# upgrade pip in Python 3.9 environments
if int(platform.python_version_tuple()[1]) == 9:
from plumbum import FG, local
pip = local[get_pip_from_venv(venv_dir)]
pip[ "install", "--upgrade", "pip"] & FG
pip["install", "--upgrade", "pip"] & FG
return venv_dir
def install(self, root: str = "~/invokeai-3", version: str = "latest", yes_to_all=False, find_links: Path = None) -> None:
def install(
self, root: str = "~/invokeai", version: str = "latest", yes_to_all=False, find_links: Path = None
) -> None:
"""
Install the InvokeAI application into the given runtime path
@ -167,7 +168,8 @@ class Installer:
messages.welcome()
self.dest = Path(root).expanduser().resolve() if yes_to_all else messages.dest_path(root)
default_path = os.environ.get("INVOKEAI_ROOT") or Path(root).expanduser().resolve()
self.dest = default_path if yes_to_all else messages.dest_path(root)
# create the venv for the app
self.venv = self.app_venv()
@ -175,7 +177,7 @@ class Installer:
self.instance = InvokeAiInstance(runtime=self.dest, venv=self.venv, version=version)
# install dependencies and the InvokeAI application
(extra_index_url,optional_modules) = get_torch_source() if not yes_to_all else (None,None)
(extra_index_url, optional_modules) = get_torch_source() if not yes_to_all else (None, None)
self.instance.install(
extra_index_url,
optional_modules,
@ -188,6 +190,7 @@ class Installer:
# run through the configuration flow
self.instance.configure()
class InvokeAiInstance:
"""
Manages an installed instance of InvokeAI, comprising a virtual environment and a runtime directory.
@ -196,7 +199,6 @@ class InvokeAiInstance:
"""
def __init__(self, runtime: Path, venv: Path, version: str) -> None:
self.runtime = runtime
self.venv = venv
self.pip = get_pip_from_venv(venv)
@ -247,6 +249,9 @@ class InvokeAiInstance:
pip[
"install",
"--require-virtualenv",
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch~=2.0.0",
"torchmetrics==0.11.4",
"torchvision>=0.14.1",
@ -312,7 +317,7 @@ class InvokeAiInstance:
"install",
"--require-virtualenv",
"--use-pep517",
str(src)+(optional_modules if optional_modules else ''),
str(src) + (optional_modules if optional_modules else ""),
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
@ -329,21 +334,21 @@ class InvokeAiInstance:
# set sys.argv to a consistent state
new_argv = [sys.argv[0]]
for i in range(1,len(sys.argv)):
for i in range(1, len(sys.argv)):
el = sys.argv[i]
if el in ['-r','--root']:
if el in ["-r", "--root"]:
new_argv.append(el)
new_argv.append(sys.argv[i+1])
elif el in ['-y','--yes','--yes-to-all']:
new_argv.append(sys.argv[i + 1])
elif el in ["-y", "--yes", "--yes-to-all"]:
new_argv.append(el)
sys.argv = new_argv
import requests # to catch download exceptions
from messages import introduction
introduction()
from invokeai.frontend.install import invokeai_configure
from invokeai.frontend.install.invokeai_configure import invokeai_configure
# NOTE: currently the config script does its own arg parsing! this means the command-line switches
# from the installer will also automatically propagate down to the config script.
@ -353,16 +358,16 @@ class InvokeAiInstance:
invokeai_configure()
succeeded = True
except requests.exceptions.ConnectionError as e:
print(f'\nA network error was encountered during configuration and download: {str(e)}')
print(f"\nA network error was encountered during configuration and download: {str(e)}")
except OSError as e:
print(f'\nAn OS error was encountered during configuration and download: {str(e)}')
print(f"\nAn OS error was encountered during configuration and download: {str(e)}")
except Exception as e:
print(f'\nA problem was encountered during the configuration and download steps: {str(e)}')
print(f"\nA problem was encountered during the configuration and download steps: {str(e)}")
finally:
if not succeeded:
print('To try again, find the "invokeai" directory, run the script "invoke.sh" or "invoke.bat"')
print('and choose option 7 to fix a broken install, optionally followed by option 5 to install models.')
print('Alternatively you can relaunch the installer.')
print("and choose option 7 to fix a broken install, optionally followed by option 5 to install models.")
print("Alternatively you can relaunch the installer.")
def install_user_scripts(self):
"""
@ -371,11 +376,11 @@ class InvokeAiInstance:
ext = "bat" if OS == "Windows" else "sh"
#scripts = ['invoke', 'update']
scripts = ['invoke']
# scripts = ['invoke', 'update']
scripts = ["invoke"]
for script in scripts:
src = Path(__file__).parent / '..' / "templates" / f"{script}.{ext}.in"
src = Path(__file__).parent / ".." / "templates" / f"{script}.{ext}.in"
dest = self.runtime / f"{script}.{ext}"
shutil.copy(src, dest)
os.chmod(dest, 0o0755)
@ -420,11 +425,7 @@ def set_sys_path(venv_path: Path) -> None:
# filter out any paths in sys.path that may be system- or user-wide
# but leave the temporary bootstrap virtualenv as it contains packages we
# temporarily need at install time
sys.path = list(filter(
lambda p: not p.endswith("-packages")
or p.find(BOOTSTRAP_VENV_PREFIX) != -1,
sys.path
))
sys.path = list(filter(lambda p: not p.endswith("-packages") or p.find(BOOTSTRAP_VENV_PREFIX) != -1, sys.path))
# determine site-packages/lib directory location for the venv
lib = "Lib" if OS == "Windows" else f"lib/python{sys.version_info.major}.{sys.version_info.minor}"
@ -433,7 +434,7 @@ def set_sys_path(venv_path: Path) -> None:
sys.path.append(str(Path(venv_path, lib, "site-packages").expanduser().resolve()))
def get_torch_source() -> (Union[str, None],str):
def get_torch_source() -> (Union[str, None], str):
"""
Determine the extra index URL for pip to use for torch installation.
This depends on the OS and the graphics accelerator in use.
@ -454,16 +455,19 @@ def get_torch_source() -> (Union[str, None],str):
device = graphical_accelerator()
url = None
optional_modules = None
optional_modules = "[onnx]"
if OS == "Linux":
if device == "rocm":
url = "https://download.pytorch.org/whl/rocm5.4.2"
elif device == "cpu":
url = "https://download.pytorch.org/whl/cpu"
if device == 'cuda':
url = 'https://download.pytorch.org/whl/cu117'
optional_modules = '[xformers]'
if device == "cuda":
url = "https://download.pytorch.org/whl/cu118"
optional_modules = "[xformers,onnx-cuda]"
if device == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu118"
optional_modules = "[xformers,onnx-directml]"
# in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13
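For context, the returned `(extra_index_url, optional_modules)` pair is later spliced into the `pip install` arguments, roughly like this (a sketch with example values, not the installer's exact invocation):
```python
# Sketch: how the pair might feed a pip command line.
extra_index_url = "https://download.pytorch.org/whl/cu118"
optional_modules = "[xformers,onnx-cuda]"

cmd = ["pip", "install", "--use-pep517", f"InvokeAI{optional_modules}"]
if extra_index_url is not None:
    cmd += ["--extra-index-url", extra_index_url]
print(" ".join(cmd))
```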


@ -3,6 +3,7 @@ InvokeAI Installer
"""
import argparse
import os
from pathlib import Path
from installer import Installer
@ -15,7 +16,7 @@ if __name__ == "__main__":
dest="root",
type=str,
help="Destination path for installation",
default="~/invokeai",
default=os.environ.get("INVOKEAI_ROOT") or "~/invokeai",
)
parser.add_argument(
"-y",
@ -41,7 +42,7 @@ if __name__ == "__main__":
type=Path,
default=None,
)
args = parser.parse_args()
inst = Installer()
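With this change the install destination resolves in order: an explicit `--root` flag, then the `INVOKEAI_ROOT` environment variable, then `~/invokeai`. The same pattern in isolation (a sketch):
```python
# Sketch of the resolution order: --root flag > INVOKEAI_ROOT > ~/invokeai.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument(
    "--root",
    default=os.environ.get("INVOKEAI_ROOT") or "~/invokeai",
)
print(parser.parse_args().root)
# INVOKEAI_ROOT=/data/iai python demo.py            -> /data/iai
# INVOKEAI_ROOT=/data/iai python demo.py --root /x  -> /x
```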


@ -36,13 +36,15 @@ else:
def welcome():
@group()
def text():
if (platform_specific := _platform_specific_help()) != "":
yield platform_specific
yield ""
yield Text.from_markup("Some of the installation steps take a long time to run. Please be patient. If the script appears to hang for more than 10 minutes, please interrupt with [i]Control-C[/] and retry.", justify="center")
yield Text.from_markup(
"Some of the installation steps take a long time to run. Please be patient. If the script appears to hang for more than 10 minutes, please interrupt with [i]Control-C[/] and retry.",
justify="center",
)
console.rule()
print(
@ -58,6 +60,7 @@ def welcome():
)
console.line()
def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":exclamation: Directory {dest} already exists :exclamation:")
@ -92,7 +95,6 @@ def dest_path(dest=None) -> Path:
dest_confirmed = confirm_install(dest)
while not dest_confirmed:
# if the given destination already exists, the starting point for browsing is its parent directory.
# the user may have made a typo, or otherwise wants to place the root dir next to an existing one.
# if the destination dir does NOT exist, then the user must have changed their mind about the selection.
@ -165,6 +167,10 @@ def graphical_accelerator():
"an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
"cuda",
)
nvidia_with_dml = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA",
"cuda_and_dml",
)
amd = (
"an [gold1 b]AMD[/] GPU (using ROCm™)",
"rocm",
@ -179,7 +185,7 @@ def graphical_accelerator():
)
if OS == "Windows":
options = [nvidia, cpu]
options = [nvidia, nvidia_with_dml, cpu]
if OS == "Linux":
options = [nvidia, amd, cpu]
elif OS == "Darwin":
@ -300,15 +306,20 @@ def introduction() -> None:
)
console.line(2)
def _platform_specific_help()->str:
def _platform_specific_help() -> str:
if OS == "Darwin":
text = Text.from_markup("""[b wheat1]macOS Users![/]\n\nPlease be sure you have the [b wheat1]Xcode command-line tools[/] installed before continuing.\nIf not, cancel with [i]Control-C[/] and follow the Xcode install instructions at [deep_sky_blue1]https://www.freecodecamp.org/news/install-xcode-command-line-tools/[/].""")
text = Text.from_markup(
"""[b wheat1]macOS Users![/]\n\nPlease be sure you have the [b wheat1]Xcode command-line tools[/] installed before continuing.\nIf not, cancel with [i]Control-C[/] and follow the Xcode install instructions at [deep_sky_blue1]https://www.freecodecamp.org/news/install-xcode-command-line-tools/[/]."""
)
elif OS == "Windows":
text = Text.from_markup("""[b wheat1]Windows Users![/]\n\nBefore you start, please do the following:
text = Text.from_markup(
"""[b wheat1]Windows Users![/]\n\nBefore you start, please do the following:
1. Double-click on the file [b wheat1]WinLongPathsEnabled.reg[/] in order to
enable long path support on your system.
2. Make sure you have the [b wheat1]Visual C++ core libraries[/] installed. If not, install from
[deep_sky_blue1]https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170[/]""")
[deep_sky_blue1]https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170[/]"""
)
else:
text = ""
return text


@ -8,16 +8,13 @@ Preparations:
to work. Instructions are given here:
https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/
NOTE: At this time we do not recommend Python 3.11. We recommend
Version 3.10.9, which has been extensively tested with InvokeAI.
Before you start the installer, please open up your system's command
line window (Terminal or Command) and type the commands:
python --version
If all is well, it will print "Python 3.X.X", where the version number
is at least 3.9.1, and less than 3.11.
is at least 3.9.*, and not higher than 3.11.*.
If this works, check the version of the Python package manager, pip:


@ -41,7 +41,7 @@ IF /I "%choice%" == "1" (
python .venv\Scripts\invokeai-configure.exe --skip-sd-weight --skip-support-models
) ELSE IF /I "%choice%" == "7" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --yes --default_only
python .venv\Scripts\invokeai-configure.exe --yes --skip-sd-weight
) ELSE IF /I "%choice%" == "8" (
echo Developer Console
echo Python command is:


@ -82,7 +82,7 @@ do_choice() {
7)
clear
printf "Re-run the configure script to fix a broken install or to complete a major upgrade\n"
invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only
invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only --skip-sd-weights
;;
8)
clear


@ -1,7 +1,7 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Optional
from logging import Logger
import os
from invokeai.app.services.board_image_record_storage import (
SqliteBoardImageRecordStorage,
)
@ -29,6 +29,7 @@ from ..services.invoker import Invoker
from ..services.processor import DefaultInvocationProcessor
from ..services.sqlite import SqliteItemStorage
from ..services.model_manager_service import ModelManagerService
from ..services.invocation_stats import InvocationStatsService
from .events import FastAPIEventService
@ -54,7 +55,7 @@ logger = InvokeAILogger.getLogger()
class ApiDependencies:
"""Contains and initializes all dependencies for the API"""
invoker: Invoker = None
invoker: Invoker
@staticmethod
def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger):
@ -67,8 +68,9 @@ class ApiDependencies:
output_folder = config.output_path
# TODO: build a file/path manager?
db_location = config.db_path
db_location.parent.mkdir(parents=True, exist_ok=True)
db_path = config.db_path
db_path.parent.mkdir(parents=True, exist_ok=True)
db_location = str(db_path)
graph_execution_manager = SqliteItemStorage[GraphExecutionState](
filename=db_location, table_name="graph_executions"
@ -78,9 +80,7 @@ class ApiDependencies:
image_record_storage = SqliteImageRecordStorage(db_location)
image_file_storage = DiskImageFileStorage(f"{output_folder}/images")
names = SimpleNameService()
latents = ForwardCacheLatentsStorage(
DiskLatentsStorage(f"{output_folder}/latents")
)
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents"))
board_record_storage = SqliteBoardRecordStorage(db_location)
board_image_record_storage = SqliteBoardImageRecordStorage(db_location)
@ -125,12 +125,11 @@ class ApiDependencies:
boards=boards,
board_images=board_images,
queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](
filename=db_location, table_name="graphs"
),
graph_library=SqliteItemStorage[LibraryGraph](filename=db_location, table_name="graphs"),
graph_execution_manager=graph_execution_manager,
processor=DefaultInvocationProcessor(),
configuration=config,
performance_statistics=InvocationStatsService(graph_execution_manager),
logger=logger,
)


@ -1,14 +1,21 @@
import typing
from enum import Enum
from fastapi import Body
from fastapi.routing import APIRouter
from pathlib import Path
from pydantic import BaseModel, Field
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.version import __version__
from ..dependencies import ApiDependencies
from invokeai.backend.util.logging import logging
class LogLevel(int, Enum):
NotSet = logging.NOTSET
Debug = logging.DEBUG
@ -16,7 +23,13 @@ class LogLevel(int, Enum):
Warning = logging.WARNING
Error = logging.ERROR
Critical = logging.CRITICAL
class Upscaler(BaseModel):
upscaling_method: str = Field(description="Name of upscaling method")
upscaling_models: list[str] = Field(description="List of upscaling models for this method")
app_router = APIRouter(prefix="/v1/app", tags=["app"])
@ -30,43 +43,62 @@ class AppConfig(BaseModel):
"""App Config Response"""
infill_methods: list[str] = Field(description="List of available infill methods")
upscaling_methods: list[Upscaler] = Field(description="List of upscaling methods")
nsfw_methods: list[str] = Field(description="List of NSFW checking methods")
watermarking_methods: list[str] = Field(description="List of invisible watermark methods")
@app_router.get(
"/version", operation_id="app_version", status_code=200, response_model=AppVersion
)
@app_router.get("/version", operation_id="app_version", status_code=200, response_model=AppVersion)
async def get_version() -> AppVersion:
return AppVersion(version=__version__)
@app_router.get(
"/config", operation_id="get_config", status_code=200, response_model=AppConfig
)
@app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)
async def get_config() -> AppConfig:
infill_methods = ['tile']
infill_methods = ["tile"]
if PatchMatch.patchmatch_available():
infill_methods.append('patchmatch')
return AppConfig(infill_methods=infill_methods)
infill_methods.append("patchmatch")
upscaling_models = []
for model in typing.get_args(ESRGAN_MODELS):
upscaling_models.append(str(Path(model).stem))
upscaler = Upscaler(upscaling_method="esrgan", upscaling_models=upscaling_models)
nsfw_methods = []
if SafetyChecker.safety_checker_available():
nsfw_methods.append("nsfw_checker")
watermarking_methods = []
if InvisibleWatermark.invisible_watermark_available():
watermarking_methods.append("invisible_watermark")
return AppConfig(
infill_methods=infill_methods,
upscaling_methods=[upscaler],
nsfw_methods=nsfw_methods,
watermarking_methods=watermarking_methods,
)
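For context, `typing.get_args` extracts the literal filenames from the `ESRGAN_MODELS` type and `Path(...).stem` drops the extension; illustrated with example values (the real list lives in `invokeai.app.invocations.upscale`):
```python
# Sketch with example values, not necessarily the actual model list.
import typing
from pathlib import Path

ESRGAN_MODELS = typing.Literal["RealESRGAN_x4plus.pth", "RealESRGAN_x2plus.pth"]
print([Path(m).stem for m in typing.get_args(ESRGAN_MODELS)])
# ['RealESRGAN_x4plus', 'RealESRGAN_x2plus']
```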
@app_router.get(
"/logging",
operation_id="get_log_level",
responses={200: {"description" : "The operation was successful"}},
response_model = LogLevel,
responses={200: {"description": "The operation was successful"}},
response_model=LogLevel,
)
async def get_log_level(
) -> LogLevel:
async def get_log_level() -> LogLevel:
"""Returns the log level"""
return LogLevel(ApiDependencies.invoker.services.logger.level)
@app_router.post(
"/logging",
operation_id="set_log_level",
responses={200: {"description" : "The operation was successful"}},
response_model = LogLevel,
responses={200: {"description": "The operation was successful"}},
response_model=LogLevel,
)
async def set_log_level(
level: LogLevel = Body(description="New log verbosity level"),
level: LogLevel = Body(description="New log verbosity level"),
) -> LogLevel:
"""Sets the log verbosity level"""
ApiDependencies.invoker.services.logger.setLevel(level)
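A client can read and change the level over HTTP like so (a sketch using `requests`; the host and the `/api` mount prefix are assumptions about a default local install):
```python
# Sketch: query and set the server log level over HTTP.
import logging

import requests

base = "http://localhost:9090/api/v1/app"  # mount prefix is an assumption

print(requests.get(f"{base}/logging").json())          # e.g. 20 (INFO)
requests.post(f"{base}/logging", json=logging.DEBUG)   # body is the bare level
```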


@ -1,24 +1,30 @@
from fastapi import Body, HTTPException, Path, Query
from fastapi import Body, HTTPException
from fastapi.routing import APIRouter
from invokeai.app.services.board_record_storage import BoardRecord, BoardChanges
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.board_record import BoardDTO
from invokeai.app.services.models.image_record import ImageDTO
from pydantic import BaseModel, Field
from ..dependencies import ApiDependencies
board_images_router = APIRouter(prefix="/v1/board_images", tags=["boards"])
class AddImagesToBoardResult(BaseModel):
board_id: str = Field(description="The id of the board the images were added to")
added_image_names: list[str] = Field(description="The image names that were added to the board")
class RemoveImagesFromBoardResult(BaseModel):
removed_image_names: list[str] = Field(description="The image names that were removed from their board")
@board_images_router.post(
"/",
operation_id="create_board_image",
operation_id="add_image_to_board",
responses={
201: {"description": "The image was added to a board successfully"},
},
status_code=201,
)
async def create_board_image(
async def add_image_to_board(
board_id: str = Body(description="The id of the board to add to"),
image_name: str = Body(description="The name of the image to add"),
):
@ -29,27 +35,78 @@ async def create_board_image(
)
return result
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to add to board")
raise HTTPException(status_code=500, detail="Failed to add image to board")
@board_images_router.delete(
"/",
operation_id="remove_board_image",
operation_id="remove_image_from_board",
responses={
201: {"description": "The image was removed from the board successfully"},
},
status_code=201,
)
async def remove_board_image(
board_id: str = Body(description="The id of the board"),
image_name: str = Body(description="The name of the image to remove"),
async def remove_image_from_board(
image_name: str = Body(description="The name of the image to remove", embed=True),
):
"""Deletes a board_image"""
"""Removes an image from its board, if it had one"""
try:
result = ApiDependencies.invoker.services.board_images.remove_image_from_board(
board_id=board_id, image_name=image_name
)
result = ApiDependencies.invoker.services.board_images.remove_image_from_board(image_name=image_name)
return result
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to update board")
raise HTTPException(status_code=500, detail="Failed to remove image from board")
@board_images_router.post(
"/batch",
operation_id="add_images_to_board",
responses={
201: {"description": "Images were added to board successfully"},
},
status_code=201,
response_model=AddImagesToBoardResult,
)
async def add_images_to_board(
board_id: str = Body(description="The id of the board to add to"),
image_names: list[str] = Body(description="The names of the images to add", embed=True),
) -> AddImagesToBoardResult:
"""Adds a list of images to a board"""
try:
added_image_names: list[str] = []
for image_name in image_names:
try:
ApiDependencies.invoker.services.board_images.add_image_to_board(
board_id=board_id, image_name=image_name
)
added_image_names.append(image_name)
except:
pass
return AddImagesToBoardResult(board_id=board_id, added_image_names=added_image_names)
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to add images to board")
@board_images_router.post(
"/batch/delete",
operation_id="remove_images_from_board",
responses={
201: {"description": "Images were removed from board successfully"},
},
status_code=201,
response_model=RemoveImagesFromBoardResult,
)
async def remove_images_from_board(
image_names: list[str] = Body(description="The names of the images to remove", embed=True),
) -> RemoveImagesFromBoardResult:
"""Removes a list of images from their board, if they had one"""
try:
removed_image_names: list[str] = []
for image_name in image_names:
try:
ApiDependencies.invoker.services.board_images.remove_image_from_board(image_name=image_name)
removed_image_names.append(image_name)
except:
pass
return RemoveImagesFromBoardResult(removed_image_names=removed_image_names)
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to remove images from board")
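A client might drive the new batch endpoints like this (a sketch using `requests`; the host, `/api` prefix, board id, and image names are all examples):
```python
# Sketch: exercise the batch add/remove board-image endpoints.
import requests

base = "http://localhost:9090/api/v1/board_images"  # /api prefix assumed
names = ["0001.png", "0002.png"]                    # hypothetical image names

r = requests.post(
    f"{base}/batch",
    json={"board_id": "my-board", "image_names": names},
)
print(r.json()["added_image_names"])

r = requests.post(f"{base}/batch/delete", json={"image_names": names})
print(r.json()["removed_image_names"])
```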


@ -18,9 +18,7 @@ class DeleteBoardResult(BaseModel):
deleted_board_images: list[str] = Field(
description="The image names of the board-images relationships that were deleted."
)
deleted_images: list[str] = Field(
description="The names of the images that were deleted."
)
deleted_images: list[str] = Field(description="The names of the images that were deleted.")
@boards_router.post(
@ -73,22 +71,16 @@ async def update_board(
) -> BoardDTO:
"""Updates a board"""
try:
result = ApiDependencies.invoker.services.boards.update(
board_id=board_id, changes=changes
)
result = ApiDependencies.invoker.services.boards.update(board_id=board_id, changes=changes)
return result
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to update board")
@boards_router.delete(
"/{board_id}", operation_id="delete_board", response_model=DeleteBoardResult
)
@boards_router.delete("/{board_id}", operation_id="delete_board", response_model=DeleteBoardResult)
async def delete_board(
board_id: str = Path(description="The id of board to delete"),
include_images: Optional[bool] = Query(
description="Permanently delete all images on the board", default=False
),
include_images: Optional[bool] = Query(description="Permanently delete all images on the board", default=False),
) -> DeleteBoardResult:
"""Deletes a board"""
try:
@ -96,9 +88,7 @@ async def delete_board(
deleted_images = ApiDependencies.invoker.services.board_images.get_all_board_image_names_for_board(
board_id=board_id
)
ApiDependencies.invoker.services.images.delete_images_on_board(
board_id=board_id
)
ApiDependencies.invoker.services.images.delete_images_on_board(board_id=board_id)
ApiDependencies.invoker.services.boards.delete(board_id=board_id)
return DeleteBoardResult(
board_id=board_id,
@ -127,9 +117,7 @@ async def delete_board(
async def list_boards(
all: Optional[bool] = Query(default=None, description="Whether to list all boards"),
offset: Optional[int] = Query(default=None, description="The page offset"),
limit: Optional[int] = Query(
default=None, description="The number of boards per page"
),
limit: Optional[int] = Query(default=None, description="The number of boards per page"),
) -> Union[OffsetPaginatedResults[BoardDTO], list[BoardDTO]]:
"""Gets a list of boards"""
if all:


@ -1,31 +1,31 @@
import io
from typing import Optional
from PIL import Image
from fastapi import Body, HTTPException, Path, Query, Request, Response, UploadFile
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field
from invokeai.app.invocations.metadata import ImageMetadata
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.item_storage import PaginatedResults
from invokeai.app.services.models.image_record import (
ImageDTO,
ImageRecordChanges,
ImageUrlsDTO,
)
from ..dependencies import ApiDependencies
images_router = APIRouter(prefix="/v1/images", tags=["images"])
# images are immutable; set a high max-age
IMAGE_MAX_AGE = 31536000
@images_router.post(
"/",
"/upload",
operation_id="upload_image",
responses={
201: {"description": "The image was uploaded successfully"},
@ -40,15 +40,9 @@ async def upload_image(
response: Response,
image_category: ImageCategory = Query(description="The category of the image"),
is_intermediate: bool = Query(description="Whether this is an intermediate image"),
board_id: Optional[str] = Query(
default=None, description="The board to add this image to, if any"
),
session_id: Optional[str] = Query(
default=None, description="The session ID associated with this upload, if any"
),
crop_visible: Optional[bool] = Query(
default=False, description="Whether to crop the image"
),
board_id: Optional[str] = Query(default=None, description="The board to add this image to, if any"),
session_id: Optional[str] = Query(default=None, description="The session ID associated with this upload, if any"),
crop_visible: Optional[bool] = Query(default=False, description="Whether to crop the image"),
) -> ImageDTO:
"""Uploads an image"""
if not file.content_type.startswith("image"):
@ -83,7 +77,7 @@ async def upload_image(
raise HTTPException(status_code=500, detail="Failed to create image")
@images_router.delete("/{image_name}", operation_id="delete_image")
@images_router.delete("/i/{image_name}", operation_id="delete_image")
async def delete_image(
image_name: str = Path(description="The name of the image to delete"),
) -> None:
@ -109,15 +103,13 @@ async def clear_intermediates() -> int:
@images_router.patch(
"/{image_name}",
"/i/{image_name}",
operation_id="update_image",
response_model=ImageDTO,
)
async def update_image(
image_name: str = Path(description="The name of the image to update"),
image_changes: ImageRecordChanges = Body(
description="The changes to apply to the image"
),
image_changes: ImageRecordChanges = Body(description="The changes to apply to the image"),
) -> ImageDTO:
"""Updates an image"""
@ -128,7 +120,7 @@ async def update_image(
@images_router.get(
"/{image_name}",
"/i/{image_name}",
operation_id="get_image_dto",
response_model=ImageDTO,
)
@ -144,7 +136,7 @@ async def get_image_dto(
@images_router.get(
"/{image_name}/metadata",
"/i/{image_name}/metadata",
operation_id="get_image_metadata",
response_model=ImageMetadata,
)
@ -159,8 +151,9 @@ async def get_image_metadata(
raise HTTPException(status_code=404)
@images_router.get(
"/{image_name}/full",
@images_router.api_route(
"/i/{image_name}/full",
methods=["GET", "HEAD"],
operation_id="get_image_full",
response_class=Response,
responses={
@ -195,7 +188,7 @@ async def get_image_full(
@images_router.get(
"/{image_name}/thumbnail",
"/i/{image_name}/thumbnail",
operation_id="get_image_thumbnail",
response_class=Response,
responses={
@ -212,15 +205,11 @@ async def get_image_thumbnail(
"""Gets a thumbnail image file"""
try:
path = ApiDependencies.invoker.services.images.get_path(
image_name, thumbnail=True
)
path = ApiDependencies.invoker.services.images.get_path(image_name, thumbnail=True)
if not ApiDependencies.invoker.services.images.validate_path(path):
raise HTTPException(status_code=404)
response = FileResponse(
path, media_type="image/webp", content_disposition_type="inline"
)
response = FileResponse(path, media_type="image/webp", content_disposition_type="inline")
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception as e:
@ -228,7 +217,7 @@ async def get_image_thumbnail(
@images_router.get(
"/{image_name}/urls",
"/i/{image_name}/urls",
operation_id="get_image_urls",
response_model=ImageUrlsDTO,
)
@ -239,9 +228,7 @@ async def get_image_urls(
try:
image_url = ApiDependencies.invoker.services.images.get_url(image_name)
thumbnail_url = ApiDependencies.invoker.services.images.get_url(
image_name, thumbnail=True
)
thumbnail_url = ApiDependencies.invoker.services.images.get_url(image_name, thumbnail=True)
return ImageUrlsDTO(
image_name=image_name,
image_url=image_url,
@ -257,15 +244,9 @@ async def get_image_urls(
response_model=OffsetPaginatedResults[ImageDTO],
)
async def list_image_dtos(
image_origin: Optional[ResourceOrigin] = Query(
default=None, description="The origin of images to list."
),
categories: Optional[list[ImageCategory]] = Query(
default=None, description="The categories of image to include."
),
is_intermediate: Optional[bool] = Query(
default=None, description="Whether to list intermediate images."
),
image_origin: Optional[ResourceOrigin] = Query(default=None, description="The origin of images to list."),
categories: Optional[list[ImageCategory]] = Query(default=None, description="The categories of image to include."),
is_intermediate: Optional[bool] = Query(default=None, description="Whether to list intermediate images."),
board_id: Optional[str] = Query(
default=None,
description="The board id to filter by. Use 'none' to find images without a board.",
@ -285,3 +266,62 @@ async def list_image_dtos(
)
return image_dtos
class DeleteImagesFromListResult(BaseModel):
deleted_images: list[str]
@images_router.post("/delete", operation_id="delete_images_from_list", response_model=DeleteImagesFromListResult)
async def delete_images_from_list(
image_names: list[str] = Body(description="The list of names of images to delete", embed=True),
) -> DeleteImagesFromListResult:
try:
deleted_images: list[str] = []
for image_name in image_names:
try:
ApiDependencies.invoker.services.images.delete(image_name)
deleted_images.append(image_name)
except:
pass
return DeleteImagesFromListResult(deleted_images=deleted_images)
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to delete images")
class ImagesUpdatedFromListResult(BaseModel):
updated_image_names: list[str] = Field(description="The image names that were updated")
@images_router.post("/star", operation_id="star_images_in_list", response_model=ImagesUpdatedFromListResult)
async def star_images_in_list(
image_names: list[str] = Body(description="The list of names of images to star", embed=True),
) -> ImagesUpdatedFromListResult:
try:
updated_image_names: list[str] = []
for image_name in image_names:
try:
ApiDependencies.invoker.services.images.update(image_name, changes=ImageRecordChanges(starred=True))
updated_image_names.append(image_name)
except:
pass
return ImagesUpdatedFromListResult(updated_image_names=updated_image_names)
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to star images")
@images_router.post("/unstar", operation_id="unstar_images_in_list", response_model=ImagesUpdatedFromListResult)
async def unstar_images_in_list(
image_names: list[str] = Body(description="The list of names of images to unstar", embed=True),
) -> ImagesUpdatedFromListResult:
try:
updated_image_names: list[str] = []
for image_name in image_names:
try:
ApiDependencies.invoker.services.images.update(image_name, changes=ImageRecordChanges(starred=False))
updated_image_names.append(image_name)
except:
pass
return ImagesUpdatedFromListResult(updated_image_names=updated_image_names)
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to unstar images")
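The star/unstar/delete list endpoints all take the same embedded-body shape; a client sketch (again, the host, `/api` prefix, and image names are examples):
```python
# Sketch: star, unstar, then delete a list of images.
import requests

base = "http://localhost:9090/api/v1/images"  # /api prefix assumed
body = {"image_names": ["0001.png", "0002.png"]}

print(requests.post(f"{base}/star", json=body).json())    # updated_image_names
print(requests.post(f"{base}/unstar", json=body).json())  # updated_image_names
print(requests.post(f"{base}/delete", json=body).json())  # deleted_images
```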


@ -28,49 +28,52 @@ ConvertModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
MergeModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
ImportModelAttributes = Union[tuple(OPENAPI_MODEL_CONFIGS)]
class ModelsList(BaseModel):
models: list[Union[tuple(OPENAPI_MODEL_CONFIGS)]]
@models_router.get(
"/",
operation_id="list_models",
responses={200: {"model": ModelsList }},
responses={200: {"model": ModelsList}},
)
async def list_models(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
) -> ModelsList:
"""Gets a list of models"""
if base_models and len(base_models)>0:
if base_models and len(base_models) > 0:
models_raw = list()
for base_model in base_models:
models_raw.extend(ApiDependencies.invoker.services.model_manager.list_models(base_model, model_type))
else:
models_raw = ApiDependencies.invoker.services.model_manager.list_models(None, model_type)
models = parse_obj_as(ModelsList, { "models": models_raw })
models = parse_obj_as(ModelsList, {"models": models_raw})
return models
@models_router.patch(
"/{base_model}/{model_type}/{model_name}",
operation_id="update_model",
responses={200: {"description" : "The model was updated successfully"},
400: {"description" : "Bad request"},
404: {"description" : "The model could not be found"},
409: {"description" : "There is already a model corresponding to the new name"},
},
status_code = 200,
response_model = UpdateModelResponse,
responses={
200: {"description": "The model was updated successfully"},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
409: {"description": "There is already a model corresponding to the new name"},
},
status_code=200,
response_model=UpdateModelResponse,
)
async def update_model(
base_model: BaseModelType = Path(description="Base model"),
model_type: ModelType = Path(description="The type of model"),
model_name: str = Path(description="model name"),
info: Union[tuple(OPENAPI_MODEL_CONFIGS)] = Body(description="Model configuration"),
base_model: BaseModelType = Path(description="Base model"),
model_type: ModelType = Path(description="The type of model"),
model_name: str = Path(description="model name"),
info: Union[tuple(OPENAPI_MODEL_CONFIGS)] = Body(description="Model configuration"),
) -> UpdateModelResponse:
""" Update model contents with a new config. If the model name or base fields are changed, then the model is renamed. """
"""Update model contents with a new config. If the model name or base fields are changed, then the model is renamed."""
logger = ApiDependencies.invoker.services.logger
try:
previous_info = ApiDependencies.invoker.services.model_manager.list_model(
model_name=model_name,
@ -81,13 +84,13 @@ async def update_model(
# rename operation requested
if info.model_name != model_name or info.base_model != base_model:
ApiDependencies.invoker.services.model_manager.rename_model(
base_model = base_model,
model_type = model_type,
model_name = model_name,
new_name = info.model_name,
new_base = info.base_model,
base_model=base_model,
model_type=model_type,
model_name=model_name,
new_name=info.model_name,
new_base=info.base_model,
)
logger.info(f'Successfully renamed {base_model}/{model_name}=>{info.base_model}/{info.model_name}')
logger.info(f"Successfully renamed {base_model.value}/{model_name}=>{info.base_model}/{info.model_name}")
# update information to support an update of attributes
model_name = info.model_name
base_model = info.base_model
@ -96,16 +99,19 @@ async def update_model(
base_model=base_model,
model_type=model_type,
)
if new_info.get('path') != previous_info.get('path'): # model manager moved model path during rename - don't overwrite it
info.path = new_info.get('path')
if new_info.get("path") != previous_info.get(
"path"
): # model manager moved model path during rename - don't overwrite it
info.path = new_info.get("path")
# replace empty string values with None/null to avoid phenomenon of vae: ''
info_dict = info.dict()
info_dict = {x: info_dict[x] if info_dict[x] else None for x in info_dict.keys()}
ApiDependencies.invoker.services.model_manager.update_model(
model_name=model_name,
base_model=base_model,
model_type=model_type,
model_attributes=info.dict()
model_name=model_name, base_model=base_model, model_type=model_type, model_attributes=info_dict
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=model_name,
base_model=base_model,
@ -123,49 +129,48 @@ async def update_model(
return model_response
@models_router.post(
"/import",
operation_id="import_model",
responses= {
201: {"description" : "The model imported successfully"},
404: {"description" : "The model could not be found"},
415: {"description" : "Unrecognized file/folder format"},
424: {"description" : "The model appeared to import successfully, but could not be found in the model manager"},
409: {"description" : "There is already a model corresponding to this path or repo_id"},
responses={
201: {"description": "The model imported successfully"},
404: {"description": "The model could not be found"},
415: {"description": "Unrecognized file/folder format"},
424: {"description": "The model appeared to import successfully, but could not be found in the model manager"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
},
status_code=201,
response_model=ImportModelResponse
response_model=ImportModelResponse,
)
async def import_model(
location: str = Body(description="A model path, repo_id or URL to import"),
prediction_type: Optional[Literal['v_prediction','epsilon','sample']] = \
Body(description='Prediction type for SDv2 checkpoint files', default="v_prediction"),
location: str = Body(description="A model path, repo_id or URL to import"),
prediction_type: Optional[Literal["v_prediction", "epsilon", "sample"]] = Body(
description="Prediction type for SDv2 checkpoint files", default="v_prediction"
),
) -> ImportModelResponse:
""" Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically """
"""Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically"""
items_to_import = {location}
prediction_types = { x.value: x for x in SchedulerPredictionType }
prediction_types = {x.value: x for x in SchedulerPredictionType}
logger = ApiDependencies.invoker.services.logger
try:
installed_models = ApiDependencies.invoker.services.model_manager.heuristic_import(
items_to_import = items_to_import,
prediction_type_helper = lambda x: prediction_types.get(prediction_type)
items_to_import=items_to_import, prediction_type_helper=lambda x: prediction_types.get(prediction_type)
)
info = installed_models.get(location)
if not info:
logger.error("Import failed")
raise HTTPException(status_code=415)
logger.info(f'Successfully imported {location}, got {info}')
logger.info(f"Successfully imported {location}, got {info}")
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=info.name,
base_model=info.base_model,
model_type=info.model_type
model_name=info.name, base_model=info.base_model, model_type=info.model_type
)
return parse_obj_as(ImportModelResponse, model_raw)
except ModelNotFoundException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
@ -175,38 +180,34 @@ async def import_model(
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
@models_router.post(
"/add",
operation_id="add_model",
responses= {
201: {"description" : "The model added successfully"},
404: {"description" : "The model could not be found"},
424: {"description" : "The model appeared to add successfully, but could not be found in the model manager"},
409: {"description" : "There is already a model corresponding to this path or repo_id"},
responses={
201: {"description": "The model added successfully"},
404: {"description": "The model could not be found"},
424: {"description": "The model appeared to add successfully, but could not be found in the model manager"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
},
status_code=201,
response_model=ImportModelResponse
response_model=ImportModelResponse,
)
async def add_model(
info: Union[tuple(OPENAPI_MODEL_CONFIGS)] = Body(description="Model configuration"),
info: Union[tuple(OPENAPI_MODEL_CONFIGS)] = Body(description="Model configuration"),
) -> ImportModelResponse:
""" Add a model using the configuration information appropriate for its type. Only local models can be added by path"""
"""Add a model using the configuration information appropriate for its type. Only local models can be added by path"""
logger = ApiDependencies.invoker.services.logger
try:
ApiDependencies.invoker.services.model_manager.add_model(
info.model_name,
info.base_model,
info.model_type,
model_attributes = info.dict()
info.model_name, info.base_model, info.model_type, model_attributes=info.dict()
)
logger.info(f'Successfully added {info.model_name}')
logger.info(f"Successfully added {info.model_name}")
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=info.model_name,
base_model=info.base_model,
model_type=info.model_type
model_name=info.model_name, base_model=info.base_model, model_type=info.model_type
)
return parse_obj_as(ImportModelResponse, model_raw)
except ModelNotFoundException as e:
@ -216,66 +217,66 @@ async def add_model(
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
@models_router.delete(
"/{base_model}/{model_type}/{model_name}",
operation_id="del_model",
responses={
204: { "description": "Model deleted successfully" },
404: { "description": "Model not found" }
},
status_code = 204,
response_model = None,
responses={204: {"description": "Model deleted successfully"}, 404: {"description": "Model not found"}},
status_code=204,
response_model=None,
)
async def delete_model(
base_model: BaseModelType = Path(description="Base model"),
model_type: ModelType = Path(description="The type of model"),
model_name: str = Path(description="model name"),
base_model: BaseModelType = Path(description="Base model"),
model_type: ModelType = Path(description="The type of model"),
model_name: str = Path(description="model name"),
) -> Response:
"""Delete Model"""
logger = ApiDependencies.invoker.services.logger
try:
ApiDependencies.invoker.services.model_manager.del_model(model_name,
base_model = base_model,
model_type = model_type
)
ApiDependencies.invoker.services.model_manager.del_model(
model_name, base_model=base_model, model_type=model_type
)
logger.info(f"Deleted model: {model_name}")
return Response(status_code=204)
except ModelNotFoundException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
@models_router.put(
"/convert/{base_model}/{model_type}/{model_name}",
operation_id="convert_model",
responses={
200: { "description": "Model converted successfully" },
400: {"description" : "Bad request" },
404: { "description": "Model not found" },
200: {"description": "Model converted successfully"},
400: {"description": "Bad request"},
404: {"description": "Model not found"},
},
status_code = 200,
response_model = ConvertModelResponse,
status_code=200,
response_model=ConvertModelResponse,
)
async def convert_model(
base_model: BaseModelType = Path(description="Base model"),
model_type: ModelType = Path(description="The type of model"),
model_name: str = Path(description="model name"),
convert_dest_directory: Optional[str] = Query(default=None, description="Save the converted model to the designated directory"),
base_model: BaseModelType = Path(description="Base model"),
model_type: ModelType = Path(description="The type of model"),
model_name: str = Path(description="model name"),
convert_dest_directory: Optional[str] = Query(
default=None, description="Save the converted model to the designated directory"
),
) -> ConvertModelResponse:
"""Convert a checkpoint model into a diffusers model, optionally saving to the indicated destination directory, or `models` if none."""
logger = ApiDependencies.invoker.services.logger
try:
logger.info(f"Converting model: {model_name}")
dest = pathlib.Path(convert_dest_directory) if convert_dest_directory else None
ApiDependencies.invoker.services.model_manager.convert_model(model_name,
base_model = base_model,
model_type = model_type,
convert_dest_directory = dest,
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(model_name,
base_model = base_model,
model_type = model_type)
ApiDependencies.invoker.services.model_manager.convert_model(
model_name,
base_model=base_model,
model_type=model_type,
convert_dest_directory=dest,
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name, base_model=base_model, model_type=model_type
)
response = parse_obj_as(ConvertModelResponse, model_raw)
except ModelNotFoundException as e:
raise HTTPException(status_code=404, detail=f"Model '{model_name}' not found: {str(e)}")
@ -283,91 +284,101 @@ async def convert_model(
raise HTTPException(status_code=400, detail=str(e))
return response
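The conversion route keys the model by base, type, and name in the path and takes the optional destination as a query parameter. A sketch under the same mounting assumptions as the import example (the enum value, model name, and path are illustrative):

```python
import requests

resp = requests.put(
    "http://localhost:9090/api/v1/models/convert/sd-1/main/my-checkpoint",
    params={"convert_dest_directory": "/opt/invokeai/converted"},
)
resp.raise_for_status()  # 200 returns the ConvertModelResponse; 404 if the model is unknown
```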
@models_router.get(
"/search",
operation_id="search_for_models",
responses={
200: { "description": "Directory searched successfully" },
404: { "description": "Invalid directory path" },
200: {"description": "Directory searched successfully"},
404: {"description": "Invalid directory path"},
},
status_code = 200,
response_model = List[pathlib.Path]
status_code=200,
response_model=List[pathlib.Path],
)
async def search_for_models(
search_path: pathlib.Path = Query(description="Directory path to search for models")
)->List[pathlib.Path]:
search_path: pathlib.Path = Query(description="Directory path to search for models"),
) -> List[pathlib.Path]:
if not search_path.is_dir():
raise HTTPException(status_code=404, detail=f"The search path '{search_path}' does not exist or is not a directory")
return ApiDependencies.invoker.services.model_manager.search_for_models([search_path])
raise HTTPException(
status_code=404, detail=f"The search path '{search_path}' does not exist or is not a directory"
)
return ApiDependencies.invoker.services.model_manager.search_for_models(search_path)
@models_router.get(
"/ckpt_confs",
operation_id="list_ckpt_configs",
responses={
200: { "description" : "paths retrieved successfully" },
200: {"description": "paths retrieved successfully"},
},
status_code = 200,
response_model = List[pathlib.Path]
status_code=200,
response_model=List[pathlib.Path],
)
async def list_ckpt_configs(
)->List[pathlib.Path]:
async def list_ckpt_configs() -> List[pathlib.Path]:
"""Return a list of the legacy checkpoint configuration files stored in `ROOT/configs/stable-diffusion`, relative to ROOT."""
return ApiDependencies.invoker.services.model_manager.list_checkpoint_configs()
@models_router.post(
"/sync",
operation_id="sync_to_config",
responses={
201: { "description": "synchronization successful" },
201: {"description": "synchronization successful"},
},
status_code = 201,
response_model = bool
status_code=201,
response_model=bool,
)
async def sync_to_config(
)->bool:
async def sync_to_config() -> bool:
"""Call after making changes to models.yaml, autoimport directories or models directory to synchronize
in-memory data structures with disk data structures."""
ApiDependencies.invoker.services.model_manager.sync_to_config()
return True
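Because the endpoint takes no parameters, synchronizing after hand-editing models.yaml is just an empty POST; a sketch, under the same mount-point assumption as above:

```python
import requests

# Ask the server to re-scan models.yaml, the autoimport directories, and the models directory.
ok = requests.post("http://localhost:9090/api/v1/models/sync").json()
assert ok is True
```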
@models_router.put(
"/merge/{base_model}",
operation_id="merge_models",
responses={
200: { "description": "Model converted successfully" },
400: { "description": "Incompatible models" },
404: { "description": "One or more models not found" },
200: {"description": "Model converted successfully"},
400: {"description": "Incompatible models"},
404: {"description": "One or more models not found"},
},
status_code = 200,
response_model = MergeModelResponse,
status_code=200,
response_model=MergeModelResponse,
)
async def merge_models(
base_model: BaseModelType = Path(description="Base model"),
model_names: List[str] = Body(description="model name", min_items=2, max_items=3),
merged_model_name: Optional[str] = Body(description="Name of destination model"),
alpha: Optional[float] = Body(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5),
interp: Optional[MergeInterpolationMethod] = Body(description="Interpolation method"),
force: Optional[bool] = Body(description="Force merging of models created with different versions of diffusers", default=False),
merge_dest_directory: Optional[str] = Body(description="Save the merged model to the designated directory (with 'merged_model_name' appended)", default=None)
base_model: BaseModelType = Path(description="Base model"),
model_names: List[str] = Body(description="model name", min_items=2, max_items=3),
merged_model_name: Optional[str] = Body(description="Name of destination model"),
alpha: Optional[float] = Body(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5),
interp: Optional[MergeInterpolationMethod] = Body(description="Interpolation method"),
force: Optional[bool] = Body(
description="Force merging of models created with different versions of diffusers", default=False
),
merge_dest_directory: Optional[str] = Body(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
),
) -> MergeModelResponse:
"""Convert a checkpoint model into a diffusers model"""
logger = ApiDependencies.invoker.services.logger
try:
logger.info(f"Merging models: {model_names} into {merge_dest_directory or '<MODELS>'}/{merged_model_name}")
dest = pathlib.Path(merge_dest_directory) if merge_dest_directory else None
result = ApiDependencies.invoker.services.model_manager.merge_models(model_names,
base_model,
merged_model_name=merged_model_name or "+".join(model_names),
alpha=alpha,
interp=interp,
force=force,
merge_dest_directory = dest
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(result.name,
base_model = base_model,
model_type = ModelType.Main,
)
result = ApiDependencies.invoker.services.model_manager.merge_models(
model_names,
base_model,
merged_model_name=merged_model_name or "+".join(model_names),
alpha=alpha,
interp=interp,
force=force,
merge_dest_directory=dest,
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
result.name,
base_model=base_model,
model_type=ModelType.Main,
)
response = parse_obj_as(ConvertModelResponse, model_raw)
except ModelNotFoundException:
raise HTTPException(status_code=404, detail=f"One or more of the models '{model_names}' not found")
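Merging puts the base model in the path and everything else in the body; `model_names` must contain two or three entries. A sketch (the model names, paths, and interpolation value are illustrative):

```python
import requests

resp = requests.put(
    "http://localhost:9090/api/v1/models/merge/sd-1",
    json={
        "model_names": ["model-a", "model-b"],
        "merged_model_name": "model-a+model-b",
        "alpha": 0.5,              # weighting applied between the models
        "interp": "weighted_sum",  # one of the MergeInterpolationMethod values
        "force": False,
    },
)
resp.raise_for_status()
```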


@ -30,9 +30,7 @@ session_router = APIRouter(prefix="/v1/sessions", tags=["sessions"])
},
)
async def create_session(
graph: Optional[Graph] = Body(
default=None, description="The graph to initialize the session with"
)
graph: Optional[Graph] = Body(default=None, description="The graph to initialize the session with")
) -> GraphExecutionState:
"""Creates a new session, optionally initializing it with an invocation graph"""
session = ApiDependencies.invoker.create_execution_state(graph)
@ -42,7 +40,7 @@ async def create_session(
@session_router.get(
"/",
operation_id="list_sessions",
responses={200: {"model": PaginatedResults[GraphExecutionState]}},
responses={200: {"model": PaginatedResults[dict]}},
)
async def list_sessions(
page: int = Query(default=0, description="The page of results to get"),
@ -51,13 +49,9 @@ async def list_sessions(
) -> PaginatedResults[GraphExecutionState]:
"""Gets a list of sessions, optionally searching"""
if query == "":
result = ApiDependencies.invoker.services.graph_execution_manager.list(
page, per_page
)
result = ApiDependencies.invoker.services.graph_execution_manager.list(page, per_page)
else:
result = ApiDependencies.invoker.services.graph_execution_manager.search(
query, page, per_page
)
result = ApiDependencies.invoker.services.graph_execution_manager.search(query, page, per_page)
return result
@ -91,9 +85,9 @@ async def get_session(
)
async def add_node(
session_id: str = Path(description="The id of the session"),
node: Annotated[
Union[BaseInvocation.get_invocations()], Field(discriminator="type") # type: ignore
] = Body(description="The node to add"),
node: Annotated[Union[BaseInvocation.get_invocations()], Field(discriminator="type")] = Body( # type: ignore
description="The node to add"
),
) -> str:
"""Adds a node to the graph"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
@ -124,9 +118,9 @@ async def add_node(
async def update_node(
session_id: str = Path(description="The id of the session"),
node_path: str = Path(description="The path to the node in the graph"),
node: Annotated[
Union[BaseInvocation.get_invocations()], Field(discriminator="type") # type: ignore
] = Body(description="The new node"),
node: Annotated[Union[BaseInvocation.get_invocations()], Field(discriminator="type")] = Body( # type: ignore
description="The new node"
),
) -> GraphExecutionState:
"""Updates a node in the graph and removes all linked edges"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
@ -230,7 +224,7 @@ async def delete_edge(
try:
edge = Edge(
source=EdgeConnection(node_id=from_node_id, field=from_field),
destination=EdgeConnection(node_id=to_node_id, field=to_field)
destination=EdgeConnection(node_id=to_node_id, field=to_field),
)
session.delete_edge(edge)
ApiDependencies.invoker.services.graph_execution_manager.set(
@ -255,9 +249,7 @@ async def delete_edge(
)
async def invoke_session(
session_id: str = Path(description="The id of the session to invoke"),
all: bool = Query(
default=False, description="Whether or not to invoke all remaining invocations"
),
all: bool = Query(default=False, description="Whether or not to invoke all remaining invocations"),
) -> Response:
"""Invokes a session"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
@ -274,9 +266,7 @@ async def invoke_session(
@session_router.delete(
"/{session_id}/invoke",
operation_id="cancel_session_invoke",
responses={
202: {"description": "The invocation is canceled"}
},
responses={202: {"description": "The invocation is canceled"}},
)
async def cancel_session_invoke(
session_id: str = Path(description="The id of the session to cancel"),
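Taken together, the session routes form a small lifecycle: create a session (optionally seeded with a graph), mutate its nodes and edges, invoke it, and cancel with a DELETE if needed. A sketch, assuming the `/api/v1/sessions` mount point and an invoke route served via PUT:

```python
import requests

base = "http://localhost:9090/api/v1/sessions"  # assumed mount point

session = requests.post(base + "/").json()  # no body -> a new, empty session
sid = session["id"]

# ... POST nodes to f"{base}/{sid}/nodes" and edges to f"{base}/{sid}/edges" ...

requests.put(f"{base}/{sid}/invoke", params={"all": True})  # run all remaining invocations
# requests.delete(f"{base}/{sid}/invoke")                   # cancel a running invocation
```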


@ -16,9 +16,7 @@ class SocketIO:
self.__sio.on("subscribe", handler=self._handle_sub)
self.__sio.on("unsubscribe", handler=self._handle_unsub)
local_handler.register(
event_name=EventServiceBase.session_event, _func=self._handle_session_event
)
local_handler.register(event_name=EventServiceBase.session_event, _func=self._handle_session_event)
async def _handle_session_event(self, event: Event):
await self.__sio.emit(


@ -3,6 +3,7 @@ import asyncio
import sys
from inspect import signature
import logging
import uvicorn
import socket
@ -16,9 +17,10 @@ from fastapi_events.middleware import EventHandlerASGIMiddleware
from pathlib import Path
from pydantic.schema import schema
#This should come early so that modules can log their initialization properly
# This should come early so that modules can log their initialization properly
from .services.config import InvokeAIAppConfig
from ..backend.util.logging import InvokeAILogger
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
logger = InvokeAILogger.getLogger(config=app_config)
@ -27,7 +29,7 @@ from invokeai.version.invokeai_version import __version__
# we call this early so that the message appears before
# other invokeai initialization messages
if app_config.version:
print(f'InvokeAI version {__version__}')
print(f"InvokeAI version {__version__}")
sys.exit(0)
import invokeai.frontend.web as web_dir
@ -36,18 +38,19 @@ import mimetypes
from .api.dependencies import ApiDependencies
from .api.routers import sessions, models, images, boards, board_images, app_info
from .api.sockets import SocketIO
from .invocations.baseinvocation import BaseInvocation
from .invocations.baseinvocation import BaseInvocation, _InputField, _OutputField, UIConfigBase
import torch
import invokeai.backend.util.hotfixes
if torch.backends.mps.is_available():
import invokeai.backend.util.mps_fixes
# fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
mimetypes.add_type('application/javascript', '.js')
mimetypes.add_type('text/css', '.css')
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
# Create the app
# TODO: create this all in a method so configuration/etc. can be passed in?
@ -57,14 +60,13 @@ app = FastAPI(title="Invoke AI", docs_url=None, redoc_url=None)
event_handler_id: int = id(app)
app.add_middleware(
EventHandlerASGIMiddleware,
handlers=[
local_handler
], # TODO: consider doing this in services to support different configurations
handlers=[local_handler], # TODO: consider doing this in services to support different configurations
middleware_id=event_handler_id,
)
socket_io = SocketIO(app)
# Add startup event to load dependencies
@app.on_event("startup")
async def startup_event():
@ -76,9 +78,7 @@ async def startup_event():
allow_headers=app_config.allow_headers,
)
ApiDependencies.initialize(
config=app_config, event_handler_id=event_handler_id, logger=logger
)
ApiDependencies.initialize(config=app_config, event_handler_id=event_handler_id, logger=logger)
# Shut down threads
@ -103,7 +103,8 @@ app.include_router(boards.boards_router, prefix="/api")
app.include_router(board_images.board_images_router, prefix="/api")
app.include_router(app_info.app_router, prefix='/api')
app.include_router(app_info.app_router, prefix="/api")
# Build a custom OpenAPI to include all outputs
# TODO: can outputs be included on metadata of invocation schemas somehow?
@ -133,6 +134,11 @@ def custom_openapi():
# This could break in some cases, figure out a better way to do it
output_type_titles[schema_key] = output_schema["title"]
# Add Node Editor UI helper schemas
ui_config_schemas = schema([UIConfigBase, _InputField, _OutputField], ref_prefix="#/components/schemas/")
for schema_key, output_schema in ui_config_schemas["definitions"].items():
openapi_schema["components"]["schemas"][schema_key] = output_schema
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
invoker_name = invoker.__name__
@ -144,6 +150,7 @@ def custom_openapi():
invoker_schema["output"] = outputs_ref
from invokeai.backend.model_management.models import get_model_config_enums
for model_config_format_enum in set(get_model_config_enums()):
name = model_config_format_enum.__qualname__
@ -166,7 +173,8 @@ def custom_openapi():
app.openapi = custom_openapi
# Override API doc favicons
app.mount("/static", StaticFiles(directory=Path(web_dir.__path__[0], 'static/dream_web')), name="static")
app.mount("/static", StaticFiles(directory=Path(web_dir.__path__[0], "static/dream_web")), name="static")
@app.get("/docs", include_in_schema=False)
def overridden_swagger():
@ -187,11 +195,8 @@ def overridden_redoc():
# Must mount *after* the other routes else it borks em
app.mount("/",
StaticFiles(directory=Path(web_dir.__path__[0],"dist"),
html=True
), name="ui"
)
app.mount("/", StaticFiles(directory=Path(web_dir.__path__[0], "dist"), html=True), name="ui")
def invoke_api():
def find_port(port: int):
@ -204,15 +209,34 @@ def invoke_api():
else:
return port
from invokeai.backend.install.check_root import check_invokeai_root
check_invokeai_root(app_config) # note, may exit with an exception if root not set up
port = find_port(app_config.port)
if port != app_config.port:
logger.warn(f"Port {app_config.port} in use, using port {port}")
# Start our own event loop for eventing usage
loop = asyncio.new_event_loop()
config = uvicorn.Config(app=app, host=app_config.host, port=port, loop=loop)
# uvicorn's access_log option can be passed here to disable request logging
config = uvicorn.Config(
app=app,
host=app_config.host,
port=port,
loop=loop,
log_level=app_config.log_level,
)
server = uvicorn.Server(config)
# replace uvicorn's loggers with InvokeAI's for consistent appearance
for logname in ["uvicorn.access", "uvicorn"]:
l = logging.getLogger(logname)
l.handlers.clear()
for ch in logger.handlers:
l.addHandler(ch)
loop.run_until_complete(server.serve())
if __name__ == "__main__":
invoke_api()

[binary image file changed; 33 KiB before and after]

@ -14,8 +14,14 @@ from ..services.graph import GraphExecutionState, LibraryGraph, Edge
from ..services.invoker import Invoker
def add_field_argument(command_parser, name: str, field, default_override = None):
default = default_override if default_override is not None else field.default if field.default_factory is None else field.default_factory()
def add_field_argument(command_parser, name: str, field, default_override=None):
default = (
default_override
if default_override is not None
else field.default
if field.default_factory is None
else field.default_factory()
)
if get_origin(field.type_) == Literal:
allowed_values = get_args(field.type_)
allowed_types = set()
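Black's layout of the chained conditional above is easy to misread. Its precedence is: an explicit override wins; otherwise the field's static default is used unless a default factory exists, in which case the factory is called. An equivalent spelled-out sketch:

```python
def resolve_default(default_override, field):
    # Same logic as the chained conditional in add_field_argument.
    if default_override is not None:
        return default_override
    if field.default_factory is None:
        return field.default
    return field.default_factory()
```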
@ -47,8 +53,8 @@ def add_parsers(
commands: list[type],
command_field: str = "type",
exclude_fields: list[str] = ["id", "type"],
add_arguments: Union[Callable[[argparse.ArgumentParser], None],None] = None
):
add_arguments: Union[Callable[[argparse.ArgumentParser], None], None] = None,
):
"""Adds parsers for each command to the subparsers"""
# Create subparsers for each command
@ -61,7 +67,7 @@ def add_parsers(
add_arguments(command_parser)
# Convert all fields to arguments
fields = command.__fields__ # type: ignore
fields = command.__fields__ # type: ignore
for name, field in fields.items():
if name in exclude_fields:
continue
@ -70,13 +76,11 @@ def add_parsers(
def add_graph_parsers(
subparsers,
graphs: list[LibraryGraph],
add_arguments: Union[Callable[[argparse.ArgumentParser], None], None] = None
subparsers, graphs: list[LibraryGraph], add_arguments: Union[Callable[[argparse.ArgumentParser], None], None] = None
):
for graph in graphs:
command_parser = subparsers.add_parser(graph.name, help=graph.description)
if add_arguments is not None:
add_arguments(command_parser)
@ -128,6 +132,7 @@ class CliContext:
class ExitCli(Exception):
"""Exception to exit the CLI"""
pass
@ -155,7 +160,7 @@ class BaseCommand(ABC, BaseModel):
@classmethod
def get_commands_map(cls):
# Get the type strings out of the literals and into a dictionary
return dict(map(lambda t: (get_args(get_type_hints(t)['type'])[0], t),BaseCommand.get_all_subclasses()))
return dict(map(lambda t: (get_args(get_type_hints(t)["type"])[0], t), BaseCommand.get_all_subclasses()))
@abstractmethod
def run(self, context: CliContext) -> None:
@ -165,7 +170,8 @@ class BaseCommand(ABC, BaseModel):
class ExitCommand(BaseCommand):
"""Exits the CLI"""
type: Literal['exit'] = 'exit'
type: Literal["exit"] = "exit"
def run(self, context: CliContext) -> None:
raise ExitCli()
@ -173,7 +179,8 @@ class ExitCommand(BaseCommand):
class HelpCommand(BaseCommand):
"""Shows help"""
type: Literal['help'] = 'help'
type: Literal["help"] = "help"
def run(self, context: CliContext) -> None:
context.parser.print_help()
@ -183,11 +190,7 @@ def get_graph_execution_history(
graph_execution_state: GraphExecutionState,
) -> Iterable[str]:
"""Gets the history of fully-executed invocations for a graph execution"""
return (
n
for n in reversed(graph_execution_state.executed_history)
if n in graph_execution_state.graph.nodes
)
return (n for n in reversed(graph_execution_state.executed_history) if n in graph_execution_state.graph.nodes)
def get_invocation_command(invocation) -> str:
@ -218,7 +221,8 @@ def get_invocation_command(invocation) -> str:
class HistoryCommand(BaseCommand):
"""Shows the invocation history"""
type: Literal['history'] = 'history'
type: Literal["history"] = "history"
# Inputs
# fmt: off
@ -235,7 +239,8 @@ class HistoryCommand(BaseCommand):
class SetDefaultCommand(BaseCommand):
"""Sets a default value for a field"""
type: Literal['default'] = 'default'
type: Literal["default"] = "default"
# Inputs
# fmt: off
@ -253,7 +258,8 @@ class SetDefaultCommand(BaseCommand):
class DrawGraphCommand(BaseCommand):
"""Debugs a graph"""
type: Literal['draw_graph'] = 'draw_graph'
type: Literal["draw_graph"] = "draw_graph"
def run(self, context: CliContext) -> None:
session: GraphExecutionState = context.invoker.services.graph_execution_manager.get(context.session.id)
@ -271,7 +277,8 @@ class DrawGraphCommand(BaseCommand):
class DrawExecutionGraphCommand(BaseCommand):
"""Debugs an execution graph"""
type: Literal['draw_xgraph'] = 'draw_xgraph'
type: Literal["draw_xgraph"] = "draw_xgraph"
def run(self, context: CliContext) -> None:
session: GraphExecutionState = context.invoker.services.graph_execution_manager.get(context.session.id)
@ -286,6 +293,7 @@ class DrawExecutionGraphCommand(BaseCommand):
plt.axis("off")
plt.show()
class SortedHelpFormatter(argparse.HelpFormatter):
def _iter_indented_subactions(self, action):
try:


@ -19,8 +19,8 @@ from ..services.invocation_services import InvocationServices
# singleton object, class variable
completer = None
class Completer(object):
def __init__(self, model_manager: ModelManager):
self.commands = self.get_commands()
self.matches = None
@ -43,7 +43,7 @@ class Completer(object):
except IndexError:
pass
options = options or list(self.parse_commands().keys())
if not text: # first time
self.matches = options
else:
@ -56,17 +56,17 @@ class Completer(object):
return match
@classmethod
def get_commands(self)->List[object]:
def get_commands(self) -> List[object]:
"""
Return a list of all the client commands and invocations.
"""
return BaseCommand.get_commands() + BaseInvocation.get_invocations()
def get_current_command(self, buffer: str)->tuple[str, str]:
def get_current_command(self, buffer: str) -> tuple[str, str]:
"""
Parse the readline buffer to find the most recent command and its switch.
"""
if len(buffer)==0:
if len(buffer) == 0:
return None, None
tokens = shlex.split(buffer)
command = None
@ -78,11 +78,11 @@ class Completer(object):
else:
switch = t
# don't try to autocomplete switches that are already complete
if switch and buffer.endswith(' '):
switch=None
return command or '', switch or ''
if switch and buffer.endswith(" "):
switch = None
return command or "", switch or ""
def parse_commands(self)->Dict[str, List[str]]:
def parse_commands(self) -> Dict[str, List[str]]:
"""
Return a dict in which the keys are the command names
and the values are the parameters each command takes.
@ -90,11 +90,11 @@ class Completer(object):
result = dict()
for command in self.commands:
hints = get_type_hints(command)
name = get_args(hints['type'])[0]
result.update({name:hints})
name = get_args(hints["type"])[0]
result.update({name: hints})
return result
def get_command_options(self, command: str, switch: str)->List[str]:
def get_command_options(self, command: str, switch: str) -> List[str]:
"""
Return all the parameters that can be passed to the command as
command-line switches. Returns None if the command is unrecognized.
@ -102,42 +102,46 @@ class Completer(object):
parsed_commands = self.parse_commands()
if command not in parsed_commands:
return None
# handle switches in the format "-foo=bar"
argument = None
if switch and '=' in switch:
switch, argument = switch.split('=')
parameter = switch.strip('-')
if switch and "=" in switch:
switch, argument = switch.split("=")
parameter = switch.strip("-")
if parameter in parsed_commands[command]:
if argument is None:
return self.get_parameter_options(parameter, parsed_commands[command][parameter])
else:
return [f"--{parameter}={x}" for x in self.get_parameter_options(parameter, parsed_commands[command][parameter])]
return [
f"--{parameter}={x}"
for x in self.get_parameter_options(parameter, parsed_commands[command][parameter])
]
else:
return [f"--{x}" for x in parsed_commands[command].keys()]
def get_parameter_options(self, parameter: str, typehint)->List[str]:
def get_parameter_options(self, parameter: str, typehint) -> List[str]:
"""
Given a parameter type (such as Literal), offers autocompletions.
"""
if get_origin(typehint) == Literal:
return get_args(typehint)
if parameter == 'model':
if parameter == "model":
return self.manager.model_names()
def _pre_input_hook(self):
if self.linebuffer:
readline.insert_text(self.linebuffer)
readline.redisplay()
self.linebuffer = None
def set_autocompleter(services: InvocationServices) -> Completer:
global completer
if completer:
return completer
completer = Completer(services.model_manager)
readline.set_completer(completer.complete)
@ -162,8 +166,6 @@ def set_autocompleter(services: InvocationServices) -> Completer:
pass
except OSError: # file likely corrupted
newname = f"{histfile}.old"
logger.error(
f"Your history file {histfile} couldn't be loaded and may be corrupted. Renaming it to {newname}"
)
logger.error(f"Your history file {histfile} couldn't be loaded and may be corrupted. Renaming it to {newname}")
histfile.replace(Path(newname))
atexit.register(readline.write_history_file, histfile)


@ -13,6 +13,7 @@ from pydantic.fields import Field
# This should come early so that the logger can pick up its configuration options
from .services.config import InvokeAIAppConfig
from invokeai.backend.util.logging import InvokeAILogger
config = InvokeAIAppConfig.get_config()
config.parse_args()
logger = InvokeAILogger().getLogger(config=config)
@ -20,7 +21,7 @@ from invokeai.version.invokeai_version import __version__
# we call this early so that the message appears before other invokeai initialization messages
if config.version:
print(f'InvokeAI version {__version__}')
print(f"InvokeAI version {__version__}")
sys.exit(0)
from invokeai.app.services.board_image_record_storage import (
@ -36,18 +37,22 @@ from invokeai.app.services.image_record_storage import SqliteImageRecordStorage
from invokeai.app.services.images import ImageService, ImageServiceDependencies
from invokeai.app.services.resource_name import SimpleNameService
from invokeai.app.services.urls import LocalUrlService
from .services.default_graphs import (default_text_to_image_graph_id,
create_system_graphs)
from invokeai.app.services.invocation_stats import InvocationStatsService
from .services.default_graphs import default_text_to_image_graph_id, create_system_graphs
from .services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
from .cli.commands import (BaseCommand, CliContext, ExitCli,
SortedHelpFormatter, add_graph_parsers, add_parsers)
from .cli.commands import BaseCommand, CliContext, ExitCli, SortedHelpFormatter, add_graph_parsers, add_parsers
from .cli.completer import set_autocompleter
from .invocations.baseinvocation import BaseInvocation
from .services.events import EventServiceBase
from .services.graph import (Edge, EdgeConnection, GraphExecutionState,
GraphInvocation, LibraryGraph,
are_connection_types_compatible)
from .services.graph import (
Edge,
EdgeConnection,
GraphExecutionState,
GraphInvocation,
LibraryGraph,
are_connection_types_compatible,
)
from .services.image_file_storage import DiskImageFileStorage
from .services.invocation_queue import MemoryInvocationQueue
from .services.invocation_services import InvocationServices
@ -58,6 +63,7 @@ from .services.sqlite import SqliteItemStorage
import torch
import invokeai.backend.util.hotfixes
if torch.backends.mps.is_available():
import invokeai.backend.util.mps_fixes
@ -69,6 +75,7 @@ class CliCommand(BaseModel):
class InvalidArgs(Exception):
pass
def add_invocation_args(command_parser):
# Add linking capability
command_parser.add_argument(
@ -113,7 +120,7 @@ def get_command_parser(services: InvocationServices) -> argparse.ArgumentParser:
return parser
class NodeField():
class NodeField:
alias: str
node_path: str
field: str
@ -126,15 +133,20 @@ class NodeField():
self.field_type = field_type
def fields_from_type_hints(hints: dict[str, type], node_path: str) -> dict[str,NodeField]:
return {k:NodeField(alias=k, node_path=node_path, field=k, field_type=v) for k, v in hints.items()}
def fields_from_type_hints(hints: dict[str, type], node_path: str) -> dict[str, NodeField]:
return {k: NodeField(alias=k, node_path=node_path, field=k, field_type=v) for k, v in hints.items()}
def get_node_input_field(graph: LibraryGraph, field_alias: str, node_id: str) -> NodeField:
"""Gets the node field for the specified field alias"""
exposed_input = next(e for e in graph.exposed_inputs if e.alias == field_alias)
node_type = type(graph.graph.get_node(exposed_input.node_path))
return NodeField(alias=exposed_input.alias, node_path=f'{node_id}.{exposed_input.node_path}', field=exposed_input.field, field_type=get_type_hints(node_type)[exposed_input.field])
return NodeField(
alias=exposed_input.alias,
node_path=f"{node_id}.{exposed_input.node_path}",
field=exposed_input.field,
field_type=get_type_hints(node_type)[exposed_input.field],
)
def get_node_output_field(graph: LibraryGraph, field_alias: str, node_id: str) -> NodeField:
@ -142,7 +154,12 @@ def get_node_output_field(graph: LibraryGraph, field_alias: str, node_id: str) -
exposed_output = next(e for e in graph.exposed_outputs if e.alias == field_alias)
node_type = type(graph.graph.get_node(exposed_output.node_path))
node_output_type = node_type.get_output_type()
return NodeField(alias=exposed_output.alias, node_path=f'{node_id}.{exposed_output.node_path}', field=exposed_output.field, field_type=get_type_hints(node_output_type)[exposed_output.field])
return NodeField(
alias=exposed_output.alias,
node_path=f"{node_id}.{exposed_output.node_path}",
field=exposed_output.field,
field_type=get_type_hints(node_output_type)[exposed_output.field],
)
def get_node_inputs(invocation: BaseInvocation, context: CliContext) -> dict[str, NodeField]:
@ -165,9 +182,7 @@ def get_node_outputs(invocation: BaseInvocation, context: CliContext) -> dict[st
return {e.alias: get_node_output_field(graph, e.alias, invocation.id) for e in graph.exposed_outputs}
def generate_matching_edges(
a: BaseInvocation, b: BaseInvocation, context: CliContext
) -> list[Edge]:
def generate_matching_edges(a: BaseInvocation, b: BaseInvocation, context: CliContext) -> list[Edge]:
"""Generates all possible edges between two invocations"""
afields = get_node_outputs(a, context)
bfields = get_node_inputs(b, context)
@ -179,12 +194,14 @@ def generate_matching_edges(
matching_fields = matching_fields.difference(invalid_fields)
# Validate types
matching_fields = [f for f in matching_fields if are_connection_types_compatible(afields[f].field_type, bfields[f].field_type)]
matching_fields = [
f for f in matching_fields if are_connection_types_compatible(afields[f].field_type, bfields[f].field_type)
]
edges = [
Edge(
source=EdgeConnection(node_id=afields[alias].node_path, field=afields[alias].field),
destination=EdgeConnection(node_id=bfields[alias].node_path, field=bfields[alias].field)
destination=EdgeConnection(node_id=bfields[alias].node_path, field=bfields[alias].field),
)
for alias in matching_fields
]
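Concretely, the matching rule is a set intersection over field names, minus bookkeeping fields, filtered by connection-type compatibility. A toy illustration (the exact contents of `invalid_fields` are not shown in this diff):

```python
afields = {"image", "width"}     # output fields of node a
bfields = {"image", "strength"}  # input fields of node b
invalid_fields = {"id", "type"}  # illustrative; see invalid_fields in the real code

candidates = (afields & bfields) - invalid_fields
# -> {"image"}: one Edge(a.image -> b.image), kept only if the two
#    field types pass are_connection_types_compatible()
```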
@ -193,6 +210,7 @@ def generate_matching_edges(
class SessionError(Exception):
"""Raised when a session error has occurred"""
pass
@ -209,22 +227,23 @@ def invoke_all(context: CliContext):
context.invoker.services.logger.error(
f"Error in node {n} (source node {context.session.prepared_source_mapping[n]}): {context.session.errors[n]}"
)
raise SessionError()
def invoke_cli():
logger.info(f'InvokeAI version {__version__}')
logger.info(f"InvokeAI version {__version__}")
# get the optional list of invocations to execute on the command line
parser = config.get_parser()
parser.add_argument('commands',nargs='*')
parser.add_argument("commands", nargs="*")
invocation_commands = parser.parse_args().commands
# get the optional file to read commands from.
# Simplest is to use it for STDIN
if infile := config.from_file:
sys.stdin = open(infile,"r")
model_manager = ModelManagerService(config,logger)
sys.stdin = open(infile, "r")
model_manager = ModelManagerService(config, logger)
events = EventServiceBase()
output_folder = config.output_path
@ -234,13 +253,13 @@ def invoke_cli():
db_location = ":memory:"
else:
db_location = config.db_path
db_location.parent.mkdir(parents=True,exist_ok=True)
db_location.parent.mkdir(parents=True, exist_ok=True)
logger.info(f'InvokeAI database location is "{db_location}"')
graph_execution_manager = SqliteItemStorage[GraphExecutionState](
filename=db_location, table_name="graph_executions"
)
filename=db_location, table_name="graph_executions"
)
urls = LocalUrlService()
image_record_storage = SqliteImageRecordStorage(db_location)
@ -281,24 +300,22 @@ def invoke_cli():
graph_execution_manager=graph_execution_manager,
)
)
services = InvocationServices(
model_manager=model_manager,
events=events,
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f'{output_folder}/latents')),
latents=ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents")),
images=images,
boards=boards,
board_images=board_images,
queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](
filename=db_location, table_name="graphs"
),
graph_library=SqliteItemStorage[LibraryGraph](filename=db_location, table_name="graphs"),
graph_execution_manager=graph_execution_manager,
processor=DefaultInvocationProcessor(),
performance_statistics=InvocationStatsService(graph_execution_manager),
logger=logger,
configuration=config,
)
system_graphs = create_system_graphs(services.graph_library)
system_graph_names = set([g.name for g in system_graphs])
@ -308,7 +325,7 @@ def invoke_cli():
session: GraphExecutionState = invoker.create_execution_state()
parser = get_command_parser(services)
re_negid = re.compile('^-[0-9]+$')
re_negid = re.compile("^-[0-9]+$")
# Uncomment to print out previous sessions at startup
# print(services.session_manager.list())
@ -318,7 +335,7 @@ def invoke_cli():
command_line_args_exist = len(invocation_commands) > 0
done = False
while not done:
try:
if command_line_args_exist:
@ -332,7 +349,7 @@ def invoke_cli():
try:
# Refresh the state of the session
#history = list(get_graph_execution_history(context.session))
# history = list(get_graph_execution_history(context.session))
history = list(reversed(context.nodes_added))
# Split the command for piping
@ -353,17 +370,17 @@ def invoke_cli():
args[field_name] = field_default
# Parse invocation
command: CliCommand = None # type:ignore
command: CliCommand = None # type:ignore
system_graph: Optional[LibraryGraph] = None
if args['type'] in system_graph_names:
system_graph = next(filter(lambda g: g.name == args['type'], system_graphs))
if args["type"] in system_graph_names:
system_graph = next(filter(lambda g: g.name == args["type"], system_graphs))
invocation = GraphInvocation(graph=system_graph.graph, id=str(current_id))
for exposed_input in system_graph.exposed_inputs:
if exposed_input.alias in args:
node = invocation.graph.get_node(exposed_input.node_path)
field = exposed_input.field
setattr(node, field, args[exposed_input.alias])
command = CliCommand(command = invocation)
command = CliCommand(command=invocation)
context.graph_nodes[invocation.id] = system_graph.id
else:
args["id"] = current_id
@ -385,17 +402,13 @@ def invoke_cli():
# Pipe previous command output (if there was a previous command)
edges: list[Edge] = list()
if len(history) > 0 or current_id != start_id:
from_id = (
history[0] if current_id == start_id else str(current_id - 1)
)
from_id = history[0] if current_id == start_id else str(current_id - 1)
from_node = (
next(filter(lambda n: n[0].id == from_id, new_invocations))[0]
if current_id != start_id
else context.session.graph.get_node(from_id)
)
matching_edges = generate_matching_edges(
from_node, command.command, context
)
matching_edges = generate_matching_edges(from_node, command.command, context)
edges.extend(matching_edges)
# Parse provided links
@ -406,16 +419,18 @@ def invoke_cli():
node_id = str(current_id + int(node_id))
link_node = context.session.graph.get_node(node_id)
matching_edges = generate_matching_edges(
link_node, command.command, context
)
matching_edges = generate_matching_edges(link_node, command.command, context)
matching_destinations = [e.destination for e in matching_edges]
edges = [e for e in edges if e.destination not in matching_destinations]
edges.extend(matching_edges)
if "link" in args and args["link"]:
for link in args["link"]:
edges = [e for e in edges if e.destination.node_id != command.command.id or e.destination.field != link[2]]
edges = [
e
for e in edges
if e.destination.node_id != command.command.id or e.destination.field != link[2]
]
node_id = link[0]
if re_negid.match(node_id):
@ -428,7 +443,7 @@ def invoke_cli():
edges.append(
Edge(
source=EdgeConnection(node_id=node_output.node_path, field=node_output.field),
destination=EdgeConnection(node_id=node_input.node_path, field=node_input.field)
destination=EdgeConnection(node_id=node_input.node_path, field=node_input.field),
)
)


@ -4,9 +4,5 @@ __all__ = []
dirname = os.path.dirname(os.path.abspath(__file__))
for f in os.listdir(dirname):
if (
f != "__init__.py"
and os.path.isfile("%s/%s" % (dirname, f))
and f[-3:] == ".py"
):
if f != "__init__.py" and os.path.isfile("%s/%s" % (dirname, f)) and f[-3:] == ".py":
__all__.append(f[:-3])


@ -3,16 +3,366 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from enum import Enum
from inspect import signature
from typing import (TYPE_CHECKING, Dict, List, Literal, TypedDict, get_args,
get_type_hints)
from typing import (
TYPE_CHECKING,
AbstractSet,
Any,
Callable,
ClassVar,
Mapping,
Optional,
Type,
TypeVar,
Union,
get_args,
get_type_hints,
)
from pydantic import BaseConfig, BaseModel, Field
from pydantic import BaseModel, Field
from pydantic.fields import Undefined
from pydantic.typing import NoArgAnyCallable
if TYPE_CHECKING:
from ..services.invocation_services import InvocationServices
class FieldDescriptions:
denoising_start = "When to start denoising, expressed as a percentage of total steps"
denoising_end = "When to stop denoising, expressed as a percentage of total steps"
cfg_scale = "Classifier-Free Guidance scale"
scheduler = "Scheduler to use during inference"
positive_cond = "Positive conditioning tensor"
negative_cond = "Negative conditioning tensor"
noise = "Noise tensor"
clip = "CLIP (tokenizer, text encoder, LoRAs) and skipped layer count"
unet = "UNet (scheduler, LoRAs)"
vae = "VAE"
cond = "Conditioning tensor"
controlnet_model = "ControlNet model to load"
vae_model = "VAE model to load"
lora_model = "LoRA model to load"
main_model = "Main model (UNet, VAE, CLIP) to load"
sdxl_main_model = "SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load"
sdxl_refiner_model = "SDXL Refiner Main Model (UNet, VAE, CLIP2) to load"
onnx_main_model = "ONNX Main model (UNet, VAE, CLIP) to load"
lora_weight = "The weight at which the LoRA is applied to each model"
compel_prompt = "Prompt to be parsed by Compel to create a conditioning tensor"
raw_prompt = "Raw prompt text (no parsing)"
sdxl_aesthetic = "The aesthetic score to apply to the conditioning tensor"
skipped_layers = "Number of layers to skip in text encoder"
seed = "Seed for random number generation"
steps = "Number of steps to run"
width = "Width of output (px)"
height = "Height of output (px)"
control = "ControlNet(s) to apply"
denoised_latents = "Denoised latents tensor"
latents = "Latents tensor"
strength = "Strength of denoising (proportional to steps)"
core_metadata = "Optional core metadata to be written to image"
interp_mode = "Interpolation mode"
torch_antialias = "Whether or not to apply antialiasing (bilinear or bicubic only)"
fp32 = "Whether or not to use full float32 precision"
precision = "Precision to use"
tiled = "Processing using overlapping tiles (reduce memory consumption)"
detect_res = "Pixel resolution for detection"
image_res = "Pixel resolution for output image"
safe_mode = "Whether or not to use safe mode"
scribble_mode = "Whether or not to use scribble mode"
scale_factor = "The factor by which to scale"
num_1 = "The first number"
num_2 = "The second number"
mask = "The mask to use for the operation"
class Input(str, Enum):
"""
The type of input a field accepts.
- `Input.Direct`: The field must have its value provided directly, when the invocation and field \
are instantiated.
- `Input.Connection`: The field must have its value provided by a connection.
- `Input.Any`: The field may have its value provided either directly or by a connection.
"""
Connection = "connection"
Direct = "direct"
Any = "any"
class UIType(str, Enum):
"""
Type hints for the UI.
If a field should be provided a data type that does not exactly match the python type of the field, \
use this to provide the type that should be used instead. See the node development docs for details \
on adding a new field type, which involves client-side changes.
"""
# region Primitives
Integer = "integer"
Float = "float"
Boolean = "boolean"
String = "string"
Array = "array"
Image = "ImageField"
Latents = "LatentsField"
Conditioning = "ConditioningField"
Control = "ControlField"
Color = "ColorField"
ImageCollection = "ImageCollection"
ConditioningCollection = "ConditioningCollection"
ColorCollection = "ColorCollection"
LatentsCollection = "LatentsCollection"
IntegerCollection = "IntegerCollection"
FloatCollection = "FloatCollection"
StringCollection = "StringCollection"
BooleanCollection = "BooleanCollection"
# endregion
# region Models
MainModel = "MainModelField"
SDXLMainModel = "SDXLMainModelField"
SDXLRefinerModel = "SDXLRefinerModelField"
ONNXModel = "ONNXModelField"
VaeModel = "VaeModelField"
LoRAModel = "LoRAModelField"
ControlNetModel = "ControlNetModelField"
UNet = "UNetField"
Vae = "VaeField"
CLIP = "ClipField"
# endregion
# region Iterate/Collect
Collection = "Collection"
CollectionItem = "CollectionItem"
# endregion
# region Misc
FilePath = "FilePath"
Enum = "enum"
# endregion
class UIComponent(str, Enum):
"""
The type of UI component to use for a field, used to override the default components, which are \
inferred from the field type.
"""
None_ = "none"
Textarea = "textarea"
Slider = "slider"
class _InputField(BaseModel):
"""
*DO NOT USE*
This helper class is used to tell the client about our custom field attributes via OpenAPI
schema generation, and Typescript type generation from that schema. It serves no functional
purpose in the backend.
"""
input: Input
ui_hidden: bool
ui_type: Optional[UIType]
ui_component: Optional[UIComponent]
class _OutputField(BaseModel):
"""
*DO NOT USE*
This helper class is used to tell the client about our custom field attributes via OpenAPI
schema generation, and Typescript type generation from that schema. It serves no functional
purpose in the backend.
"""
ui_hidden: bool
ui_type: Optional[UIType]
def InputField(
*args: Any,
default: Any = Undefined,
default_factory: Optional[NoArgAnyCallable] = None,
alias: Optional[str] = None,
title: Optional[str] = None,
description: Optional[str] = None,
exclude: Optional[Union[AbstractSet[Union[int, str]], Mapping[Union[int, str], Any], Any]] = None,
include: Optional[Union[AbstractSet[Union[int, str]], Mapping[Union[int, str], Any], Any]] = None,
const: Optional[bool] = None,
gt: Optional[float] = None,
ge: Optional[float] = None,
lt: Optional[float] = None,
le: Optional[float] = None,
multiple_of: Optional[float] = None,
allow_inf_nan: Optional[bool] = None,
max_digits: Optional[int] = None,
decimal_places: Optional[int] = None,
min_items: Optional[int] = None,
max_items: Optional[int] = None,
unique_items: Optional[bool] = None,
min_length: Optional[int] = None,
max_length: Optional[int] = None,
allow_mutation: bool = True,
regex: Optional[str] = None,
discriminator: Optional[str] = None,
repr: bool = True,
input: Input = Input.Any,
ui_type: Optional[UIType] = None,
ui_component: Optional[UIComponent] = None,
ui_hidden: bool = False,
**kwargs: Any,
) -> Any:
"""
Creates an input field for an invocation.
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/1.10/usage/schema/#field-customization) \
that adds a few extra parameters to support graph execution and the node editor UI.
:param Input input: [Input.Any] The kind of input this field requires. \
`Input.Direct` means a value must be provided on instantiation. \
`Input.Connection` means the value must be provided by a connection. \
`Input.Any` means either will do.
:param UIType ui_type: [None] Optionally provides an extra type hint for the UI. \
In some situations, the field's type is not enough to infer the correct UI type. \
For example, model selection fields should render a dropdown UI component to select a model. \
Internally, there is no difference between SD-1, SD-2 and SDXL model fields, they all use \
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModelField` to indicate that the field is an SDXL main model field.
:param UIComponent ui_component: [None] Optionally specifies a specific component to use in the UI. \
The UI will always render a suitable component, but sometimes you want something different than the default. \
For example, a `string` field will default to a single-line input, but you may want a multi-line textarea instead. \
For this case, you could provide `UIComponent.Textarea`.
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
"""
return Field(
*args,
default=default,
default_factory=default_factory,
alias=alias,
title=title,
description=description,
exclude=exclude,
include=include,
const=const,
gt=gt,
ge=ge,
lt=lt,
le=le,
multiple_of=multiple_of,
allow_inf_nan=allow_inf_nan,
max_digits=max_digits,
decimal_places=decimal_places,
min_items=min_items,
max_items=max_items,
unique_items=unique_items,
min_length=min_length,
max_length=max_length,
allow_mutation=allow_mutation,
regex=regex,
discriminator=discriminator,
repr=repr,
input=input,
ui_type=ui_type,
ui_component=ui_component,
ui_hidden=ui_hidden,
**kwargs,
)
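The UI-facing parameters travel as pydantic field extras and only affect schema generation. Representative declarations as they would appear inside an invocation class (field names and types are simplified for illustration):

```python
# Render a multi-line textarea instead of the default single-line input:
prompt: str = InputField(description="The prompt", ui_component=UIComponent.Textarea)

# Hint that this main-model field should use the SDXL-specific picker:
model: str = InputField(ui_type=UIType.SDXLMainModel)

# Keep a bookkeeping field out of the node editor entirely:
internal_seed: int = InputField(default=0, ui_hidden=True)
```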
def OutputField(
*args: Any,
default: Any = Undefined,
default_factory: Optional[NoArgAnyCallable] = None,
alias: Optional[str] = None,
title: Optional[str] = None,
description: Optional[str] = None,
exclude: Optional[Union[AbstractSet[Union[int, str]], Mapping[Union[int, str], Any], Any]] = None,
include: Optional[Union[AbstractSet[Union[int, str]], Mapping[Union[int, str], Any], Any]] = None,
const: Optional[bool] = None,
gt: Optional[float] = None,
ge: Optional[float] = None,
lt: Optional[float] = None,
le: Optional[float] = None,
multiple_of: Optional[float] = None,
allow_inf_nan: Optional[bool] = None,
max_digits: Optional[int] = None,
decimal_places: Optional[int] = None,
min_items: Optional[int] = None,
max_items: Optional[int] = None,
unique_items: Optional[bool] = None,
min_length: Optional[int] = None,
max_length: Optional[int] = None,
allow_mutation: bool = True,
regex: Optional[str] = None,
discriminator: Optional[str] = None,
repr: bool = True,
ui_type: Optional[UIType] = None,
ui_hidden: bool = False,
**kwargs: Any,
) -> Any:
"""
Creates an output field for an invocation output.
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/1.10/usage/schema/#field-customization) \
that adds a few extra parameters to support graph execution and the node editor UI.
:param UIType ui_type: [None] Optionally provides an extra type hint for the UI. \
In some situations, the field's type is not enough to infer the correct UI type. \
For example, model selection fields should render a dropdown UI component to select a model. \
Internally, there is no difference between SD-1, SD-2 and SDXL model fields, they all use \
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModelField` to indicate that the field is an SDXL main model field.
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
"""
return Field(
*args,
default=default,
default_factory=default_factory,
alias=alias,
title=title,
description=description,
exclude=exclude,
include=include,
const=const,
gt=gt,
ge=ge,
lt=lt,
le=le,
multiple_of=multiple_of,
allow_inf_nan=allow_inf_nan,
max_digits=max_digits,
decimal_places=decimal_places,
min_items=min_items,
max_items=max_items,
unique_items=unique_items,
min_length=min_length,
max_length=max_length,
allow_mutation=allow_mutation,
regex=regex,
discriminator=discriminator,
repr=repr,
ui_type=ui_type,
ui_hidden=ui_hidden,
**kwargs,
)
class UIConfigBase(BaseModel):
"""
Provides additional node configuration to the UI.
This is used internally by the @tags and @title decorator logic. You probably want to use those
decorators, though you may add this class to a node definition to specify the title and tags.
"""
tags: Optional[list[str]] = Field(default_factory=None, description="The tags to display in the UI")
title: Optional[str] = Field(default=None, description="The display name of the node")
class InvocationContext:
services: InvocationServices
graph_execution_state_id: str
@ -40,6 +390,20 @@ class BaseInvocationOutput(BaseModel):
return tuple(subclasses)
class RequiredConnectionException(Exception):
"""Raised when an field which requires a connection did not receive a value."""
def __init__(self, node_id: str, field_name: str):
super().__init__(f"Node {node_id} missing connections for field {field_name}")
class MissingInputException(Exception):
"""Raised when an field which requires some input, but did not receive a value."""
def __init__(self, node_id: str, field_name: str):
super().__init__(f"Node {node_id} missing value or connection for field {field_name}")
class BaseInvocation(ABC, BaseModel):
"""A node to process inputs and produce outputs.
May use dependency injection in __init__ to receive providers.
@ -77,70 +441,81 @@ class BaseInvocation(ABC, BaseModel):
def get_output_type(cls):
return signature(cls.invoke).return_annotation
class Config:
@staticmethod
def schema_extra(schema: dict[str, Any], model_class: Type[BaseModel]) -> None:
uiconfig = getattr(model_class, "UIConfig", None)
if uiconfig and hasattr(uiconfig, "title"):
schema["title"] = uiconfig.title
if uiconfig and hasattr(uiconfig, "tags"):
schema["tags"] = uiconfig.tags
@abstractmethod
def invoke(self, context: InvocationContext) -> BaseInvocationOutput:
"""Invoke with provided context and return outputs."""
pass
# fmt: off
id: str = Field(description="The id of this node. Must be unique among all nodes.")
is_intermediate: bool = Field(default=False, description="Whether or not this node is an intermediate node.")
# fmt: on
def __init__(self, **data):
# nodes may have required fields that can accept input from connections
# on instantiation of the model, we need to exclude these from validation
restore = dict()
try:
field_names = list(self.__fields__.keys())
for field_name in field_names:
# if the field is required and may get its value from a connection, exclude it from validation
field = self.__fields__[field_name]
_input = field.field_info.extra.get("input", None)
if _input in [Input.Connection, Input.Any] and field.required:
if field_name not in data:
restore[field_name] = self.__fields__.pop(field_name)
# instantiate the node, which will validate the data
super().__init__(**data)
finally:
# restore the removed fields
for field_name, field in restore.items():
self.__fields__[field_name] = field
def invoke_internal(self, context: InvocationContext) -> BaseInvocationOutput:
for field_name, field in self.__fields__.items():
_input = field.field_info.extra.get("input", None)
if field.required and not hasattr(self, field_name):
if _input == Input.Connection:
raise RequiredConnectionException(self.__fields__["type"].default, field_name)
elif _input == Input.Any:
raise MissingInputException(self.__fields__["type"].default, field_name)
return self.invoke(context)
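The net effect of the `__init__` / `invoke_internal` pair: a required field marked `Input.Connection` (or `Input.Any`) no longer fails pydantic validation at construction time; the check is deferred until the node runs. Reusing the hypothetical node sketched earlier:

```python
node = ExampleResizeInvocation(id="1", scale=2.0)  # ok: `image` may be filled by an edge later

context = None  # a real InvocationContext is normally supplied by the invoker
try:
    node.invoke_internal(context)  # `image` is still unset at execution time
except RequiredConnectionException as err:
    print(err)  # Node example_resize missing connections for field image
```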
id: str = InputField(description="The id of this node. Must be unique among all nodes.")
is_intermediate: bool = InputField(
default=False, description="Whether or not this node is an intermediate node.", input=Input.Direct
)
UIConfig: ClassVar[Type[UIConfigBase]]
# TODO: figure out a better way to provide these hints
# TODO: when we can upgrade to python 3.11, we can use the`NotRequired` type instead of `total=False`
class UIConfig(TypedDict, total=False):
type_hints: Dict[
str,
Literal[
"integer",
"float",
"boolean",
"string",
"enum",
"image",
"latents",
"model",
"control",
"image_collection",
"vae_model",
"lora_model",
],
]
tags: List[str]
title: str
T = TypeVar("T", bound=BaseInvocation)
class CustomisedSchemaExtra(TypedDict):
ui: UIConfig
def title(title: str) -> Callable[[Type[T]], Type[T]]:
"""Adds a title to the invocation. Use this to override the default title generation, which is based on the class name."""
def wrapper(cls: Type[T]) -> Type[T]:
uiconf_name = cls.__qualname__ + ".UIConfig"
if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconf_name:
cls.UIConfig = type(uiconf_name, (UIConfigBase,), dict())
cls.UIConfig.title = title
return cls
return wrapper
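One detail worth noting: the decorator synthesizes a fresh `UIConfig` subclass per decorated class (keyed on `__qualname__`), so a title set on one invocation never leaks onto a sibling that would otherwise inherit the same `UIConfig`. A stripped-down sketch of the mechanism, with a stand-in `UIConfigBase`:

```python
from typing import Callable, Type, TypeVar

class UIConfigBase:  # stand-in for the real base class
    title: str = ""

T = TypeVar("T")

def title(text: str) -> Callable[[Type[T]], Type[T]]:
    def wrapper(cls: Type[T]) -> Type[T]:
        name = cls.__qualname__ + ".UIConfig"
        # create a new subclass only if this class does not already own one
        if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != name:
            cls.UIConfig = type(name, (UIConfigBase,), {})
        cls.UIConfig.title = text
        return cls
    return wrapper

@title("Integer Range")
class RangeNode: ...

@title("Sized Range")
class RangeOfSizeNode: ...

assert RangeNode.UIConfig.title == "Integer Range"      # unaffected by the
assert RangeOfSizeNode.UIConfig.title == "Sized Range"  # second decoration
```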
def tags(*tags: str) -> Callable[[Type[T]], Type[T]]:
"""Adds tags to the invocation. Use this to make the invocation easier to find in the UI."""
def wrapper(cls: Type[T]) -> Type[T]:
uiconf_name = cls.__qualname__ + ".UIConfig"
if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconf_name:
cls.UIConfig = type(uiconf_name, (UIConfigBase,), dict())
cls.UIConfig.tags = list(tags)
return cls
return wrapper
class InvocationConfig(BaseConfig):
"""Customizes pydantic's BaseModel.Config class for use by Invocations.
Provide `schema_extra` a `ui` dict to add hints for generated UIs.
`tags`
- A list of strings, used to categorise invocations.
`type_hints`
- A dict of field types which override the types in the invocation definition.
- Each key should be the name of one of the invocation's fields.
- Each value should be one of the valid types:
- `integer`, `float`, `boolean`, `string`, `enum`, `image`, `latents`, `model`
```python
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["stable-diffusion", "image"],
"type_hints": {
"initial_image": "image",
},
},
}
```
"""
schema_extra: CustomisedSchemaExtra


@@ -3,64 +3,25 @@
from typing import Literal
import numpy as np
from pydantic import Field, validator
from pydantic import validator
from invokeai.app.models.image import ImageField
from invokeai.app.invocations.primitives import ImageCollectionOutput, ImageField, IntegerCollectionOutput
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext, UIConfig)
class IntCollectionOutput(BaseInvocationOutput):
"""A collection of integers"""
type: Literal["int_collection"] = "int_collection"
# Outputs
collection: list[int] = Field(default=[], description="The int collection")
class FloatCollectionOutput(BaseInvocationOutput):
"""A collection of floats"""
type: Literal["float_collection"] = "float_collection"
# Outputs
collection: list[float] = Field(
default=[], description="The float collection")
class ImageCollectionOutput(BaseInvocationOutput):
"""A collection of images"""
type: Literal["image_collection"] = "image_collection"
# Outputs
collection: list[ImageField] = Field(
default=[], description="The output images")
class Config:
schema_extra = {"required": ["type", "collection"]}
from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIType, tags, title
@title("Integer Range")
@tags("collection", "integer", "range")
class RangeInvocation(BaseInvocation):
"""Creates a range of numbers from start to stop with step"""
type: Literal["range"] = "range"
# Inputs
start: int = Field(default=0, description="The start of the range")
stop: int = Field(default=10, description="The stop of the range")
step: int = Field(default=1, description="The step of the range")
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Range",
"tags": ["range", "integer", "collection"]
},
}
start: int = InputField(default=0, description="The start of the range")
stop: int = InputField(default=10, description="The stop of the range")
step: int = InputField(default=1, description="The step of the range")
@validator("stop")
def stop_gt_start(cls, v, values):
@@ -68,94 +29,44 @@ class RangeInvocation(BaseInvocation):
raise ValueError("stop must be greater than start")
return v
def invoke(self, context: InvocationContext) -> IntCollectionOutput:
return IntCollectionOutput(
collection=list(range(self.start, self.stop, self.step))
)
def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
return IntegerCollectionOutput(collection=list(range(self.start, self.stop, self.step)))
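Since `invoke` wraps plain `range`, `stop` is exclusive and the `stop_gt_start` validator above rejects `stop <= start`. For example:

```python
assert list(range(0, 10, 3)) == [0, 3, 6, 9]  # stop=10 itself is never emitted
```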
@title("Integer Range of Size")
@tags("range", "integer", "size", "collection")
class RangeOfSizeInvocation(BaseInvocation):
"""Creates a range from start to start + size with step"""
type: Literal["range_of_size"] = "range_of_size"
# Inputs
start: int = Field(default=0, description="The start of the range")
size: int = Field(default=1, description="The number of values")
step: int = Field(default=1, description="The step of the range")
start: int = InputField(default=0, description="The start of the range")
size: int = InputField(default=1, description="The number of values")
step: int = InputField(default=1, description="The step of the range")
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Sized Range",
"tags": ["range", "integer", "size", "collection"]
},
}
def invoke(self, context: InvocationContext) -> IntCollectionOutput:
return IntCollectionOutput(
collection=list(
range(
self.start, self.start + self.size,
self.step)))
def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
return IntegerCollectionOutput(collection=list(range(self.start, self.start + self.size, self.step)))
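A quirk of the sized variant as written: `size` widens the interval rather than counting elements, so with `step > 1` the collection holds `ceil(size / step)` values, not `size` values:

```python
start, size, step = 0, 5, 2
values = list(range(start, start + size, step))
assert values == [0, 2, 4]  # 3 values (ceil(5 / 2)), not 5
```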
@title("Random Range")
@tags("range", "integer", "random", "collection")
class RandomRangeInvocation(BaseInvocation):
"""Creates a collection of random numbers"""
type: Literal["random_range"] = "random_range"
# Inputs
low: int = Field(default=0, description="The inclusive low value")
high: int = Field(
default=np.iinfo(np.int32).max, description="The exclusive high value"
)
size: int = Field(default=1, description="The number of values to generate")
seed: int = Field(
low: int = InputField(default=0, description="The inclusive low value")
high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value")
size: int = InputField(default=1, description="The number of values to generate")
seed: int = InputField(
ge=0,
le=SEED_MAX,
description="The seed for the RNG (omit for random)",
default_factory=get_random_seed,
)
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Random Range",
"tags": ["range", "integer", "random", "collection"]
},
}
def invoke(self, context: InvocationContext) -> IntCollectionOutput:
def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
rng = np.random.default_rng(self.seed)
return IntCollectionOutput(
collection=list(
rng.integers(
low=self.low, high=self.high,
size=self.size)))
return IntegerCollectionOutput(collection=list(rng.integers(low=self.low, high=self.high, size=self.size)))
class ImageCollectionInvocation(BaseInvocation):
"""Load a collection of images and provide it as output."""
# fmt: off
type: Literal["image_collection"] = "image_collection"
# Inputs
images: list[ImageField] = Field(
default=[], description="The image collection to load"
)
# fmt: on
def invoke(self, context: InvocationContext) -> ImageCollectionOutput:
return ImageCollectionOutput(collection=self.images)
class Config(InvocationConfig):
schema_extra = {
"ui": {
"type_hints": {
"title": "Image Collection",
"images": "image_collection",
}
},
}
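A note on `RandomRangeInvocation` above: because `invoke` seeds `np.random.default_rng` with the stored `seed` field (randomized via `default_factory`, but persisted with the node), re-running a graph with the same seed reproduces the same collection:

```python
import numpy as np

seed, low, high, size = 1234, 0, np.iinfo(np.int32).max, 4
a = list(np.random.default_rng(seed).integers(low=low, high=high, size=size))
b = list(np.random.default_rng(seed).integers(low=low, high=high, size=size))
assert a == b  # identical seed, identical collection
```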


@@ -1,130 +1,116 @@
from typing import Literal, Optional, Union, List, Annotated
from pydantic import BaseModel, Field
import re
from dataclasses import dataclass
from typing import List, Literal, Union
import torch
from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import (Blend, Conjunction,
CrossAttentionControlSubstitute,
FlattenedPrompt, Fragment)
from ...backend.util.devices import torch_dtype
from ...backend.model_management import ModelType
from ...backend.model_management.models import ModelNotFoundException
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput
from invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion import (
BasicConditioningInfo,
SDXLConditioningInfo,
)
from ...backend.model_management import ModelPatcher, ModelType
from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.models import ModelNotFoundException
from ...backend.stable_diffusion.diffusion import InvokeAIDiffuserComponent
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext)
from ...backend.util.devices import torch_dtype
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
Input,
InputField,
InvocationContext,
OutputField,
UIComponent,
tags,
title,
)
from .model import ClipField
from dataclasses import dataclass
class ConditioningField(BaseModel):
conditioning_name: Optional[str] = Field(
default=None, description="The name of conditioning data")
class Config:
schema_extra = {"required": ["conditioning_name"]}
@dataclass
class BasicConditioningInfo:
#type: Literal["basic_conditioning"] = "basic_conditioning"
embeds: torch.Tensor
extra_conditioning: Optional[InvokeAIDiffuserComponent.ExtraConditioningInfo]
# weight: float
# mode: ConditioningAlgo
@dataclass
class SDXLConditioningInfo(BasicConditioningInfo):
#type: Literal["sdxl_conditioning"] = "sdxl_conditioning"
pooled_embeds: torch.Tensor
add_time_ids: torch.Tensor
ConditioningInfoType = Annotated[
Union[BasicConditioningInfo, SDXLConditioningInfo],
Field(discriminator="type")
]
@dataclass
class ConditioningFieldData:
conditionings: List[Union[BasicConditioningInfo, SDXLConditioningInfo]]
#unconditioned: Optional[torch.Tensor]
conditionings: List[BasicConditioningInfo]
# unconditioned: Optional[torch.Tensor]
#class ConditioningAlgo(str, Enum):
# class ConditioningAlgo(str, Enum):
# Compose = "compose"
# ComposeEx = "compose_ex"
# PerpNeg = "perp_neg"
class CompelOutput(BaseInvocationOutput):
"""Compel parser output"""
#fmt: off
type: Literal["compel_output"] = "compel_output"
conditioning: ConditioningField = Field(default=None, description="Conditioning")
#fmt: on
@title("Compel Prompt")
@tags("prompt", "compel")
class CompelInvocation(BaseInvocation):
"""Parse prompt using compel package to conditioning."""
type: Literal["compel"] = "compel"
prompt: str = Field(default="", description="Prompt")
clip: ClipField = Field(None, description="Clip to use")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Prompt (Compel)",
"tags": ["prompt", "compel"],
"type_hints": {
"model": "model"
}
},
}
prompt: str = InputField(
default="",
description=FieldDescriptions.compel_prompt,
ui_component=UIComponent.Textarea,
)
clip: ClipField = InputField(
title="CLIP",
description=FieldDescriptions.clip,
input=Input.Connection,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> CompelOutput:
def invoke(self, context: InvocationContext) -> ConditioningOutput:
tokenizer_info = context.services.model_manager.get_model(
**self.clip.tokenizer.dict(), context=context,
**self.clip.tokenizer.dict(),
context=context,
)
text_encoder_info = context.services.model_manager.get_model(
**self.clip.text_encoder.dict(), context=context,
**self.clip.text_encoder.dict(),
context=context,
)
def _lora_loader():
for lora in self.clip.loras:
lora_info = context.services.model_manager.get_model(
**lora.dict(exclude={"weight"}))
lora_info = context.services.model_manager.get_model(**lora.dict(exclude={"weight"}), context=context)
yield (lora_info.context.model, lora.weight)
del lora_info
return
#loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
# loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
name = trigger[1:-1]
try:
ti_list.append(
context.services.model_manager.get_model(
model_name=name,
base_model=self.clip.text_encoder.base_model,
model_type=ModelType.TextualInversion,
context=context,
).context.model
(
name,
context.services.model_manager.get_model(
model_name=name,
base_model=self.clip.text_encoder.base_model,
model_type=ModelType.TextualInversion,
context=context,
).context.model,
)
)
except ModelNotFoundException:
# print(e)
#import traceback
#print(traceback.format_exc())
print(f"Warn: trigger: \"{trigger}\" not found")
with ModelPatcher.apply_lora_text_encoder(text_encoder_info.context.model, _lora_loader()),\
ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (tokenizer, ti_manager),\
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, self.clip.skipped_layers),\
text_encoder_info as text_encoder:
# import traceback
# print(traceback.format_exc())
print(f'Warn: trigger: "{trigger}" not found')
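Textual-inversion triggers are pulled from the raw prompt with a regex before tokenization: any `<name>` made of letters, digits, and `., _-` becomes a model lookup, and unknown names only warn rather than fail. The extraction itself is easy to check in isolation:

```python
import re

prompt = "a photo of <cat-toy> next to <style_v2>, detailed"
triggers = re.findall(r"<[a-zA-Z0-9., _-]+>", prompt)
assert triggers == ["<cat-toy>", "<style_v2>"]
names = [t[1:-1] for t in triggers]  # strip angle brackets -> embedding names
assert names == ["cat-toy", "style_v2"]
```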
with ModelPatcher.apply_lora_text_encoder(
text_encoder_info.context.model, _lora_loader()
), ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (
tokenizer,
ti_manager,
), ModelPatcher.apply_clip_skip(
text_encoder_info.context.model, self.clip.skipped_layers
), text_encoder_info as text_encoder:
compel = Compel(
tokenizer=tokenizer,
text_encoder=text_encoder,
@@ -139,14 +125,12 @@ class CompelInvocation(BaseInvocation):
if context.services.configuration.log_tokenization:
log_tokenization_for_prompt_object(prompt, tokenizer)
c, options = compel.build_conditioning_tensor_for_prompt_object(
prompt)
c, options = compel.build_conditioning_tensor_for_prompt_object(prompt)
ec = InvokeAIDiffuserComponent.ExtraConditioningInfo(
tokens_count_including_eos_bos=get_max_token_count(
tokenizer, conjunction),
cross_attention_control_args=options.get(
"cross_attention_control", None),)
tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction),
cross_attention_control_args=options.get("cross_attention_control", None),
)
c = c.detach().to("cpu")
@@ -162,129 +146,93 @@ class CompelInvocation(BaseInvocation):
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
context.services.latents.save(conditioning_name, conditioning_data)
return CompelOutput(
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
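Note the hand-off convention here: the conditioning tensors go into the latents store under a name derived from the execution id and node id, and only that name travels through the graph. A toy sketch of the convention (`LatentsStore` is a stand-in, not the real `context.services.latents` API):

```python
class LatentsStore:
    """Stand-in for context.services.latents: a name -> object map."""
    def __init__(self):
        self._data = {}
    def save(self, name, obj):
        self._data[name] = obj
    def get(self, name):
        return self._data[name]

store = LatentsStore()
graph_execution_state_id, node_id = "exec-123", "compel-1"
name = f"{graph_execution_state_id}_{node_id}_conditioning"
store.save(name, {"embeds": "...tensor..."})  # the node output carries only `name`
assert store.get(name)["embeds"] == "...tensor..."
```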
class SDXLPromptInvocationBase:
def run_clip_raw(self, context, clip_field, prompt, get_pooled):
def run_clip_compel(
self,
context: InvocationContext,
clip_field: ClipField,
prompt: str,
get_pooled: bool,
lora_prefix: str,
zero_on_empty: bool,
):
tokenizer_info = context.services.model_manager.get_model(
**clip_field.tokenizer.dict(),
context=context,
)
text_encoder_info = context.services.model_manager.get_model(
**clip_field.text_encoder.dict(),
context=context,
)
def _lora_loader():
for lora in clip_field.loras:
lora_info = context.services.model_manager.get_model(
**lora.dict(exclude={"weight"}))
yield (lora_info.context.model, lora.weight)
del lora_info
return
#loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", prompt):
name = trigger[1:-1]
try:
ti_list.append(
context.services.model_manager.get_model(
model_name=name,
base_model=clip_field.text_encoder.base_model,
model_type=ModelType.TextualInversion,
).context.model
)
except ModelNotFoundException:
# print(e)
#import traceback
#print(traceback.format_exc())
print(f"Warn: trigger: \"{trigger}\" not found")
with ModelPatcher.apply_lora_text_encoder(text_encoder_info.context.model, _lora_loader()),\
ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (tokenizer, ti_manager),\
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, clip_field.skipped_layers),\
text_encoder_info as text_encoder:
text_inputs = tokenizer(
prompt,
padding="max_length",
max_length=tokenizer.model_max_length,
truncation=True,
return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
prompt_embeds = text_encoder(
text_input_ids.to(text_encoder.device),
output_hidden_states=True,
# return zero on empty
if prompt == "" and zero_on_empty:
cpu_text_encoder = text_encoder_info.context.model
c = torch.zeros(
(1, cpu_text_encoder.config.max_position_embeddings, cpu_text_encoder.config.hidden_size),
dtype=text_encoder_info.context.cache.precision,
)
if get_pooled:
c_pooled = prompt_embeds[0]
c_pooled = torch.zeros(
(1, cpu_text_encoder.config.hidden_size),
dtype=c.dtype,
)
else:
c_pooled = None
c = prompt_embeds.hidden_states[-2]
del tokenizer
del text_encoder
del tokenizer_info
del text_encoder_info
c = c.detach().to("cpu")
if c_pooled is not None:
c_pooled = c_pooled.detach().to("cpu")
return c, c_pooled, None
def run_clip_compel(self, context, clip_field, prompt, get_pooled):
tokenizer_info = context.services.model_manager.get_model(
**clip_field.tokenizer.dict(),
)
text_encoder_info = context.services.model_manager.get_model(
**clip_field.text_encoder.dict(),
)
return c, c_pooled, None
def _lora_loader():
for lora in clip_field.loras:
lora_info = context.services.model_manager.get_model(
**lora.dict(exclude={"weight"}))
lora_info = context.services.model_manager.get_model(**lora.dict(exclude={"weight"}), context=context)
yield (lora_info.context.model, lora.weight)
del lora_info
return
#loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
# loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", prompt):
name = trigger[1:-1]
try:
ti_list.append(
context.services.model_manager.get_model(
model_name=name,
base_model=clip_field.text_encoder.base_model,
model_type=ModelType.TextualInversion,
).context.model
(
name,
context.services.model_manager.get_model(
model_name=name,
base_model=clip_field.text_encoder.base_model,
model_type=ModelType.TextualInversion,
context=context,
).context.model,
)
)
except ModelNotFoundException:
# print(e)
#import traceback
#print(traceback.format_exc())
print(f"Warn: trigger: \"{trigger}\" not found")
with ModelPatcher.apply_lora_text_encoder(text_encoder_info.context.model, _lora_loader()),\
ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (tokenizer, ti_manager),\
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, clip_field.skipped_layers),\
text_encoder_info as text_encoder:
# import traceback
# print(traceback.format_exc())
print(f'Warn: trigger: "{trigger}" not found')
with ModelPatcher.apply_lora(
text_encoder_info.context.model, _lora_loader(), lora_prefix
), ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (
tokenizer,
ti_manager,
), ModelPatcher.apply_clip_skip(
text_encoder_info.context.model, clip_field.skipped_layers
), text_encoder_info as text_encoder:
compel = Compel(
tokenizer=tokenizer,
text_encoder=text_encoder,
textual_inversion_manager=ti_manager,
dtype_for_device_getter=torch_dtype,
truncate_long_prompts=True, # TODO:
returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, # TODO: clip skip
returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, # TODO: clip skip
requires_pooled=True,
)
@@ -318,49 +266,44 @@ class SDXLPromptInvocationBase:
return c, c_pooled, ec
@title("SDXL Compel Prompt")
@tags("sdxl", "compel", "prompt")
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
type: Literal["sdxl_compel_prompt"] = "sdxl_compel_prompt"
prompt: str = Field(default="", description="Prompt")
style: str = Field(default="", description="Style prompt")
original_width: int = Field(1024, description="")
original_height: int = Field(1024, description="")
crop_top: int = Field(0, description="")
crop_left: int = Field(0, description="")
target_width: int = Field(1024, description="")
target_height: int = Field(1024, description="")
clip: ClipField = Field(None, description="Clip to use")
clip2: ClipField = Field(None, description="Clip2 to use")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "SDXL Prompt (Compel)",
"tags": ["prompt", "compel"],
"type_hints": {
"model": "model"
}
},
}
prompt: str = InputField(default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea)
style: str = InputField(default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea)
original_width: int = InputField(default=1024, description="")
original_height: int = InputField(default=1024, description="")
crop_top: int = InputField(default=0, description="")
crop_left: int = InputField(default=0, description="")
target_width: int = InputField(default=1024, description="")
target_height: int = InputField(default=1024, description="")
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
clip2: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> CompelOutput:
c1, c1_pooled, ec1 = self.run_clip_compel(context, self.clip, self.prompt, False)
def invoke(self, context: InvocationContext) -> ConditioningOutput:
c1, c1_pooled, ec1 = self.run_clip_compel(
context, self.clip, self.prompt, False, "lora_te1_", zero_on_empty=True
)
if self.style.strip() == "":
c2, c2_pooled, ec2 = self.run_clip_compel(context, self.clip2, self.prompt, True)
c2, c2_pooled, ec2 = self.run_clip_compel(
context, self.clip2, self.prompt, True, "lora_te2_", zero_on_empty=True
)
else:
c2, c2_pooled, ec2 = self.run_clip_compel(context, self.clip2, self.style, True)
c2, c2_pooled, ec2 = self.run_clip_compel(
context, self.clip2, self.style, True, "lora_te2_", zero_on_empty=True
)
original_size = (self.original_height, self.original_width)
crop_coords = (self.crop_top, self.crop_left)
target_size = (self.target_height, self.target_width)
add_time_ids = torch.tensor([
original_size + crop_coords + target_size
])
add_time_ids = torch.tensor([original_size + crop_coords + target_size])
conditioning_data = ConditioningFieldData(
conditionings=[
@@ -376,47 +319,39 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
context.services.latents.save(conditioning_name, conditioning_data)
return CompelOutput(
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
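The SDXL micro-conditioning tensor built in this method is just the three tuples concatenated in order (height before width, as above): one row of six values.

```python
import torch

original_size = (1024, 1024)  # (height, width)
crop_coords = (0, 0)          # (top, left)
target_size = (1024, 1024)
add_time_ids = torch.tensor([original_size + crop_coords + target_size])
assert add_time_ids.shape == (1, 6)
assert add_time_ids.tolist() == [[1024, 1024, 0, 0, 1024, 1024]]
```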
@title("SDXL Refiner Compel Prompt")
@tags("sdxl", "compel", "prompt")
class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
type: Literal["sdxl_refiner_compel_prompt"] = "sdxl_refiner_compel_prompt"
style: str = Field(default="", description="Style prompt") # TODO: ?
original_width: int = Field(1024, description="")
original_height: int = Field(1024, description="")
crop_top: int = Field(0, description="")
crop_left: int = Field(0, description="")
aesthetic_score: float = Field(6.0, description="")
clip2: ClipField = Field(None, description="Clip to use")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "SDXL Refiner Prompt (Compel)",
"tags": ["prompt", "compel"],
"type_hints": {
"model": "model"
}
},
}
style: str = InputField(
default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea
) # TODO: ?
original_width: int = InputField(default=1024, description="")
original_height: int = InputField(default=1024, description="")
crop_top: int = InputField(default=0, description="")
crop_left: int = InputField(default=0, description="")
aesthetic_score: float = InputField(default=6.0, description=FieldDescriptions.sdxl_aesthetic)
clip2: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> CompelOutput:
c2, c2_pooled, ec2 = self.run_clip_compel(context, self.clip2, self.style, True)
def invoke(self, context: InvocationContext) -> ConditioningOutput:
# TODO: if there will appear lora for refiner - write proper prefix
c2, c2_pooled, ec2 = self.run_clip_compel(context, self.clip2, self.style, True, "<NONE>", zero_on_empty=False)
original_size = (self.original_height, self.original_width)
crop_coords = (self.crop_top, self.crop_left)
add_time_ids = torch.tensor([
original_size + crop_coords + (self.aesthetic_score,)
])
add_time_ids = torch.tensor([original_size + crop_coords + (self.aesthetic_score,)])
conditioning_data = ConditioningFieldData(
conditionings=[
@@ -424,7 +359,7 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
embeds=c2,
pooled_embeds=c2_pooled,
add_time_ids=add_time_ids,
extra_conditioning=ec2, # or None
extra_conditioning=ec2, # or None
)
]
)
@@ -432,127 +367,7 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
context.services.latents.save(conditioning_name, conditioning_data)
return CompelOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
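The refiner variant swaps the target size for the aesthetic score, so its row carries five values instead of six:

```python
import torch

add_time_ids = torch.tensor([(1024, 1024) + (0, 0) + (6.0,)])
assert add_time_ids.shape == (1, 5)  # height, width, top, left, aesthetic_score
```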
class SDXLRawPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Pass unmodified prompt to conditioning without compel processing."""
type: Literal["sdxl_raw_prompt"] = "sdxl_raw_prompt"
prompt: str = Field(default="", description="Prompt")
style: str = Field(default="", description="Style prompt")
original_width: int = Field(1024, description="")
original_height: int = Field(1024, description="")
crop_top: int = Field(0, description="")
crop_left: int = Field(0, description="")
target_width: int = Field(1024, description="")
target_height: int = Field(1024, description="")
clip: ClipField = Field(None, description="Clip to use")
clip2: ClipField = Field(None, description="Clip2 to use")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "SDXL Prompt (Raw)",
"tags": ["prompt", "compel"],
"type_hints": {
"model": "model"
}
},
}
@torch.no_grad()
def invoke(self, context: InvocationContext) -> CompelOutput:
c1, c1_pooled, ec1 = self.run_clip_raw(context, self.clip, self.prompt, False)
if self.style.strip() == "":
c2, c2_pooled, ec2 = self.run_clip_raw(context, self.clip2, self.prompt, True)
else:
c2, c2_pooled, ec2 = self.run_clip_raw(context, self.clip2, self.style, True)
original_size = (self.original_height, self.original_width)
crop_coords = (self.crop_top, self.crop_left)
target_size = (self.target_height, self.target_width)
add_time_ids = torch.tensor([
original_size + crop_coords + target_size
])
conditioning_data = ConditioningFieldData(
conditionings=[
SDXLConditioningInfo(
embeds=torch.cat([c1, c2], dim=-1),
pooled_embeds=c2_pooled,
add_time_ids=add_time_ids,
extra_conditioning=ec1,
)
]
)
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
context.services.latents.save(conditioning_name, conditioning_data)
return CompelOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
class SDXLRefinerRawPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
type: Literal["sdxl_refiner_raw_prompt"] = "sdxl_refiner_raw_prompt"
style: str = Field(default="", description="Style prompt") # TODO: ?
original_width: int = Field(1024, description="")
original_height: int = Field(1024, description="")
crop_top: int = Field(0, description="")
crop_left: int = Field(0, description="")
aesthetic_score: float = Field(6.0, description="")
clip2: ClipField = Field(None, description="Clip to use")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "SDXL Refiner Prompt (Raw)",
"tags": ["prompt", "compel"],
"type_hints": {
"model": "model"
}
},
}
@torch.no_grad()
def invoke(self, context: InvocationContext) -> CompelOutput:
c2, c2_pooled, ec2 = self.run_clip_raw(context, self.clip2, self.style, True)
original_size = (self.original_height, self.original_width)
crop_coords = (self.crop_top, self.crop_left)
add_time_ids = torch.tensor([
original_size + crop_coords + (self.aesthetic_score,)
])
conditioning_data = ConditioningFieldData(
conditionings=[
SDXLConditioningInfo(
embeds=c2,
pooled_embeds=c2_pooled,
add_time_ids=add_time_ids,
extra_conditioning=ec2, # or None
)
]
)
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
context.services.latents.save(conditioning_name, conditioning_data)
return CompelOutput(
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
@@ -561,23 +376,20 @@ class SDXLRefinerRawPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
class ClipSkipInvocationOutput(BaseInvocationOutput):
"""Clip skip node output"""
type: Literal["clip_skip_output"] = "clip_skip_output"
clip: ClipField = Field(None, description="Clip with skipped layers")
type: Literal["clip_skip_output"] = "clip_skip_output"
clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@title("CLIP Skip")
@tags("clipskip", "clip", "skip")
class ClipSkipInvocation(BaseInvocation):
"""Skip layers in clip text_encoder model."""
type: Literal["clip_skip"] = "clip_skip"
clip: ClipField = Field(None, description="Clip to use")
skipped_layers: int = Field(0, description="Number of layers to skip in text_encoder")
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "CLIP Skip",
"tags": ["clip", "skip"]
},
}
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP")
skipped_layers: int = InputField(default=0, description=FieldDescriptions.skipped_layers)
def invoke(self, context: InvocationContext) -> ClipSkipInvocationOutput:
self.clip.skipped_layers += self.skipped_layers
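Because `invoke` adds to `skipped_layers` on the `ClipField` it passes through, chained CLIP Skip nodes accumulate. In miniature:

```python
from dataclasses import dataclass

@dataclass
class ClipField:  # reduced stand-in for the real field
    skipped_layers: int = 0

def clip_skip(clip: ClipField, skipped_layers: int) -> ClipField:
    clip.skipped_layers += skipped_layers
    return clip

clip = clip_skip(clip_skip(ClipField(), 1), 2)
assert clip.skipped_layers == 3  # skips accumulate through the chain
```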
@@ -587,46 +399,26 @@ class ClipSkipInvocation(BaseInvocation):
def get_max_token_count(
tokenizer, prompt: Union[FlattenedPrompt, Blend, Conjunction],
truncate_if_too_long=False) -> int:
tokenizer, prompt: Union[FlattenedPrompt, Blend, Conjunction], truncate_if_too_long=False
) -> int:
if type(prompt) is Blend:
blend: Blend = prompt
return max(
[
get_max_token_count(tokenizer, p, truncate_if_too_long)
for p in blend.prompts
]
)
return max([get_max_token_count(tokenizer, p, truncate_if_too_long) for p in blend.prompts])
elif type(prompt) is Conjunction:
conjunction: Conjunction = prompt
return sum(
[
get_max_token_count(tokenizer, p, truncate_if_too_long)
for p in conjunction.prompts
]
)
return sum([get_max_token_count(tokenizer, p, truncate_if_too_long) for p in conjunction.prompts])
else:
return len(
get_tokens_for_prompt_object(
tokenizer, prompt, truncate_if_too_long))
return len(get_tokens_for_prompt_object(tokenizer, prompt, truncate_if_too_long))
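`get_max_token_count` recurses with different reducers: a `Blend` reports the longest of its prompts, while a `Conjunction` sums its parts. The same shape in miniature, with stand-in classes and whitespace tokenization instead of compel's:

```python
class Blend:
    def __init__(self, *prompts): self.prompts = prompts

class Conjunction:
    def __init__(self, *prompts): self.prompts = prompts

def max_token_count(node) -> int:
    if isinstance(node, Blend):
        return max(max_token_count(p) for p in node.prompts)
    if isinstance(node, Conjunction):
        return sum(max_token_count(p) for p in node.prompts)
    return len(node.split())  # leaf: a plain prompt string

tree = Conjunction("a cat", Blend("a very fluffy cat", "dog"))
assert max_token_count(tree) == 2 + 4  # sum of parts; the blend takes its max
```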
def get_tokens_for_prompt_object(
tokenizer, parsed_prompt: FlattenedPrompt, truncate_if_too_long=True
) -> List[str]:
def get_tokens_for_prompt_object(tokenizer, parsed_prompt: FlattenedPrompt, truncate_if_too_long=True) -> List[str]:
if type(parsed_prompt) is Blend:
raise ValueError(
"Blend is not supported here - you need to get tokens for each of its .children"
)
raise ValueError("Blend is not supported here - you need to get tokens for each of its .children")
text_fragments = [
x.text
if type(x) is Fragment
else (
" ".join([f.text for f in x.original])
if type(x) is CrossAttentionControlSubstitute
else str(x)
)
else (" ".join([f.text for f in x.original]) if type(x) is CrossAttentionControlSubstitute else str(x))
for x in parsed_prompt.children
]
text = " ".join(text_fragments)
@@ -637,25 +429,17 @@ def get_tokens_for_prompt_object(
return tokens
def log_tokenization_for_conjunction(
c: Conjunction, tokenizer, display_label_prefix=None
):
def log_tokenization_for_conjunction(c: Conjunction, tokenizer, display_label_prefix=None):
display_label_prefix = display_label_prefix or ""
for i, p in enumerate(c.prompts):
if len(c.prompts) > 1:
this_display_label_prefix = f"{display_label_prefix}(conjunction part {i + 1}, weight={c.weights[i]})"
else:
this_display_label_prefix = display_label_prefix
log_tokenization_for_prompt_object(
p,
tokenizer,
display_label_prefix=this_display_label_prefix
)
log_tokenization_for_prompt_object(p, tokenizer, display_label_prefix=this_display_label_prefix)
def log_tokenization_for_prompt_object(
p: Union[Blend, FlattenedPrompt], tokenizer, display_label_prefix=None
):
def log_tokenization_for_prompt_object(p: Union[Blend, FlattenedPrompt], tokenizer, display_label_prefix=None):
display_label_prefix = display_label_prefix or ""
if type(p) is Blend:
blend: Blend = p
@@ -692,13 +476,10 @@ def log_tokenization_for_prompt_object(
)
else:
text = " ".join([x.text for x in flattened_prompt.children])
log_tokenization_for_text(
text, tokenizer, display_label=display_label_prefix
)
log_tokenization_for_text(text, tokenizer, display_label=display_label_prefix)
def log_tokenization_for_text(
text, tokenizer, display_label=None, truncate_if_too_long=False):
def log_tokenization_for_text(text, tokenizer, display_label=None, truncate_if_too_long=False):
"""shows how the prompt is tokenized
# usually tokens have '</w>' to indicate end-of-word,
# but for readability it has been replaced with ' '


@@ -6,88 +6,53 @@ from typing import Dict, List, Literal, Optional, Union
import cv2
import numpy as np
from controlnet_aux import (CannyDetector, ContentShuffleDetector, HEDdetector,
LeresDetector, LineartAnimeDetector,
LineartDetector, MediapipeFaceDetector,
MidasDetector, MLSDdetector, NormalBaeDetector,
OpenposeDetector, PidiNetDetector, SamDetector,
ZoeDetector)
from controlnet_aux import (
CannyDetector,
ContentShuffleDetector,
HEDdetector,
LeresDetector,
LineartAnimeDetector,
LineartDetector,
MediapipeFaceDetector,
MidasDetector,
MLSDdetector,
NormalBaeDetector,
OpenposeDetector,
PidiNetDetector,
SamDetector,
ZoeDetector,
)
from controlnet_aux.util import HWC3, ade_palette
from PIL import Image
from pydantic import BaseModel, Field, validator
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from ...backend.model_management import BaseModelType, ModelType
from ..models.image import ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext)
from .image import ImageOutput, PILInvocationConfig
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
InputField,
Input,
InvocationContext,
OutputField,
UIType,
tags,
title,
)
CONTROLNET_DEFAULT_MODELS = [
###########################################
# lllyasviel sd v1.5, ControlNet v1.0 models
##############################################
"lllyasviel/sd-controlnet-canny",
"lllyasviel/sd-controlnet-depth",
"lllyasviel/sd-controlnet-hed",
"lllyasviel/sd-controlnet-seg",
"lllyasviel/sd-controlnet-openpose",
"lllyasviel/sd-controlnet-scribble",
"lllyasviel/sd-controlnet-normal",
"lllyasviel/sd-controlnet-mlsd",
#############################################
# lllyasviel sd v1.5, ControlNet v1.1 models
#############################################
"lllyasviel/control_v11p_sd15_canny",
"lllyasviel/control_v11p_sd15_openpose",
"lllyasviel/control_v11p_sd15_seg",
# "lllyasviel/control_v11p_sd15_depth", # broken
"lllyasviel/control_v11f1p_sd15_depth",
"lllyasviel/control_v11p_sd15_normalbae",
"lllyasviel/control_v11p_sd15_scribble",
"lllyasviel/control_v11p_sd15_mlsd",
"lllyasviel/control_v11p_sd15_softedge",
"lllyasviel/control_v11p_sd15s2_lineart_anime",
"lllyasviel/control_v11p_sd15_lineart",
"lllyasviel/control_v11p_sd15_inpaint",
# "lllyasviel/control_v11u_sd15_tile",
# problem (temporary?) with huggingface "lllyasviel/control_v11u_sd15_tile",
# so for now replace "lllyasviel/control_v11f1e_sd15_tile",
"lllyasviel/control_v11e_sd15_shuffle",
"lllyasviel/control_v11e_sd15_ip2p",
"lllyasviel/control_v11f1e_sd15_tile",
#################################################
# thibaud sd v2.1 models (ControlNet v1.0? or v1.1?)
##################################################
"thibaud/controlnet-sd21-openpose-diffusers",
"thibaud/controlnet-sd21-canny-diffusers",
"thibaud/controlnet-sd21-depth-diffusers",
"thibaud/controlnet-sd21-scribble-diffusers",
"thibaud/controlnet-sd21-hed-diffusers",
"thibaud/controlnet-sd21-zoedepth-diffusers",
"thibaud/controlnet-sd21-color-diffusers",
"thibaud/controlnet-sd21-openposev2-diffusers",
"thibaud/controlnet-sd21-lineart-diffusers",
"thibaud/controlnet-sd21-normalbae-diffusers",
"thibaud/controlnet-sd21-ade20k-diffusers",
##############################################
# ControlNetMediaPipeface, ControlNet v1.1
##############################################
# ["CrucibleAI/ControlNetMediaPipeFace", "diffusion_sd15"], # SD 1.5
# diffusion_sd15 needs to be passed to from_pretrained() as subfolder arg
# hacked t2l to split to model & subfolder if format is "model,subfolder"
"CrucibleAI/ControlNetMediaPipeFace,diffusion_sd15", # SD 1.5
"CrucibleAI/ControlNetMediaPipeFace", # SD 2.1?
CONTROLNET_MODE_VALUES = Literal["balanced", "more_prompt", "more_control", "unbalanced"]
CONTROLNET_RESIZE_VALUES = Literal[
"just_resize",
"crop_resize",
"fill_resize",
"just_resize_simple",
]
CONTROLNET_NAME_VALUES = Literal[tuple(CONTROLNET_DEFAULT_MODELS)]
CONTROLNET_MODE_VALUES = Literal[tuple(
["balanced", "more_prompt", "more_control", "unbalanced"])]
CONTROLNET_RESIZE_VALUES = Literal[tuple(
["just_resize", "crop_resize", "fill_resize", "just_resize_simple",])]
class ControlNetModelField(BaseModel):
"""ControlNet model field"""
@@ -97,22 +62,17 @@ class ControlNetModelField(BaseModel):
class ControlField(BaseModel):
image: ImageField = Field(default=None, description="The control image")
control_model: Optional[ControlNetModelField] = Field(
default=None, description="The ControlNet model to use")
# control_weight: Optional[float] = Field(default=1, description="weight given to controlnet")
control_weight: Union[float, List[float]] = Field(
default=1, description="The weight given to the ControlNet")
image: ImageField = Field(description="The control image")
control_model: ControlNetModelField = Field(description="The ControlNet model to use")
control_weight: Union[float, List[float]] = Field(default=1, description="The weight given to the ControlNet")
begin_step_percent: float = Field(
default=0, ge=0, le=1,
description="When the ControlNet is first applied (% of total steps)")
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = Field(
default=1, ge=0, le=1,
description="When the ControlNet is last applied (% of total steps)")
control_mode: CONTROLNET_MODE_VALUES = Field(
default="balanced", description="The control mode to use")
resize_mode: CONTROLNET_RESIZE_VALUES = Field(
default="just_resize", description="The resize mode to use")
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
)
control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode to use")
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@validator("control_weight")
def validate_control_weight(cls, v):
@@ -120,65 +80,45 @@ class ControlField(BaseModel):
if isinstance(v, list):
for i in v:
if i < -1 or i > 2:
raise ValueError(
'Control weights must be within -1 to 2 range')
raise ValueError("Control weights must be within -1 to 2 range")
else:
if v < -1 or v > 2:
raise ValueError('Control weights must be within -1 to 2 range')
raise ValueError("Control weights must be within -1 to 2 range")
return v
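The validator accepts either a scalar or a per-step list because `control_weight` is a `Union[float, List[float]]`. A self-contained pydantic v1 model with the same check (names are illustrative):

```python
from typing import List, Union
from pydantic import BaseModel, validator  # pydantic v1 assumed

class Control(BaseModel):
    control_weight: Union[float, List[float]] = 1.0

    @validator("control_weight")
    def weight_in_range(cls, v):
        weights = v if isinstance(v, list) else [v]
        if any(w < -1 or w > 2 for w in weights):
            raise ValueError("Control weights must be within -1 to 2 range")
        return v

Control(control_weight=[0.5, 1.5])  # accepted
try:
    Control(control_weight=3.0)     # rejected by the validator
except ValueError as err:
    print(err)
```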
class Config:
schema_extra = {
"required": ["image", "control_model", "control_weight", "begin_step_percent", "end_step_percent"],
"ui": {
"type_hints": {
"control_weight": "float",
"control_model": "controlnet_model",
# "control_weight": "number",
}
}
}
class ControlOutput(BaseInvocationOutput):
"""node output for ControlNet info"""
# fmt: off
type: Literal["control_output"] = "control_output"
control: ControlField = Field(default=None, description="The control info")
# fmt: on
# Outputs
control: ControlField = OutputField(description=FieldDescriptions.control)
@title("ControlNet")
@tags("controlnet")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
# fmt: off
type: Literal["controlnet"] = "controlnet"
# Inputs
image: ImageField = Field(default=None, description="The control image")
control_model: ControlNetModelField = Field(default="lllyasviel/sd-controlnet-canny",
description="control model used")
control_weight: Union[float, List[float]] = Field(default=1.0, description="The weight given to the ControlNet")
begin_step_percent: float = Field(default=0, ge=-1, le=2,
description="When the ControlNet is first applied (% of total steps)")
end_step_percent: float = Field(default=1, ge=0, le=1,
description="When the ControlNet is last applied (% of total steps)")
control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode used")
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode used")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "ControlNet",
"tags": ["controlnet", "latents"],
"type_hints": {
"model": "model",
"control": "control",
# "cfg_scale": "float",
"cfg_scale": "number",
"control_weight": "float",
}
},
}
type: Literal["controlnet"] = "controlnet"
# Inputs
image: ImageField = InputField(description="The control image")
control_model: ControlNetModelField = InputField(
default="lllyasviel/sd-controlnet-canny", description=FieldDescriptions.controlnet_model, input=Input.Direct
)
control_weight: Union[float, List[float]] = InputField(
default=1.0, description="The weight given to the ControlNet", ui_type=UIType.Float
)
begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
)
control_mode: CONTROLNET_MODE_VALUES = InputField(default="balanced", description="The control mode used")
resize_mode: CONTROLNET_RESIZE_VALUES = InputField(default="just_resize", description="The resize mode used")
def invoke(self, context: InvocationContext) -> ControlOutput:
return ControlOutput(
@@ -194,22 +134,13 @@ class ControlNetInvocation(BaseInvocation):
)
class ImageProcessorInvocation(BaseInvocation, PILInvocationConfig):
class ImageProcessorInvocation(BaseInvocation):
"""Base class for invocations that preprocess images for ControlNet"""
# fmt: off
type: Literal["image_processor"] = "image_processor"
# Inputs
image: ImageField = Field(default=None, description="The image to process")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Image Processor",
"tags": ["image", "processor"]
},
}
# Inputs
image: ImageField = InputField(description="The image to process")
def run_processor(self, image):
# superclass just passes through image without processing
@@ -233,7 +164,7 @@ class ImageProcessorInvocation(BaseInvocation, PILInvocationConfig):
image_category=ImageCategory.CONTROL,
session_id=context.graph_execution_state_id,
node_id=self.id,
is_intermediate=self.is_intermediate
is_intermediate=self.is_intermediate,
)
"""Builds an ImageOutput and its ImageField"""
@@ -248,405 +179,319 @@ class ImageProcessorInvocation(BaseInvocation, PILInvocationConfig):
)
class CannyImageProcessorInvocation(
ImageProcessorInvocation, PILInvocationConfig):
@title("Canny Processor")
@tags("controlnet", "canny")
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
# fmt: off
type: Literal["canny_image_processor"] = "canny_image_processor"
# Input
low_threshold: int = Field(default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)")
high_threshold: int = Field(default=200, ge=0, le=255, description="The high threshold of the Canny pixel gradient (0-255)")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Canny Processor",
"tags": ["controlnet", "canny", "image", "processor"]
},
}
type: Literal["canny_image_processor"] = "canny_image_processor"
# Input
low_threshold: int = InputField(
default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)"
)
high_threshold: int = InputField(
default=200, ge=0, le=255, description="The high threshold of the Canny pixel gradient (0-255)"
)
def run_processor(self, image):
canny_processor = CannyDetector()
processed_image = canny_processor(
image, self.low_threshold, self.high_threshold)
processed_image = canny_processor(image, self.low_threshold, self.high_threshold)
return processed_image
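`CannyDetector` is stateless, so the processor simply forwards the two hysteresis thresholds: gradients above `high_threshold` are edges, gradients below `low_threshold` are discarded, and in-between pixels survive only when connected to a strong edge. A usage sketch (assumes `controlnet_aux` is installed and an `input.png` exists):

```python
from controlnet_aux import CannyDetector
from PIL import Image

image = Image.open("input.png")
edges = CannyDetector()(image, 100, 200)  # low_threshold=100, high_threshold=200
edges.save("canny.png")
```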
class HedImageProcessorInvocation(
ImageProcessorInvocation, PILInvocationConfig):
@title("HED (softedge) Processor")
@tags("controlnet", "hed", "softedge")
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
# fmt: off
type: Literal["hed_image_processor"] = "hed_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
# safe not supported in controlnet_aux v0.0.3
# safe: bool = Field(default=False, description="whether to use safe mode")
scribble: bool = Field(default=False, description="Whether to use scribble mode")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Softedge(HED) Processor",
"tags": ["controlnet", "softedge", "hed", "image", "processor"]
},
}
type: Literal["hed_image_processor"] = "hed_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
# safe not supported in controlnet_aux v0.0.3
# safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
def run_processor(self, image):
hed_processor = HEDdetector.from_pretrained("lllyasviel/Annotators")
processed_image = hed_processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
# safe not supported in controlnet_aux v0.0.3
# safe=self.safe,
scribble=self.scribble,
)
return processed_image
class LineartImageProcessorInvocation(
ImageProcessorInvocation, PILInvocationConfig):
"""Applies line art processing to image"""
# fmt: off
type: Literal["lineart_image_processor"] = "lineart_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
coarse: bool = Field(default=False, description="Whether to use coarse mode")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Lineart Processor",
"tags": ["controlnet", "lineart", "image", "processor"]
},
}
def run_processor(self, image):
lineart_processor = LineartDetector.from_pretrained(
"lllyasviel/Annotators")
processed_image = lineart_processor(
image, detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution, coarse=self.coarse)
return processed_image
class LineartAnimeImageProcessorInvocation(
ImageProcessorInvocation, PILInvocationConfig):
"""Applies line art anime processing to image"""
# fmt: off
type: Literal["lineart_anime_image_processor"] = "lineart_anime_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Lineart Anime Processor",
"tags": ["controlnet", "lineart", "anime", "image", "processor"]
},
}
def run_processor(self, image):
processor = LineartAnimeDetector.from_pretrained(
"lllyasviel/Annotators")
processed_image = processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
)
return processed_image
class OpenposeImageProcessorInvocation(
ImageProcessorInvocation, PILInvocationConfig):
"""Applies Openpose processing to image"""
# fmt: off
type: Literal["openpose_image_processor"] = "openpose_image_processor"
# Inputs
hand_and_face: bool = Field(default=False, description="Whether to use hands and face mode")
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Openpose Processor",
"tags": ["controlnet", "openpose", "image", "processor"]
},
}
def run_processor(self, image):
openpose_processor = OpenposeDetector.from_pretrained(
"lllyasviel/Annotators")
processed_image = openpose_processor(
image, detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
hand_and_face=self.hand_and_face,)
processed_image = hed_processor(
image,
detect_resolution=self.detect_resolution,
# safe not supported in controlnet_aux v0.0.3
# safe=self.safe,
scribble=self.scribble,
)
return processed_image
class MidasDepthImageProcessorInvocation(
ImageProcessorInvocation, PILInvocationConfig):
"""Applies Midas depth processing to image"""
# fmt: off
type: Literal["midas_depth_image_processor"] = "midas_depth_image_processor"
# Inputs
a_mult: float = Field(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = Field(default=0.1, ge=0, description="Midas parameter `bg_th`")
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = Field(default=False, description="whether to use depth and normal mode")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Midas (Depth) Processor",
"tags": ["controlnet", "midas", "depth", "image", "processor"]
},
}
@title("Lineart Processor")
@tags("controlnet", "lineart")
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
type: Literal["lineart_image_processor"] = "lineart_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
coarse: bool = InputField(default=False, description="Whether to use coarse mode")
def run_processor(self, image):
lineart_processor = LineartDetector.from_pretrained("lllyasviel/Annotators")
processed_image = lineart_processor(
image, detect_resolution=self.detect_resolution, image_resolution=self.image_resolution, coarse=self.coarse
)
return processed_image
@title("Lineart Anime Processor")
@tags("controlnet", "lineart", "anime")
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
type: Literal["lineart_anime_image_processor"] = "lineart_anime_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
processor = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
)
return processed_image
@title("Openpose Processor")
@tags("controlnet", "openpose", "pose")
class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Openpose processing to image"""
type: Literal["openpose_image_processor"] = "openpose_image_processor"
# Inputs
hand_and_face: bool = InputField(default=False, description="Whether to use hands and face mode")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
openpose_processor = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = openpose_processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
hand_and_face=self.hand_and_face,
)
return processed_image
@title("Midas (Depth) Processor")
@tags("controlnet", "midas", "depth")
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
type: Literal["midas_depth_image_processor"] = "midas_depth_image_processor"
# Inputs
a_mult: float = InputField(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = InputField(default=0.1, ge=0, description="Midas parameter `bg_th`")
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = InputField(default=False, description="whether to use depth and normal mode")
def run_processor(self, image):
midas_processor = MidasDetector.from_pretrained("lllyasviel/Annotators")
processed_image = midas_processor(image,
a=np.pi * self.a_mult,
bg_th=self.bg_th,
# dept_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal=self.depth_and_normal,
)
processed_image = midas_processor(
image,
a=np.pi * self.a_mult,
bg_th=self.bg_th,
# dept_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal=self.depth_and_normal,
)
return processed_image
class NormalbaeImageProcessorInvocation(
ImageProcessorInvocation, PILInvocationConfig):
@title("Normal BAE Processor")
@tags("controlnet", "normal", "bae")
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
# fmt: off
type: Literal["normalbae_image_processor"] = "normalbae_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Normal BAE Processor",
"tags": ["controlnet", "normal", "bae", "image", "processor"]
},
}
type: Literal["normalbae_image_processor"] = "normalbae_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
normalbae_processor = NormalBaeDetector.from_pretrained(
"lllyasviel/Annotators")
normalbae_processor = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = normalbae_processor(
image, detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution)
image, detect_resolution=self.detect_resolution, image_resolution=self.image_resolution
)
return processed_image
class MlsdImageProcessorInvocation(
ImageProcessorInvocation, PILInvocationConfig):
@title("MLSD Processor")
@tags("controlnet", "mlsd")
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
# fmt: off
type: Literal["mlsd_image_processor"] = "mlsd_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
thr_v: float = Field(default=0.1, ge=0, description="MLSD parameter `thr_v`")
thr_d: float = Field(default=0.1, ge=0, description="MLSD parameter `thr_d`")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "MLSD Processor",
"tags": ["controlnet", "mlsd", "image", "processor"]
},
}
type: Literal["mlsd_image_processor"] = "mlsd_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
thr_v: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_v`")
thr_d: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_d`")
def run_processor(self, image):
mlsd_processor = MLSDdetector.from_pretrained("lllyasviel/Annotators")
processed_image = mlsd_processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
thr_v=self.thr_v,
thr_d=self.thr_d,
)
return processed_image
@title("PIDI Processor")
@tags("controlnet", "pidi")
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
type: Literal["pidi_image_processor"] = "pidi_image_processor"

# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
def run_processor(self, image):
pidi_processor = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
processed_image = pidi_processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
safe=self.safe,
scribble=self.scribble,
)
return processed_image
@title("Content Shuffle Processor")
@tags("controlnet", "contentshuffle")
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
type: Literal["content_shuffle_image_processor"] = "content_shuffle_image_processor"

# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
h: Optional[int] = InputField(default=512, ge=0, description="Content shuffle `h` parameter")
w: Optional[int] = InputField(default=512, ge=0, description="Content shuffle `w` parameter")
f: Optional[int] = InputField(default=256, ge=0, description="Content shuffle `f` parameter")
def run_processor(self, image):
content_shuffle_processor = ContentShuffleDetector()
processed_image = content_shuffle_processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
h=self.h,
w=self.w,
f=self.f,
)
return processed_image
# should work with controlnet_aux >= 0.0.4 and timm <= 0.6.13
@title("Zoe (Depth) Processor")
@tags("controlnet", "zoe", "depth")
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
type: Literal["zoe_depth_image_processor"] = "zoe_depth_image_processor"
def run_processor(self, image):
zoe_depth_processor = ZoeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = zoe_depth_processor(image)
return processed_image
@title("Mediapipe Face Processor")
@tags("controlnet", "mediapipe", "face")
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
type: Literal["mediapipe_face_processor"] = "mediapipe_face_processor"

# Inputs
max_faces: int = InputField(default=1, ge=1, description="Maximum number of faces to detect")
min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
def run_processor(self, image):
# MediaPipeFaceDetector throws an error if image has alpha channel
# so convert to RGB if needed
if image.mode == "RGBA":
image = image.convert("RGB")
mediapipe_face_processor = MediapipeFaceDetector()
processed_image = mediapipe_face_processor(image, max_faces=self.max_faces, min_confidence=self.min_confidence)
return processed_image
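# A minimal standalone sketch (not part of this diff) of exercising the
# detector outside the invocation graph; assumes controlnet_aux is installed
# and a local face.png exists:
#
#   from PIL import Image
#   from controlnet_aux import MediapipeFaceDetector
#
#   detector = MediapipeFaceDetector()
#   control = detector(Image.open("face.png").convert("RGB"), max_faces=1, min_confidence=0.5)
#   control.save("face_control.png")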
@title("Leres (Depth) Processor")
@tags("controlnet", "leres", "depth")
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
type: Literal["leres_image_processor"] = "leres_image_processor"

# Inputs
thr_a: float = InputField(default=0, description="Leres parameter `thr_a`")
thr_b: float = InputField(default=0, description="Leres parameter `thr_b`")
boost: bool = InputField(default=False, description="Whether to use boost mode")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
leres_processor = LeresDetector.from_pretrained("lllyasviel/Annotators")
processed_image = leres_processor(
image,
thr_a=self.thr_a,
thr_b=self.thr_b,
boost=self.boost,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
)
return processed_image
@title("Tile Resample Processor")
@tags("controlnet", "tile")
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
type: Literal["tile_image_processor"] = "tile_image_processor"

# Inputs
# res: int = InputField(default=512, ge=0, le=1024, description="The pixel resolution for each tile")
down_sampling_rate: float = InputField(default=1.0, ge=1.0, le=8.0, description="Down sampling rate")
# tile_resample copied from sd-webui-controlnet/scripts/processor.py
def tile_resample(
self,
np_img: np.ndarray,
res=512,  # never used?
down_sampling_rate=1.0,
):
np_img = HWC3(np_img)
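# HWC3 (a controlnet_aux utility) normalizes the input to a 3-channel uint8
# array; rates below 1.1 are treated as a no-op to skip a lossy resize.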
if down_sampling_rate < 1.1:
return np_img
@@ -658,36 +503,33 @@ class TileResamplerProcessorInvocation(
def run_processor(self, img):
np_img = np.array(img, dtype=np.uint8)
processed_np_image = self.tile_resample(
np_img,
# res=self.tile_size,
down_sampling_rate=self.down_sampling_rate,
)
processed_image = Image.fromarray(processed_np_image)
return processed_image
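# For intuition (a hedged sketch, not the implementation elided above):
# down-sampling by rate k shrinks each side to 1/k before the image is fed
# to the tile ControlNet, e.g. with PIL:
#
#   w, h = img.size
#   small = img.resize((int(w / k), int(h / k)), Image.BICUBIC)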
@title("Segment Anything Processor")
@tags("controlnet", "segmentanything")
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
type: Literal["segment_anything_processor"] = "segment_anything_processor"
def run_processor(self, image):
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
segment_anything_processor = SamDetectorReproducibleColors.from_pretrained(
"ybelkada/segment-anything", subfolder="checkpoints"
)
np_img = np.array(image, dtype=np.uint8)
processed_image = segment_anything_processor(np_img)
return processed_image
class SamDetectorReproducibleColors(SamDetector):
# overriding SamDetector.show_anns() method to use reproducible colors for segmentation image
# base class show_anns() method randomizes colors,
# which seems to also lead to non-reproducible image generation
@@ -695,19 +537,15 @@ class SamDetectorReproducibleColors(SamDetector):
def show_anns(self, anns: List[Dict]):
if len(anns) == 0:
return
sorted_anns = sorted(anns, key=(lambda x: x["area"]), reverse=True)
h, w = anns[0]["segmentation"].shape
final_img = Image.fromarray(np.zeros((h, w, 3), dtype=np.uint8), mode="RGB")
palette = ade_palette()
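# ade_palette() is a fixed color table (the ADE20K palette), so region i
# always receives the same color across runs (modulo the palette size).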
for i, ann in enumerate(sorted_anns):
m = ann["segmentation"]
img = np.empty((m.shape[0], m.shape[1], 3), dtype=np.uint8)
# doing modulo just in case number of annotated regions exceeds number of colors in palette
ann_color = palette[i % len(palette)]
img[:, :] = ann_color
final_img.paste(Image.fromarray(img, mode="RGB"), (0, 0), Image.fromarray(np.uint8(m * 255)))
return np.array(final_img, dtype=np.uint8)


@@ -5,43 +5,22 @@ from typing import Literal
import cv2 as cv
import numpy
from PIL import Image, ImageOps
from invokeai.app.invocations.primitives import ImageField, ImageOutput

from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, tags, title
@title("OpenCV Inpaint")
@tags("opencv", "inpaint")
class CvInpaintInvocation(BaseInvocation):
"""Simple inpaint using opencv."""
type: Literal["cv_inpaint"] = "cv_inpaint"

# Inputs
image: ImageField = InputField(description="The image to inpaint")
mask: ImageField = InputField(description="The mask to use when inpainting")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)


@@ -1,254 +0,0 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from functools import partial
from typing import Literal, Optional, get_args
import torch
from pydantic import Field
from invokeai.app.models.image import (ColorField, ImageCategory, ImageField,
ResourceOrigin)
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from invokeai.backend.generator.inpaint import infill_methods
from ...backend.generator import Inpaint, InvokeAIGenerator
from ...backend.stable_diffusion import PipelineIntermediateState
from ..util.step_callback import stable_diffusion_step_callback
from .baseinvocation import BaseInvocation, InvocationConfig, InvocationContext
from .image import ImageOutput
from ...backend.model_management.lora import ModelPatcher
from ...backend.stable_diffusion.diffusers_pipeline import StableDiffusionGeneratorPipeline
from .model import UNetField, VaeField
from .compel import ConditioningField
from contextlib import contextmanager, ExitStack, ContextDecorator
SAMPLER_NAME_VALUES = Literal[tuple(InvokeAIGenerator.schedulers())]
INFILL_METHODS = Literal[tuple(infill_methods())]
DEFAULT_INFILL_METHOD = (
"patchmatch" if "patchmatch" in get_args(INFILL_METHODS) else "tile"
)
from .latent import get_scheduler
class OldModelContext(ContextDecorator):
model: StableDiffusionGeneratorPipeline
def __init__(self, model):
self.model = model
def __enter__(self):
return self.model
def __exit__(self, *exc):
return False
class OldModelInfo:
name: str
hash: str
context: OldModelContext
def __init__(self, name: str, hash: str, model: StableDiffusionGeneratorPipeline):
self.name = name
self.hash = hash
self.context = OldModelContext(
model=model,
)
class InpaintInvocation(BaseInvocation):
"""Generates an image using inpaint."""
type: Literal["inpaint"] = "inpaint"
positive_conditioning: Optional[ConditioningField] = Field(description="Positive conditioning for generation")
negative_conditioning: Optional[ConditioningField] = Field(description="Negative conditioning for generation")
seed: int = Field(ge=0, le=SEED_MAX, description="The seed to use (omit for random)", default_factory=get_random_seed)
steps: int = Field(default=30, gt=0, description="The number of steps to use to generate the image")
width: int = Field(default=512, multiple_of=8, gt=0, description="The width of the resulting image", )
height: int = Field(default=512, multiple_of=8, gt=0, description="The height of the resulting image", )
cfg_scale: float = Field(default=7.5, ge=1, description="The Classifier-Free Guidance, higher values may result in a result closer to the prompt", )
scheduler: SAMPLER_NAME_VALUES = Field(default="euler", description="The scheduler to use" )
unet: UNetField = Field(default=None, description="UNet model")
vae: VaeField = Field(default=None, description="Vae model")
# Inputs
image: Optional[ImageField] = Field(description="The input image")
strength: float = Field(
default=0.75, gt=0, le=1, description="The strength of the original image"
)
fit: bool = Field(
default=True,
description="Whether or not the result should be fit to the aspect ratio of the input image",
)
# Inputs
mask: Optional[ImageField] = Field(description="The mask")
seam_size: int = Field(default=96, ge=1, description="The seam inpaint size (px)")
seam_blur: int = Field(
default=16, ge=0, description="The seam inpaint blur radius (px)"
)
seam_strength: float = Field(
default=0.75, gt=0, le=1, description="The seam inpaint strength"
)
seam_steps: int = Field(
default=30, ge=1, description="The number of steps to use for seam inpaint"
)
tile_size: int = Field(
default=32, ge=1, description="The tile infill method size (px)"
)
infill_method: INFILL_METHODS = Field(
default=DEFAULT_INFILL_METHOD,
description="The method used to infill empty regions (px)",
)
inpaint_width: Optional[int] = Field(
default=None,
multiple_of=8,
gt=0,
description="The width of the inpaint region (px)",
)
inpaint_height: Optional[int] = Field(
default=None,
multiple_of=8,
gt=0,
description="The height of the inpaint region (px)",
)
inpaint_fill: Optional[ColorField] = Field(
default=ColorField(r=127, g=127, b=127, a=255),
description="The solid infill method color",
)
inpaint_replace: float = Field(
default=0.0,
ge=0.0,
le=1.0,
description="The amount by which to replace masked areas with latent noise",
)
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["stable-diffusion", "image"],
"title": "Inpaint"
},
}
def dispatch_progress(
self,
context: InvocationContext,
source_node_id: str,
intermediate_state: PipelineIntermediateState,
) -> None:
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
node=self.dict(),
source_node_id=source_node_id,
)
def get_conditioning(self, context, unet):
positive_cond_data = context.services.latents.get(self.positive_conditioning.conditioning_name)
c = positive_cond_data.conditionings[0].embeds.to(device=unet.device, dtype=unet.dtype)
extra_conditioning_info = positive_cond_data.conditionings[0].extra_conditioning
negative_cond_data = context.services.latents.get(self.negative_conditioning.conditioning_name)
uc = negative_cond_data.conditionings[0].embeds.to(device=unet.device, dtype=unet.dtype)
return (uc, c, extra_conditioning_info)
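# Note: the (uc, c, extra) ordering follows the legacy generator API, which
# expects the unconditioned embedding first.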
@contextmanager
def load_model_old_way(self, context, scheduler):
def _lora_loader():
for lora in self.unet.loras:
lora_info = context.services.model_manager.get_model(
**lora.dict(exclude={"weight"}), context=context,)
yield (lora_info.context.model, lora.weight)
del lora_info
return
unet_info = context.services.model_manager.get_model(**self.unet.unet.dict(), context=context,)
vae_info = context.services.model_manager.get_model(**self.vae.vae.dict(), context=context,)
with vae_info as vae,\
ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),\
unet_info as unet:
device = context.services.model_manager.mgr.cache.execution_device
dtype = context.services.model_manager.mgr.cache.precision
pipeline = StableDiffusionGeneratorPipeline(
vae=vae,
text_encoder=None,
tokenizer=None,
unet=unet,
scheduler=scheduler,
safety_checker=None,
feature_extractor=None,
requires_safety_checker=False,
precision="float16" if dtype == torch.float16 else "float32",
execution_device=device,
)
yield OldModelInfo(
name=self.unet.unet.model_name,
hash="<NO-HASH>",
model=pipeline,
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = (
None
if self.image is None
else context.services.images.get_pil_image(self.image.image_name)
)
mask = (
None
if self.mask is None
else context.services.images.get_pil_image(self.mask.image_name)
)
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(
context.graph_execution_state_id
)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
)
with self.load_model_old_way(context, scheduler) as model:
conditioning = self.get_conditioning(context, model.context.model.unet)
outputs = Inpaint(model).generate(
conditioning=conditioning,
scheduler=scheduler,
init_image=image,
mask_image=mask,
step_callback=partial(self.dispatch_progress, context, source_node_id),
**self.dict(
exclude={"positive_conditioning", "negative_conditioning", "scheduler", "image", "mask"}
), # Shorthand for passing all of the parameters above manually
)
# Outputs is an infinite iterator that will return a new InvokeAIGeneratorOutput object
# each time it is called. We only need the first one.
generator_output = next(outputs)
image_dto = context.services.images.create(
image=generator_output.image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
session_id=context.graph_execution_state_id,
node_id=self.id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)


@@ -1,112 +1,31 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from pathlib import Path
from typing import Literal, Optional
import cv2
import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps

from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker

from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, tags, title
@title("Show Image")
@tags("image")
class ShowImageInvocation(BaseInvocation):
"""Displays a provided image, and passes it forward in the pipeline."""
# Metadata
type: Literal["show_image"] = "show_image"

# Inputs
image: ImageField = InputField(description="The image to show")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@@ -122,34 +41,25 @@ class ShowImageInvocation(BaseInvocation):
)
@title("Crop Image")
@tags("image", "crop")
class ImageCropInvocation(BaseInvocation):
"""Crops an image to a specified box. The box can be outside of the image."""
# Metadata
type: Literal["img_crop"] = "img_crop"

# Inputs
image: ImageField = InputField(description="The image to crop")
x: int = InputField(default=0, description="The left x coordinate of the crop rectangle")
y: int = InputField(default=0, description="The top y coordinate of the crop rectangle")
width: int = InputField(default=512, gt=0, description="The width of the crop rectangle")
height: int = InputField(default=512, gt=0, description="The height of the crop rectangle")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_crop = Image.new(mode="RGBA", size=(self.width, self.height), color=(0, 0, 0, 0))
image_crop.paste(image, (-self.x, -self.y))
image_dto = context.services.images.create(
@@ -168,38 +78,31 @@ class ImageCropInvocation(BaseInvocation, PILInvocationConfig):
)
@title("Paste Image")
@tags("image", "paste")
class ImagePasteInvocation(BaseInvocation):
"""Pastes an image into another image."""
# Metadata
type: Literal["img_paste"] = "img_paste"

# Inputs
base_image: ImageField = InputField(description="The base image")
image: ImageField = InputField(description="The image to paste")
mask: Optional[ImageField] = InputField(
default=None,
description="The mask to use when pasting",
)
x: int = InputField(default=0, description="The left x coordinate at which to paste the image")
y: int = InputField(default=0, description="The top y coordinate at which to paste the image")
def invoke(self, context: InvocationContext) -> ImageOutput:
base_image = context.services.images.get_pil_image(self.base_image.image_name)
image = context.services.images.get_pil_image(self.image.image_name)
mask = None
if self.mask is not None:
mask = context.services.images.get_pil_image(self.mask.image_name)
mask = ImageOps.invert(mask.convert("L"))
# TODO: probably shouldn't invert mask here... should user be required to do it?
min_x = min(0, self.x)
@@ -207,9 +110,7 @@ class ImagePasteInvocation(BaseInvocation, PILInvocationConfig):
max_x = max(base_image.width, image.width + self.x)
max_y = max(base_image.height, image.height + self.y)
new_image = Image.new(mode="RGBA", size=(max_x - min_x, max_y - min_y), color=(0, 0, 0, 0))
new_image.paste(base_image, (abs(min_x), abs(min_y)))
new_image.paste(image, (max(0, self.x), max(0, self.y)), mask=mask)
@@ -229,26 +130,19 @@ class ImagePasteInvocation(BaseInvocation, PILInvocationConfig):
)
@title("Mask from Alpha")
@tags("image", "mask")
class MaskFromAlphaInvocation(BaseInvocation):
"""Extracts the alpha channel of an image as a mask."""
# Metadata
type: Literal["tomask"] = "tomask"

# Inputs
image: ImageField = InputField(description="The image to create the mask from")
invert: bool = InputField(default=False, description="Whether or not to invert the mask")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_mask = image.split()[-1]
@@ -264,31 +158,24 @@ class MaskFromAlphaInvocation(BaseInvocation, PILInvocationConfig):
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
@title("Multiply Images")
@tags("image", "multiply")
class ImageMultiplyInvocation(BaseInvocation):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
# Metadata
type: Literal["img_mul"] = "img_mul"

# Inputs
image1: ImageField = InputField(description="The first image to multiply")
image2: ImageField = InputField(description="The second image to multiply")
def invoke(self, context: InvocationContext) -> ImageOutput:
image1 = context.services.images.get_pil_image(self.image1.image_name)
@@ -315,24 +202,17 @@ class ImageMultiplyInvocation(BaseInvocation, PILInvocationConfig):
IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
@title("Extract Image Channel")
@tags("image", "channel")
class ImageChannelInvocation(BaseInvocation):
"""Gets a channel from an image."""
# Metadata
type: Literal["img_chan"] = "img_chan"

# Inputs
image: ImageField = InputField(description="The image to get the channel from")
channel: IMAGE_CHANNELS = InputField(default="A", description="The channel to get")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@@ -358,24 +238,17 @@ class ImageChannelInvocation(BaseInvocation, PILInvocationConfig):
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
@title("Convert Image Mode")
@tags("image", "convert")
class ImageConvertInvocation(BaseInvocation):
"""Converts an image to a different mode."""
# Metadata
type: Literal["img_conv"] = "img_conv"

# Inputs
image: ImageField = InputField(description="The image to convert")
mode: IMAGE_MODES = InputField(default="L", description="The mode to convert to")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@@ -398,33 +271,25 @@ class ImageConvertInvocation(BaseInvocation, PILInvocationConfig):
)
@title("Blur Image")
@tags("image", "blur")
class ImageBlurInvocation(BaseInvocation):
"""Blurs an image"""
# Metadata
type: Literal["img_blur"] = "img_blur"

# Inputs
image: ImageField = InputField(description="The image to blur")
radius: float = InputField(default=8.0, ge=0, description="The blur radius")
# Metadata
blur_type: Literal["gaussian", "box"] = InputField(default="gaussian", description="The type of blur")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
blur = (
ImageFilter.GaussianBlur(self.radius) if self.blur_type == "gaussian" else ImageFilter.BoxBlur(self.radius)
)
blur_image = image.filter(blur)
@@ -464,26 +329,19 @@ PIL_RESAMPLING_MAP = {
}
@title("Resize Image")
@tags("image", "resize")
class ImageResizeInvocation(BaseInvocation):
"""Resizes an image to specific dimensions"""
# Metadata
type: Literal["img_resize"] = "img_resize"

# Inputs
image: ImageField = InputField(description="The image to resize")
width: int = InputField(default=512, ge=64, multiple_of=8, description="The width to resize to (px)")
height: int = InputField(default=512, ge=64, multiple_of=8, description="The height to resize to (px)")
resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@@ -511,25 +369,22 @@ class ImageResizeInvocation(BaseInvocation, PILInvocationConfig):
)
@title("Scale Image")
@tags("image", "scale")
class ImageScaleInvocation(BaseInvocation):
"""Scales an image by a factor"""
# Metadata
type: Literal["img_scale"] = "img_scale"

# Inputs
image: ImageField = InputField(description="The image to scale")
scale_factor: float = InputField(
default=2.0,
gt=0,
description="The factor by which to scale the image",
)
resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@@ -559,31 +414,24 @@ class ImageScaleInvocation(BaseInvocation, PILInvocationConfig):
)
@title("Lerp Image")
@tags("image", "lerp")
class ImageLerpInvocation(BaseInvocation):
"""Linear interpolation of all pixels of an image"""
# Metadata
type: Literal["img_lerp"] = "img_lerp"

# Inputs
image: ImageField = InputField(description="The image to lerp")
min: int = InputField(default=0, ge=0, le=255, description="The minimum output value")
max: int = InputField(default=255, ge=0, le=255, description="The maximum output value")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_arr = numpy.asarray(image, dtype=numpy.float32) / 255
image_arr = image_arr * (self.max - self.min) + self.min
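# Lerp: a pixel normalized to 0..1 maps linearly onto [min, max].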
lerp_image = Image.fromarray(numpy.uint8(image_arr))
@@ -603,36 +451,24 @@ class ImageLerpInvocation(BaseInvocation, PILInvocationConfig):
)
@title("Inverse Lerp Image")
@tags("image", "ilerp")
class ImageInverseLerpInvocation(BaseInvocation):
"""Inverse linear interpolation of all pixels of an image"""
# Metadata
type: Literal["img_ilerp"] = "img_ilerp"

# Inputs
image: ImageField = InputField(description="The image to lerp")
min: int = InputField(default=0, ge=0, le=255, description="The minimum input value")
max: int = InputField(default=255, ge=0, le=255, description="The maximum input value")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_arr = numpy.asarray(image, dtype=numpy.float32)
image_arr = numpy.minimum(numpy.maximum(image_arr - self.min, 0) / float(self.max - self.min), 1) * 255
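# Inverse lerp: values <= min map to 0, values >= max map to 255, with a
# linear ramp in between.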
ilerp_image = Image.fromarray(numpy.uint8(image_arr))
@@ -650,3 +486,418 @@ class ImageInverseLerpInvocation(BaseInvocation, PILInvocationConfig):
width=image_dto.width,
height=image_dto.height,
)
@title("Blur NSFW Image")
@tags("image", "nsfw")
class ImageNSFWBlurInvocation(BaseInvocation):
"""Add blur to NSFW-flagged images"""
# Metadata
type: Literal["img_nsfw"] = "img_nsfw"
# Inputs
image: ImageField = InputField(description="The image to check")
metadata: Optional[CoreMetadata] = InputField(
default=None, description=FieldDescriptions.core_metadata, ui_hidden=True
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
logger = context.services.logger
logger.debug("Running NSFW checker")
if SafetyChecker.has_nsfw_concept(image):
logger.info("A potentially NSFW image has been detected. Image will be blurred.")
blurry_image = image.filter(filter=ImageFilter.GaussianBlur(radius=32))
caution = self._get_caution_img()
blurry_image.paste(caution, (0, 0), caution)
image = blurry_image
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
def _get_caution_img(self) -> Image:
import invokeai.app.assets.images as image_assets
caution = Image.open(Path(image_assets.__path__[0]) / "caution.png")
return caution.resize((caution.width // 2, caution.height // 2))
@title("Add Invisible Watermark")
@tags("image", "watermark")
class ImageWatermarkInvocation(BaseInvocation):
"""Add an invisible watermark to an image"""
# Metadata
type: Literal["img_watermark"] = "img_watermark"
# Inputs
image: ImageField = InputField(description="The image to check")
text: str = InputField(default="InvokeAI", description="Watermark text")
metadata: Optional[CoreMetadata] = InputField(
default=None, description=FieldDescriptions.core_metadata, ui_hidden=True
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
new_image = InvisibleWatermark.add_watermark(image, self.text)
image_dto = context.services.images.create(
image=new_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
@title("Mask Edge")
@tags("image", "mask", "inpaint")
class MaskEdgeInvocation(BaseInvocation):
"""Applies an edge mask to an image"""
type: Literal["mask_edge"] = "mask_edge"
# Inputs
image: ImageField = InputField(description="The image to apply the mask to")
edge_size: int = InputField(description="The size of the edge")
edge_blur: int = InputField(description="The amount of blur on the edge")
low_threshold: int = InputField(description="First threshold for the hysteresis procedure in Canny edge detection")
high_threshold: int = InputField(
description="Second threshold for the hysteresis procedure in Canny edge detection"
)
def invoke(self, context: InvocationContext) -> ImageOutput:
mask = context.services.images.get_pil_image(self.image.image_name)
npimg = numpy.asarray(mask, dtype=numpy.uint8)
npgradient = numpy.uint8(255 * (1.0 - numpy.floor(numpy.abs(0.5 - numpy.float32(npimg) / 255.0) * 2.0)))
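# npgradient maps mid-gray pixels (near 0.5 after normalization) to 255 and
# pure black/white pixels to 0, isolating the mask's soft transition band.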
npedge = cv2.Canny(npimg, threshold1=self.low_threshold, threshold2=self.high_threshold)
npmask = npgradient + npedge
npmask = cv2.dilate(npmask, numpy.ones((3, 3), numpy.uint8), iterations=int(self.edge_size / 2))
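# Each dilation pass with a 3x3 kernel grows the band by about 1px per side,
# so edge_size/2 iterations yield a band roughly edge_size pixels wide.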
new_mask = Image.fromarray(npmask)
if self.edge_blur > 0:
new_mask = new_mask.filter(ImageFilter.BoxBlur(self.edge_blur))
new_mask = ImageOps.invert(new_mask)
image_dto = context.services.images.create(
image=new_mask,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.MASK,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
@title("Combine Mask")
@tags("image", "mask", "multiply")
class MaskCombineInvocation(BaseInvocation):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
type: Literal["mask_combine"] = "mask_combine"
# Inputs
mask1: ImageField = InputField(description="The first mask to combine")
mask2: ImageField = InputField(description="The second image to combine")
def invoke(self, context: InvocationContext) -> ImageOutput:
mask1 = context.services.images.get_pil_image(self.mask1.image_name).convert("L")
mask2 = context.services.images.get_pil_image(self.mask2.image_name).convert("L")
combined_mask = ImageChops.multiply(mask1, mask2)
image_dto = context.services.images.create(
image=combined_mask,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
@title("Color Correct")
@tags("image", "color")
class ColorCorrectInvocation(BaseInvocation):
"""
Shifts the colors of a target image to match the reference image, optionally
using a mask to only color-correct certain regions of the target image.
"""
type: Literal["color_correct"] = "color_correct"
# Inputs
image: ImageField = InputField(description="The image to color-correct")
reference: ImageField = InputField(description="Reference image for color-correction")
mask: Optional[ImageField] = InputField(default=None, description="Mask to use when applying color-correction")
mask_blur_radius: float = InputField(default=8, description="Mask blur radius")
def invoke(self, context: InvocationContext) -> ImageOutput:
pil_init_mask = None
if self.mask is not None:
pil_init_mask = context.services.images.get_pil_image(self.mask.image_name).convert("L")
init_image = context.services.images.get_pil_image(self.reference.image_name)
result = context.services.images.get_pil_image(self.image.image_name).convert("RGBA")
# if init_image is None or init_mask is None:
# return result
# Get the original alpha channel of the mask if there is one.
# Otherwise it is some other black/white image format ('1', 'L' or 'RGB')
# pil_init_mask = (
# init_mask.getchannel("A")
# if init_mask.mode == "RGBA"
# else init_mask.convert("L")
# )
pil_init_image = init_image.convert("RGBA") # Add an alpha channel if one doesn't exist
# Build an image with only visible pixels from source to use as reference for color-matching.
init_rgb_pixels = numpy.asarray(init_image.convert("RGB"), dtype=numpy.uint8)
init_a_pixels = numpy.asarray(pil_init_image.getchannel("A"), dtype=numpy.uint8)
init_mask_pixels = numpy.asarray(pil_init_mask, dtype=numpy.uint8)
# Get numpy version of result
np_image = numpy.asarray(result.convert("RGB"), dtype=numpy.uint8)
# Mask and calculate mean and standard deviation
mask_pixels = init_a_pixels * init_mask_pixels > 0
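# Only pixels that are both opaque in the reference and selected by the mask
# contribute to the color statistics below.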
np_init_rgb_pixels_masked = init_rgb_pixels[mask_pixels, :]
np_image_masked = np_image[mask_pixels, :]
if np_init_rgb_pixels_masked.size > 0:
init_means = np_init_rgb_pixels_masked.mean(axis=0)
init_std = np_init_rgb_pixels_masked.std(axis=0)
gen_means = np_image_masked.mean(axis=0)
gen_std = np_image_masked.std(axis=0)
# Color correct
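# Classic mean/std color transfer: normalize each channel of the generated
# image by its own statistics, then rescale to the reference statistics.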
np_matched_result = np_image.copy()
np_matched_result[:, :, :] = (
(
(
(np_matched_result[:, :, :].astype(numpy.float32) - gen_means[None, None, :])
/ gen_std[None, None, :]
)
* init_std[None, None, :]
+ init_means[None, None, :]
)
.clip(0, 255)
.astype(numpy.uint8)
)
matched_result = Image.fromarray(np_matched_result, mode="RGB")
else:
matched_result = Image.fromarray(np_image, mode="RGB")
# Blur the mask out (into init image) by specified amount
if self.mask_blur_radius > 0:
nm = numpy.asarray(pil_init_mask, dtype=numpy.uint8)
nmd = cv2.erode(
nm,
kernel=numpy.ones((3, 3), dtype=numpy.uint8),
iterations=int(self.mask_blur_radius / 2),
)
pmd = Image.fromarray(nmd, mode="L")
blurred_init_mask = pmd.filter(ImageFilter.BoxBlur(self.mask_blur_radius))
else:
blurred_init_mask = pil_init_mask
multiplied_blurred_init_mask = ImageChops.multiply(blurred_init_mask, result.split()[-1])
# Paste original on color-corrected generation (using blurred mask)
matched_result.paste(init_image, (0, 0), mask=multiplied_blurred_init_mask)
image_dto = context.services.images.create(
image=matched_result,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
@title("Image Hue Adjustment")
@tags("image", "hue", "hsl")
class ImageHueAdjustmentInvocation(BaseInvocation):
"""Adjusts the Hue of an image."""
type: Literal["img_hue_adjust"] = "img_hue_adjust"
# Inputs
image: ImageField = InputField(description="The image to adjust")
hue: int = InputField(default=0, description="The degrees by which to rotate the hue, 0-360")
def invoke(self, context: InvocationContext) -> ImageOutput:
pil_image = context.services.images.get_pil_image(self.image.image_name)
# Convert image to HSV color space
hsv_image = numpy.array(pil_image.convert("HSV"))
# Convert hue from 0..360 to PIL's 0..255 HSV range
hue = int(256 * ((self.hue % 360) / 360))
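# e.g. hue=180 -> int(256 * 0.5) = 128, i.e. half-way around the color wheel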
# Increment each hue and wrap around at 255
hsv_image[:, :, 0] = (hsv_image[:, :, 0] + hue) % 256
# Convert back to PIL format and to original color mode
pil_image = Image.fromarray(hsv_image, mode="HSV").convert("RGBA")
image_dto = context.services.images.create(
image=pil_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
is_intermediate=self.is_intermediate,
session_id=context.graph_execution_state_id,
)
return ImageOutput(
image=ImageField(
image_name=image_dto.image_name,
),
width=image_dto.width,
height=image_dto.height,
)
@title("Image Luminosity Adjustment")
@tags("image", "luminosity", "hsl")
class ImageLuminosityAdjustmentInvocation(BaseInvocation):
"""Adjusts the Luminosity (Value) of an image."""
type: Literal["img_luminosity_adjust"] = "img_luminosity_adjust"
# Inputs
image: ImageField = InputField(description="The image to adjust")
luminosity: float = InputField(
default=1.0, ge=0, le=1, description="The factor by which to adjust the luminosity (value)"
)
def invoke(self, context: InvocationContext) -> ImageOutput:
pil_image = context.services.images.get_pil_image(self.image.image_name)
# Convert PIL image to OpenCV format (numpy array), note color channel
# ordering is changed from RGB to BGR
image = numpy.array(pil_image.convert("RGB"))[:, :, ::-1]
# Convert image to HSV color space
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# Adjust the luminosity (value)
hsv_image[:, :, 2] = numpy.clip(hsv_image[:, :, 2] * self.luminosity, 0, 255)
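# Scaling the V channel brightens or darkens uniformly without shifting hue
# or saturation; the clip keeps values in the valid uint8 range.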
# Convert image back to BGR color space
image = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
# Convert back to PIL format and to original color mode
pil_image = Image.fromarray(image[:, :, ::-1], "RGB").convert("RGBA")
image_dto = context.services.images.create(
image=pil_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
is_intermediate=self.is_intermediate,
session_id=context.graph_execution_state_id,
)
return ImageOutput(
image=ImageField(
image_name=image_dto.image_name,
),
width=image_dto.width,
height=image_dto.height,
)
@title("Image Saturation Adjustment")
@tags("image", "saturation", "hsl")
class ImageSaturationAdjustmentInvocation(BaseInvocation):
"""Adjusts the Saturation of an image."""
type: Literal["img_saturation_adjust"] = "img_saturation_adjust"
# Inputs
image: ImageField = InputField(description="The image to adjust")
saturation: float = InputField(default=1.0, ge=0, le=1, description="The factor by which to adjust the saturation")
def invoke(self, context: InvocationContext) -> ImageOutput:
pil_image = context.services.images.get_pil_image(self.image.image_name)
# Convert PIL image to OpenCV format (numpy array), note color channel
# ordering is changed from RGB to BGR
image = numpy.array(pil_image.convert("RGB"))[:, :, ::-1]
# Convert image to HSV color space
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# Adjust the saturation
hsv_image[:, :, 1] = numpy.clip(hsv_image[:, :, 1] * self.saturation, 0, 255)
# Convert image back to BGR color space
image = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
# Convert back to PIL format and to original color mode
pil_image = Image.fromarray(image[:, :, ::-1], "RGB").convert("RGBA")
image_dto = context.services.images.create(
image=pil_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
is_intermediate=self.is_intermediate,
session_id=context.graph_execution_state_id,
)
return ImageOutput(
image=ImageField(
image_name=image_dto.image_name,
),
width=image_dto.width,
height=image_dto.height,
)


@@ -5,18 +5,13 @@ from typing import Literal, Optional, get_args
import numpy as np
import math
from PIL import Image, ImageOps
from invokeai.app.invocations.primitives import ImageField, ImageOutput, ColorField
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from invokeai.backend.image_util.patchmatch import PatchMatch

from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, title, tags
def infill_methods() -> list[str]:
@@ -30,9 +25,7 @@ def infill_methods() -> list[str]:
INFILL_METHODS = Literal[tuple(infill_methods())]
DEFAULT_INFILL_METHOD = "patchmatch" if "patchmatch" in get_args(INFILL_METHODS) else "tile"
def infill_patchmatch(im: Image.Image) -> Image.Image:
@@ -44,9 +37,7 @@ def infill_patchmatch(im: Image.Image) -> Image.Image:
return im
# Patchmatch (note, we may want to expose patch_size? Increasing it significantly impacts performance though)
im_patched_np = PatchMatch.inpaint(im.convert("RGB"), ImageOps.invert(im.split()[-1]), patch_size=3)
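# The inverted alpha acts as the hole mask: PatchMatch fills exactly the
# transparent pixels by matching small (patch_size=3) patches from the
# visible region.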
im_patched = Image.fromarray(im_patched_np, mode="RGB")
return im_patched
@@ -68,9 +59,7 @@ def get_tile_images(image: np.ndarray, width=8, height=8):
)
def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int] = None) -> Image.Image:
# Only fill if there's an alpha layer
if im.mode != "RGBA":
return im
@@ -103,9 +92,7 @@ def tile_fill_missing(
# Find all invalid tiles and replace with a random valid tile
replace_count = (tiles_mask == False).sum()
rng = np.random.default_rng(seed=seed)
tiles_all[np.logical_not(tiles_mask)] = filtered_tiles[rng.choice(filtered_tiles.shape[0], replace_count), :, :, :]
# Convert back to an image
tiles_all = tiles_all.reshape(tshape)
@@ -122,26 +109,20 @@ def tile_fill_missing(
return si
@title("Solid Color Infill")
@tags("image", "inpaint")
class InfillColorInvocation(BaseInvocation):
"""Infills transparent areas of an image with a solid color"""
type: Literal["infill_rgba"] = "infill_rgba"
# Inputs
image: ImageField = InputField(description="The image to infill")
color: ColorField = InputField(
default=ColorField(r=127, g=127, b=127, a=255),
description="The color to use to infill",
)
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Color Infill",
"tags": ["image", "inpaint", "color", "infill"]
},
}
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@@ -166,36 +147,27 @@ class InfillColorInvocation(BaseInvocation):
)
@title("Tile Infill")
@tags("image", "inpaint")
class InfillTileInvocation(BaseInvocation):
"""Infills transparent areas of an image with tiles of the image"""
type: Literal["infill_tile"] = "infill_tile"
image: Optional[ImageField] = Field(
default=None, description="The image to infill"
)
tile_size: int = Field(default=32, ge=1, description="The tile size (px)")
seed: int = Field(
# Inputs
image: ImageField = InputField(description="The image to infill")
tile_size: int = InputField(default=32, ge=1, description="The tile size (px)")
seed: int = InputField(
ge=0,
le=SEED_MAX,
description="The seed to use for tile generation (omit for random)",
default_factory=get_random_seed,
)
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Tile Infill",
"tags": ["image", "inpaint", "tile", "infill"]
},
}
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
infilled = tile_fill_missing(
image.copy(), seed=self.seed, tile_size=self.tile_size
)
infilled = tile_fill_missing(image.copy(), seed=self.seed, tile_size=self.tile_size)
infilled.paste(image, (0, 0), image.split()[-1])
image_dto = context.services.images.create(
@@ -214,22 +186,15 @@ class InfillTileInvocation(BaseInvocation):
)
@title("PatchMatch Infill")
@tags("image", "inpaint")
class InfillPatchMatchInvocation(BaseInvocation):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
type: Literal["infill_patchmatch"] = "infill_patchmatch"
image: Optional[ImageField] = Field(
default=None, description="The image to infill"
)
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Patch Match Infill",
"tags": ["image", "inpaint", "patchmatch", "infill"]
},
}
# Inputs
image: ImageField = InputField(description="The image to infill")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)

View File

@@ -5,89 +5,76 @@ from typing import List, Literal, Optional, Union
import einops
import torch
from diffusers import ControlNetModel
import torchvision.transforms as T
from diffusers.image_processor import VaeImageProcessor
from diffusers.schedulers import SchedulerMixin as Scheduler
from pydantic import BaseModel, Field, validator
from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend.model_management.models.base import ModelType
from ...backend.model_management.lora import ModelPatcher
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.stable_diffusion.diffusers_pipeline import (
ConditioningData, ControlNetData, StableDiffusionGeneratorPipeline,
image_resized_to_grid_as_tensor)
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import \
PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_torch_device, torch_dtype, choose_precision
from ..models.image import ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext)
from .compel import ConditioningField
from .controlnet_image_processors import ControlField
from .image import ImageOutput
from .model import ModelInfo, UNetField, VaeField
from invokeai.app.util.controlnet_utils import prepare_control_image
from diffusers.models.attention_processor import (
AttnProcessor2_0,
LoRAAttnProcessor2_0,
LoRAXFormersAttnProcessor,
XFormersAttnProcessor,
)
from diffusers.schedulers import DPMSolverSDEScheduler
from diffusers.schedulers import SchedulerMixin as Scheduler
from pydantic import BaseModel, Field, validator
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import (
ImageField,
ImageOutput,
LatentsField,
LatentsOutput,
build_latents_output,
)
from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend.model_management.models import ModelType, SilenceWarnings
from ...backend.model_management import BaseModelType, ModelPatcher
from ...backend.model_management.lora import ModelPatcher
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.stable_diffusion.diffusers_pipeline import (
ConditioningData,
ControlNetData,
StableDiffusionGeneratorPipeline,
image_resized_to_grid_as_tensor,
)
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_precision, choose_torch_device
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
tags,
title,
)
from .compel import ConditioningField
from .controlnet_image_processors import ControlField
from .model import ModelInfo, UNetField, VaeField
DEFAULT_PRECISION = choose_precision(choose_torch_device())
class LatentsField(BaseModel):
"""A latents field used for passing latents between invocations"""
latents_name: Optional[str] = Field(
default=None, description="The name of the latents")
class Config:
schema_extra = {"required": ["latents_name"]}
class LatentsOutput(BaseInvocationOutput):
"""Base class for invocations that output latents"""
#fmt: off
type: Literal["latents_output"] = "latents_output"
# Outputs
latents: LatentsField = Field(default=None, description="The output latents")
width: int = Field(description="The width of the latents in pixels")
height: int = Field(description="The height of the latents in pixels")
#fmt: on
def build_latents_output(latents_name: str, latents: torch.Tensor):
return LatentsOutput(
latents=LatentsField(latents_name=latents_name),
width=latents.size()[3] * 8,
height=latents.size()[2] * 8,
)
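The `* 8` in `build_latents_output` reflects the Stable Diffusion VAE's spatial downsampling: latents are shaped `(batch, 4, H/8, W/8)`, so pixel dimensions are recovered by multiplying the last two latent dimensions by 8. A quick sanity check of the convention (shapes illustrative):

```python
import torch

latents = torch.zeros(1, 4, 64, 96)  # would decode to a 768x512 image
width = latents.size(3) * 8          # 768
height = latents.size(2) * 8         # 512
assert (width, height) == (768, 512)
```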
SAMPLER_NAME_VALUES = Literal[
tuple(list(SCHEDULER_MAP.keys()))
]
SAMPLER_NAME_VALUES = Literal[tuple(list(SCHEDULER_MAP.keys()))]
def get_scheduler(
context: InvocationContext,
scheduler_info: ModelInfo,
scheduler_name: str,
seed: int,
) -> Scheduler:
scheduler_class, scheduler_extra_config = SCHEDULER_MAP.get(
scheduler_name, SCHEDULER_MAP['ddim']
)
scheduler_class, scheduler_extra_config = SCHEDULER_MAP.get(scheduler_name, SCHEDULER_MAP["ddim"])
orig_scheduler_info = context.services.model_manager.get_model(
**scheduler_info.dict(), context=context,
**scheduler_info.dict(),
context=context,
)
with orig_scheduler_info as orig_scheduler:
scheduler_config = orig_scheduler.config
@@ -99,33 +86,50 @@ def get_scheduler(
**scheduler_extra_config,
"_backup": scheduler_config,
}
# make dpmpp_sde reproducible (the seed can be passed only in the initializer)
if scheduler_class is DPMSolverSDEScheduler:
scheduler_config["noise_sampler_seed"] = seed
scheduler = scheduler_class.from_config(scheduler_config)
# hack copied over from generate.py
if not hasattr(scheduler, 'uses_inpainting_model'):
if not hasattr(scheduler, "uses_inpainting_model"):
scheduler.uses_inpainting_model = lambda: False
return scheduler
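The `noise_sampler_seed` assignment above exists because `DPMSolverSDEScheduler` draws its own noise at every step, so the only way to make it deterministic is to bake the seed into the config before `from_config`. A minimal sketch, assuming the `diffusers` package (config values are illustrative):

```python
from diffusers import DPMSolverSDEScheduler

config = {"num_train_timesteps": 1000, "noise_sampler_seed": 1234}
scheduler = DPMSolverSDEScheduler.from_config(config)
# Two schedulers built this way step identically for the same inputs.
```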
# Text to image
class TextToLatentsInvocation(BaseInvocation):
"""Generates latents from conditionings."""
@title("Denoise Latents")
@tags("latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l")
class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images"""
type: Literal["t2l"] = "t2l"
type: Literal["denoise_latents"] = "denoise_latents"
# Inputs
# fmt: off
positive_conditioning: Optional[ConditioningField] = Field(description="Positive conditioning for generation")
negative_conditioning: Optional[ConditioningField] = Field(description="Negative conditioning for generation")
noise: Optional[LatentsField] = Field(description="The noise to use")
steps: int = Field(default=10, gt=0, description="The number of steps to use to generate the image")
cfg_scale: Union[float, List[float]] = Field(default=7.5, ge=1, description="The Classifier-Free Guidance, higher values may result in a result closer to the prompt", )
scheduler: SAMPLER_NAME_VALUES = Field(default="euler", description="The scheduler to use" )
unet: UNetField = Field(default=None, description="UNet submodel")
control: Union[ControlField, list[ControlField]] = Field(default=None, description="The control to use")
#seamless: bool = Field(default=False, description="Whether or not to generate an image that can tile without seams", )
#seamless_axes: str = Field(default="", description="The axes to tile the image on, 'x' and/or 'y'")
# fmt: on
positive_conditioning: ConditioningField = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
negative_conditioning: ConditioningField = InputField(
description=FieldDescriptions.negative_cond, input=Input.Connection
)
noise: Optional[LatentsField] = InputField(description=FieldDescriptions.noise, input=Input.Connection)
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
cfg_scale: Union[float, List[float]] = InputField(
default=7.5, ge=1, description=FieldDescriptions.cfg_scale, ui_type=UIType.Float
)
denoising_start: float = InputField(default=0.0, ge=0, le=1, description=FieldDescriptions.denoising_start)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
scheduler: SAMPLER_NAME_VALUES = InputField(default="euler", description=FieldDescriptions.scheduler)
unet: UNetField = InputField(description=FieldDescriptions.unet, input=Input.Connection)
control: Union[ControlField, list[ControlField]] = InputField(
default=None, description=FieldDescriptions.control, input=Input.Connection
)
latents: Optional[LatentsField] = InputField(description=FieldDescriptions.latents, input=Input.Connection)
mask: Optional[ImageField] = InputField(
default=None,
description=FieldDescriptions.mask,
)
@validator("cfg_scale")
def ge_one(cls, v):
@@ -133,39 +137,26 @@ class TextToLatentsInvocation(BaseInvocation):
if isinstance(v, list):
for i in v:
if i < 1:
raise ValueError('cfg_scale must be greater than 1')
raise ValueError("cfg_scale must be greater than 1")
else:
if v < 1:
raise ValueError('cfg_scale must be greater than 1')
raise ValueError("cfg_scale must be greater than 1")
return v
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Text To Latents",
"tags": ["latents"],
"type_hints": {
"model": "model",
"control": "control",
# "cfg_scale": "float",
"cfg_scale": "number"
}
},
}
# TODO: pass this an emitter method or something? or a session for dispatching?
def dispatch_progress(
self,
context: InvocationContext,
source_node_id: str,
intermediate_state: PipelineIntermediateState,
base_model: BaseModelType,
) -> None:
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
node=self.dict(),
source_node_id=source_node_id,
base_model=base_model,
)
def get_conditioning_data(
@@ -173,13 +164,14 @@ class TextToLatentsInvocation(BaseInvocation):
context: InvocationContext,
scheduler,
unet,
seed,
) -> ConditioningData:
positive_cond_data = context.services.latents.get(self.positive_conditioning.conditioning_name)
c = positive_cond_data.conditionings[0].embeds.to(device=unet.device, dtype=unet.dtype)
extra_conditioning_info = positive_cond_data.conditionings[0].extra_conditioning
c = positive_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype)
extra_conditioning_info = c.extra_conditioning
negative_cond_data = context.services.latents.get(self.negative_conditioning.conditioning_name)
uc = negative_cond_data.conditionings[0].embeds.to(device=unet.device, dtype=unet.dtype)
uc = negative_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype)
conditioning_data = ConditioningData(
unconditioned_embeddings=uc,
@@ -190,18 +182,17 @@ class TextToLatentsInvocation(BaseInvocation):
threshold=0.0, # threshold,
warmup=0.2, # warmup,
h_symmetry_time_pct=None, # h_symmetry_time_pct,
v_symmetry_time_pct=None # v_symmetry_time_pct,
v_symmetry_time_pct=None, # v_symmetry_time_pct,
),
)
conditioning_data = conditioning_data.add_scheduler_args_if_applicable(
scheduler,
# for ddim scheduler
eta=0.0, # ddim_eta
# for ancestral and sde schedulers
generator=torch.Generator(device=unet.device).manual_seed(0),
# flip all bits to have noise different from initial
generator=torch.Generator(device=unet.device).manual_seed(seed ^ 0xFFFFFFFF),
)
return conditioning_data
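The `seed ^ 0xFFFFFFFF` above flips all 32 bits of the seed, so the generator handed to ancestral/SDE schedulers is still deterministic per seed but never seeds the same stream as the initial-noise generator. A minimal check:

```python
import torch

seed = 42
scheduler_seed = seed ^ 0xFFFFFFFF  # 4294967253; differs from every 32-bit seed
gen_a = torch.Generator().manual_seed(seed)
gen_b = torch.Generator().manual_seed(scheduler_seed)
# The two streams differ (in practice) because the seeds always differ.
assert not torch.equal(torch.randn(4, generator=gen_a), torch.randn(4, generator=gen_b))
```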
@@ -234,7 +225,6 @@ class TextToLatentsInvocation(BaseInvocation):
safety_checker=None,
feature_extractor=None,
requires_safety_checker=False,
precision="float16" if unet.dtype == torch.float16 else "float32",
)
def prep_control_data(
@@ -247,7 +237,6 @@ class TextToLatentsInvocation(BaseInvocation):
exit_stack: ExitStack,
do_classifier_free_guidance: bool = True,
) -> List[ControlNetData]:
# assuming fixed dimensional scaling of 8:1 for image:latents
control_height_resize = latents_shape[2] * 8
control_width_resize = latents_shape[3] * 8
@@ -261,7 +250,7 @@ class TextToLatentsInvocation(BaseInvocation):
control_list = control_input
else:
control_list = None
if (control_list is None):
if control_list is None:
control_data = None
# from above handling, any control that is not None should now be of type list[ControlField]
else:
@@ -281,9 +270,7 @@ class TextToLatentsInvocation(BaseInvocation):
control_models.append(control_model)
control_image_field = control_info.image
input_image = context.services.images.get_pil_image(
control_image_field.image_name
)
input_image = context.services.images.get_pil_image(control_image_field.image_name)
# self.image.image_type, self.image.image_name
# FIXME: still need to test with different widths, heights, devices, dtypes
# and add in batch_size, num_images_per_prompt?
@@ -316,210 +303,188 @@ class TextToLatentsInvocation(BaseInvocation):
# MultiControlNetModel has been refactored out, just need list[ControlNetData]
return control_data
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
noise = context.services.latents.get(self.noise.latents_name)
# original idea by https://github.com/AmericanPresidentJimmyCarter
# TODO: research more for second order schedulers timesteps
def init_scheduler(self, scheduler, device, steps, denoising_start, denoising_end):
num_inference_steps = steps
if scheduler.config.get("cpu_only", False):
scheduler.set_timesteps(num_inference_steps, device="cpu")
timesteps = scheduler.timesteps.to(device=device)
else:
scheduler.set_timesteps(num_inference_steps, device=device)
timesteps = scheduler.timesteps
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(
context.graph_execution_state_id
)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
# apply denoising_start
t_start_val = int(round(scheduler.config.num_train_timesteps * (1 - denoising_start)))
t_start_idx = len(list(filter(lambda ts: ts >= t_start_val, timesteps)))
timesteps = timesteps[t_start_idx:]
if scheduler.order == 2 and t_start_idx > 0:
timesteps = timesteps[1:]
def step_callback(state: PipelineIntermediateState):
self.dispatch_progress(context, source_node_id, state)
# save start timestep to apply noise
init_timestep = timesteps[:1]
def _lora_loader():
for lora in self.unet.loras:
lora_info = context.services.model_manager.get_model(
**lora.dict(exclude={"weight"}), context=context,
)
yield (lora_info.context.model, lora.weight)
del lora_info
return
# apply denoising_end
t_end_val = int(round(scheduler.config.num_train_timesteps * (1 - denoising_end)))
t_end_idx = len(list(filter(lambda ts: ts >= t_end_val, timesteps)))
if scheduler.order == 2 and t_end_idx > 0:
t_end_idx += 1
timesteps = timesteps[:t_end_idx]
unet_info = context.services.model_manager.get_model(
**self.unet.unet.dict(), context=context,
)
with ExitStack() as exit_stack,\
ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),\
unet_info as unet:
# calculate step count based on scheduler order
num_inference_steps = len(timesteps)
if scheduler.order == 2:
num_inference_steps += num_inference_steps % 2
num_inference_steps = num_inference_steps // 2
noise = noise.to(device=unet.device, dtype=unet.dtype)
return num_inference_steps, timesteps, init_timestep
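A worked example of how `init_scheduler` maps `denoising_start`/`denoising_end` onto the scheduler's descending timestep list (dummy numbers, `num_train_timesteps=1000`, ignoring the second-order scheduler adjustment):

```python
timesteps = list(range(999, -1, -100))  # [999, 899, ..., 199, 99]
num_train_timesteps = 1000
denoising_start, denoising_end = 0.3, 0.8

t_start_val = int(round(num_train_timesteps * (1 - denoising_start)))  # 700
t_start_idx = len([ts for ts in timesteps if ts >= t_start_val])       # 3
window = timesteps[t_start_idx:]                                       # [699, ..., 99]

t_end_val = int(round(num_train_timesteps * (1 - denoising_end)))      # 200
t_end_idx = len([ts for ts in window if ts >= t_end_val])              # 5
window = window[:t_end_idx]
print(window)  # [699, 599, 499, 399, 299]: denoise from 70% noise down to 20%
```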
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
)
def prep_mask_tensor(self, mask, context, latents):
if mask is None:
return None
pipeline = self.create_pipeline(unet, scheduler)
conditioning_data = self.get_conditioning_data(context, scheduler, unet)
control_data = self.prep_control_data(
model=pipeline, context=context, control_input=self.control,
latents_shape=noise.shape,
# do_classifier_free_guidance=(self.cfg_scale >= 1.0))
do_classifier_free_guidance=True,
exit_stack=exit_stack,
)
# TODO: Verify the noise is the right size
result_latents, result_attention_map_saver = pipeline.latents_from_embeddings(
latents=torch.zeros_like(noise, dtype=torch_dtype(unet.device)),
noise=noise,
num_inference_steps=self.steps,
conditioning_data=conditioning_data,
control_data=control_data, # list[ControlNetData]
callback=step_callback,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
result_latents = result_latents.to("cpu")
torch.cuda.empty_cache()
name = f'{context.graph_execution_state_id}__{self.id}'
context.services.latents.save(name, result_latents)
return build_latents_output(latents_name=name, latents=result_latents)
class LatentsToLatentsInvocation(TextToLatentsInvocation):
"""Generates latents using latents as base image."""
type: Literal["l2l"] = "l2l"
# Inputs
latents: Optional[LatentsField] = Field(
description="The latents to use as a base image")
strength: float = Field(
default=0.7, ge=0, le=1,
description="The strength of the latents to use")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Latent To Latents",
"tags": ["latents"],
"type_hints": {
"model": "model",
"control": "control",
"cfg_scale": "number",
}
},
}
mask_image = context.services.images.get_pil_image(mask.image_name)
if mask_image.mode != "L":
# FIXME: why do we get passed an RGB image here? We can only use single-channel.
mask_image = mask_image.convert("L")
mask_tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
if mask_tensor.dim() == 3:
mask_tensor = mask_tensor.unsqueeze(0)
mask_tensor = tv_resize(mask_tensor, latents.shape[-2:], T.InterpolationMode.BILINEAR)
return 1 - mask_tensor
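A simplified standalone sketch of what `prep_mask_tensor` does, assuming torch, torchvision and Pillow (the real helper also handles grid alignment): force the mask to a single channel, resize it to the latent grid, and invert it so that, under the usual white-means-inpaint convention, 0 marks the region to regenerate:

```python
import torch
from PIL import Image
from torchvision.transforms import InterpolationMode
from torchvision.transforms.functional import pil_to_tensor, resize as tv_resize

def prep_mask(mask_image: Image.Image, latents: torch.Tensor) -> torch.Tensor:
    mask = pil_to_tensor(mask_image.convert("L")).float() / 255.0  # (1, H, W) in 0..1
    mask = mask.unsqueeze(0)                                       # (1, 1, H, W)
    mask = tv_resize(mask, list(latents.shape[-2:]), InterpolationMode.BILINEAR)
    return 1 - mask  # 1 = keep, 0 = regenerate
```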
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
noise = context.services.latents.get(self.noise.latents_name)
latent = context.services.latents.get(self.latents.latents_name)
with SilenceWarnings(): # this quenches NSFW nag from diffusers
seed = None
noise = None
if self.noise is not None:
noise = context.services.latents.get(self.noise.latents_name)
seed = self.noise.seed
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(
context.graph_execution_state_id
)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
if self.latents is not None:
latents = context.services.latents.get(self.latents.latents_name)
if seed is None:
seed = self.latents.seed
else:
latents = torch.zeros_like(noise)
def step_callback(state: PipelineIntermediateState):
self.dispatch_progress(context, source_node_id, state)
if seed is None:
seed = 0
def _lora_loader():
for lora in self.unet.loras:
lora_info = context.services.model_manager.get_model(
**lora.dict(exclude={"weight"}), context=context,
)
yield (lora_info.context.model, lora.weight)
del lora_info
return
mask = self.prep_mask_tensor(self.mask, context, latents)
unet_info = context.services.model_manager.get_model(
**self.unet.unet.dict(), context=context,
)
with ExitStack() as exit_stack,\
ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),\
unet_info as unet:
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
noise = noise.to(device=unet.device, dtype=unet.dtype)
latent = latent.to(device=unet.device, dtype=unet.dtype)
def step_callback(state: PipelineIntermediateState):
self.dispatch_progress(context, source_node_id, state, self.unet.unet.base_model)
scheduler = get_scheduler(
def _lora_loader():
for lora in self.unet.loras:
lora_info = context.services.model_manager.get_model(
**lora.dict(exclude={"weight"}),
context=context,
)
yield (lora_info.context.model, lora.weight)
del lora_info
return
unet_info = context.services.model_manager.get_model(
**self.unet.unet.dict(),
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
)
with ExitStack() as exit_stack, ModelPatcher.apply_lora_unet(
unet_info.context.model, _lora_loader()
), unet_info as unet:
latents = latents.to(device=unet.device, dtype=unet.dtype)
if noise is not None:
noise = noise.to(device=unet.device, dtype=unet.dtype)
if mask is not None:
mask = mask.to(device=unet.device, dtype=unet.dtype)
pipeline = self.create_pipeline(unet, scheduler)
conditioning_data = self.get_conditioning_data(context, scheduler, unet)
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
seed=seed,
)
control_data = self.prep_control_data(
model=pipeline, context=context, control_input=self.control,
latents_shape=noise.shape,
# do_classifier_free_guidance=(self.cfg_scale >= 1.0))
do_classifier_free_guidance=True,
exit_stack=exit_stack,
)
pipeline = self.create_pipeline(unet, scheduler)
conditioning_data = self.get_conditioning_data(context, scheduler, unet, seed)
# TODO: Verify the noise is the right size
initial_latents = latent if self.strength < 1.0 else torch.zeros_like(
latent, device=unet.device, dtype=latent.dtype
)
control_data = self.prep_control_data(
model=pipeline,
context=context,
control_input=self.control,
latents_shape=latents.shape,
# do_classifier_free_guidance=(self.cfg_scale >= 1.0))
do_classifier_free_guidance=True,
exit_stack=exit_stack,
)
timesteps, _ = pipeline.get_img2img_timesteps(
self.steps,
self.strength,
device=unet.device,
)
num_inference_steps, timesteps, init_timestep = self.init_scheduler(
scheduler,
device=unet.device,
steps=self.steps,
denoising_start=self.denoising_start,
denoising_end=self.denoising_end,
)
result_latents, result_attention_map_saver = pipeline.latents_from_embeddings(
latents=initial_latents,
timesteps=timesteps,
noise=noise,
num_inference_steps=self.steps,
conditioning_data=conditioning_data,
control_data=control_data, # list[ControlNetData]
callback=step_callback
)
result_latents, result_attention_map_saver = pipeline.latents_from_embeddings(
latents=latents,
timesteps=timesteps,
init_timestep=init_timestep,
noise=noise,
seed=seed,
mask=mask,
num_inference_steps=num_inference_steps,
conditioning_data=conditioning_data,
control_data=control_data, # list[ControlNetData]
callback=step_callback,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
result_latents = result_latents.to("cpu")
torch.cuda.empty_cache()
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
result_latents = result_latents.to("cpu")
torch.cuda.empty_cache()
name = f'{context.graph_execution_state_id}__{self.id}'
context.services.latents.save(name, result_latents)
return build_latents_output(latents_name=name, latents=result_latents)
name = f"{context.graph_execution_state_id}__{self.id}"
context.services.latents.save(name, result_latents)
return build_latents_output(latents_name=name, latents=result_latents, seed=seed)
# Latent to image
@title("Latents to Image")
@tags("latents", "image", "vae")
class LatentsToImageInvocation(BaseInvocation):
"""Generates an image from latents."""
type: Literal["l2i"] = "l2i"
# Inputs
latents: Optional[LatentsField] = Field(
description="The latents to generate an image from")
vae: VaeField = Field(default=None, description="Vae submodel")
tiled: bool = Field(
default=False,
description="Decode latents by overlaping tiles(less memory consumption)")
fp32: bool = Field(DEFAULT_PRECISION=='float32', description="Decode in full precision")
metadata: Optional[CoreMetadata] = Field(default=None, description="Optional core metadata to be written to the image")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Latents To Image",
"tags": ["latents", "image"],
},
}
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
vae: VaeField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
fp32: bool = InputField(default=DEFAULT_PRECISION == "float32", description=FieldDescriptions.fp32)
metadata: CoreMetadata = InputField(
default=None,
description=FieldDescriptions.core_metadata,
ui_hidden=True,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.services.latents.get(self.latents.latents_name)
vae_info = context.services.model_manager.get_model(
**self.vae.vae.dict(), context=context,
**self.vae.vae.dict(),
context=context,
)
with vae_info as vae:
@@ -586,46 +551,45 @@ class LatentsToImageInvocation(BaseInvocation):
)
LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear",
"bilinear", "bicubic", "trilinear", "area", "nearest-exact"]
LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic", "trilinear", "area", "nearest-exact"]
@title("Resize Latents")
@tags("latents", "resize")
class ResizeLatentsInvocation(BaseInvocation):
"""Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""
type: Literal["lresize"] = "lresize"
# Inputs
latents: Optional[LatentsField] = Field(
description="The latents to resize")
width: Union[int, None] = Field(default=512,
ge=64, multiple_of=8, description="The width to resize to (px)")
height: Union[int, None] = Field(default=512,
ge=64, multiple_of=8, description="The height to resize to (px)")
mode: LATENTS_INTERPOLATION_MODE = Field(
default="bilinear", description="The interpolation mode")
antialias: bool = Field(
default=False,
description="Whether or not to antialias (applied in bilinear and bicubic modes only)")
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Resize Latents",
"tags": ["latents", "resize"]
},
}
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
width: int = InputField(
ge=64,
multiple_of=8,
description=FieldDescriptions.width,
)
height: int = InputField(
ge=64,
multiple_of=8,
description=FieldDescriptions.height,
)
mode: LATENTS_INTERPOLATION_MODE = InputField(default="bilinear", description=FieldDescriptions.interp_mode)
antialias: bool = InputField(default=False, description=FieldDescriptions.torch_antialias)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.services.latents.get(self.latents.latents_name)
# TODO:
device=choose_torch_device()
device = choose_torch_device()
resized_latents = torch.nn.functional.interpolate(
latents.to(device), size=(self.height // 8, self.width // 8),
mode=self.mode, antialias=self.antialias
if self.mode in ["bilinear", "bicubic"] else False,
latents.to(device),
size=(self.height // 8, self.width // 8),
mode=self.mode,
antialias=self.antialias if self.mode in ["bilinear", "bicubic"] else False,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
@@ -635,44 +599,37 @@ class ResizeLatentsInvocation(BaseInvocation):
name = f"{context.graph_execution_state_id}__{self.id}"
# context.services.latents.set(name, resized_latents)
context.services.latents.save(name, resized_latents)
return build_latents_output(latents_name=name, latents=resized_latents)
return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)
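The resize above operates directly in latent space, which is why the requested pixel dimensions are floor-divided by 8 first. A minimal sketch, assuming a recent torch (values illustrative):

```python
import torch

latents = torch.randn(1, 4, 64, 64)  # a 512x512 image's latent
width, height = 768, 512
resized = torch.nn.functional.interpolate(
    latents,
    size=(height // 8, width // 8),  # (64, 96)
    mode="bilinear",
    antialias=True,  # interpolate only supports antialias for bilinear/bicubic
)
print(resized.shape)  # torch.Size([1, 4, 64, 96])
```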
@title("Scale Latents")
@tags("latents", "resize")
class ScaleLatentsInvocation(BaseInvocation):
"""Scales latents by a given factor."""
type: Literal["lscale"] = "lscale"
# Inputs
latents: Optional[LatentsField] = Field(
description="The latents to scale")
scale_factor: float = Field(
gt=0, description="The factor by which to scale the latents")
mode: LATENTS_INTERPOLATION_MODE = Field(
default="bilinear", description="The interpolation mode")
antialias: bool = Field(
default=False,
description="Whether or not to antialias (applied in bilinear and bicubic modes only)")
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Scale Latents",
"tags": ["latents", "scale"]
},
}
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
scale_factor: float = InputField(gt=0, description=FieldDescriptions.scale_factor)
mode: LATENTS_INTERPOLATION_MODE = InputField(default="bilinear", description=FieldDescriptions.interp_mode)
antialias: bool = InputField(default=False, description=FieldDescriptions.torch_antialias)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.services.latents.get(self.latents.latents_name)
# TODO:
device=choose_torch_device()
device = choose_torch_device()
# resizing
resized_latents = torch.nn.functional.interpolate(
latents.to(device), scale_factor=self.scale_factor, mode=self.mode,
antialias=self.antialias
if self.mode in ["bilinear", "bicubic"] else False,
latents.to(device),
scale_factor=self.scale_factor,
mode=self.mode,
antialias=self.antialias if self.mode in ["bilinear", "bicubic"] else False,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
@@ -682,31 +639,26 @@ class ScaleLatentsInvocation(BaseInvocation):
name = f"{context.graph_execution_state_id}__{self.id}"
# context.services.latents.set(name, resized_latents)
context.services.latents.save(name, resized_latents)
return build_latents_output(latents_name=name, latents=resized_latents)
return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)
@title("Image to Latents")
@tags("latents", "image", "vae")
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
type: Literal["i2l"] = "i2l"
# Inputs
image: Optional[ImageField] = Field(description="The image to encode")
vae: VaeField = Field(default=None, description="Vae submodel")
tiled: bool = Field(
default=False,
description="Encode latents by overlaping tiles(less memory consumption)")
fp32: bool = Field(DEFAULT_PRECISION=='float32', description="Decode in full precision")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Image To Latents",
"tags": ["latents", "image"]
},
}
image: ImageField = InputField(
description="The image to encode",
)
vae: VaeField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
fp32: bool = InputField(default=DEFAULT_PRECISION == "float32", description=FieldDescriptions.fp32)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
@@ -715,9 +667,10 @@ class ImageToLatentsInvocation(BaseInvocation):
# )
image = context.services.images.get_pil_image(self.image.image_name)
#vae_info = context.services.model_manager.get_model(**self.vae.vae.dict())
# vae_info = context.services.model_manager.get_model(**self.vae.vae.dict())
vae_info = context.services.model_manager.get_model(
**self.vae.vae.dict(), context=context,
**self.vae.vae.dict(),
context=context,
)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
@@ -744,12 +697,12 @@ class ImageToLatentsInvocation(BaseInvocation):
vae.post_quant_conv.to(orig_dtype)
vae.decoder.conv_in.to(orig_dtype)
vae.decoder.mid_block.to(orig_dtype)
#else:
# else:
# latents = latents.float()
else:
vae.to(dtype=torch.float16)
#latents = latents.half()
# latents = latents.half()
if self.tiled:
vae.enable_tiling()
@@ -760,9 +713,7 @@ class ImageToLatentsInvocation(BaseInvocation):
image_tensor = image_tensor.to(device=vae.device, dtype=vae.dtype)
with torch.inference_mode():
image_tensor_dist = vae.encode(image_tensor).latent_dist
latents = image_tensor_dist.sample().to(
dtype=vae.dtype
) # FIXME: uses torch.randn. make reproducible!
latents = image_tensor_dist.sample().to(dtype=vae.dtype) # FIXME: uses torch.randn. make reproducible!
latents = vae.config.scaling_factor * latents
latents = latents.to(dtype=orig_dtype)
@@ -770,4 +721,4 @@ class ImageToLatentsInvocation(BaseInvocation):
name = f"{context.graph_execution_state_id}__{self.id}"
latents = latents.to("cpu")
context.services.latents.save(name, latents)
return build_latents_output(latents_name=name, latents=latents)
return build_latents_output(latents_name=name, latents=latents, seed=None)
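A hedged sketch of the encode path using diffusers' `AutoencoderKL` directly (the model id is illustrative). As the FIXME above notes, `latent_dist.sample()` draws unseeded noise by default; passing a generator is one way to make it reproducible:

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
image_tensor = torch.randn(1, 3, 512, 512)  # stand-in for a normalized [-1, 1] image

with torch.inference_mode():
    dist = vae.encode(image_tensor).latent_dist
    generator = torch.Generator().manual_seed(0)  # seed the reparameterization noise
    latents = dist.sample(generator=generator) * vae.config.scaling_factor

print(latents.shape)  # torch.Size([1, 4, 64, 64])
```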

View File

@@ -2,149 +2,83 @@
from typing import Literal
from pydantic import BaseModel, Field
import numpy as np
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InvocationContext,
InvocationConfig,
)
from invokeai.app.invocations.primitives import IntegerOutput
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, tags, title
class MathInvocationConfig(BaseModel):
"""Helper class to provide all math invocations with additional config"""
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["math"],
}
}
class IntOutput(BaseInvocationOutput):
"""An integer output"""
# fmt: off
type: Literal["int_output"] = "int_output"
a: int = Field(default=None, description="The output integer")
# fmt: on
class FloatOutput(BaseInvocationOutput):
"""A float output"""
# fmt: off
type: Literal["float_output"] = "float_output"
param: float = Field(default=None, description="The output float")
# fmt: on
class AddInvocation(BaseInvocation, MathInvocationConfig):
@title("Add Integers")
@tags("math")
class AddInvocation(BaseInvocation):
"""Adds two numbers"""
# fmt: off
type: Literal["add"] = "add"
a: int = Field(default=0, description="The first number")
b: int = Field(default=0, description="The second number")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Add",
"tags": ["math", "add"]
},
}
# Inputs
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=self.a + self.b)
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=self.a + self.b)
class SubtractInvocation(BaseInvocation, MathInvocationConfig):
@title("Subtract Integers")
@tags("math")
class SubtractInvocation(BaseInvocation):
"""Subtracts two numbers"""
# fmt: off
type: Literal["sub"] = "sub"
a: int = Field(default=0, description="The first number")
b: int = Field(default=0, description="The second number")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Subtract",
"tags": ["math", "subtract"]
},
}
# Inputs
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=self.a - self.b)
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=self.a - self.b)
class MultiplyInvocation(BaseInvocation, MathInvocationConfig):
@title("Multiply Integers")
@tags("math")
class MultiplyInvocation(BaseInvocation):
"""Multiplies two numbers"""
# fmt: off
type: Literal["mul"] = "mul"
a: int = Field(default=0, description="The first number")
b: int = Field(default=0, description="The second number")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Multiply",
"tags": ["math", "multiply"]
},
}
# Inputs
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=self.a * self.b)
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=self.a * self.b)
class DivideInvocation(BaseInvocation, MathInvocationConfig):
@title("Divide Integers")
@tags("math")
class DivideInvocation(BaseInvocation):
"""Divides two numbers"""
# fmt: off
type: Literal["div"] = "div"
a: int = Field(default=0, description="The first number")
b: int = Field(default=0, description="The second number")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Divide",
"tags": ["math", "divide"]
},
}
# Inputs
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=int(self.a / self.b))
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=int(self.a / self.b))
@title("Random Integer")
@tags("math")
class RandomIntInvocation(BaseInvocation):
"""Outputs a single random integer."""
# fmt: off
type: Literal["rand_int"] = "rand_int"
low: int = Field(default=0, description="The inclusive low value")
high: int = Field(
default=np.iinfo(np.int32).max, description="The exclusive high value"
)
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Random Integer",
"tags": ["math", "random", "integer"]
},
}
# Inputs
low: int = InputField(default=0, description="The inclusive low value")
high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value")
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=np.random.randint(self.low, self.high))
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=np.random.randint(self.low, self.high))
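One subtlety in `DivideInvocation`: `int(a / b)` truncates toward zero, which differs from Python's floor division when the operands have opposite signs:

```python
assert int(7 / 2) == 3 and 7 // 2 == 3      # identical for positive operands
assert int(-7 / 2) == -3 and -7 // 2 == -4  # truncation vs. floor for negatives
```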

View File

@@ -1,25 +1,37 @@
from typing import Literal, Optional, Union
from typing import Literal, Optional
from pydantic import BaseModel, Field
from pydantic import Field
from invokeai.app.invocations.baseinvocation import (BaseInvocation,
BaseInvocationOutput, InvocationConfig,
InvocationContext)
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InputField,
InvocationContext,
OutputField,
tags,
title,
)
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.model import (LoRAModelField, MainModelField,
VAEModelField)
from invokeai.app.invocations.model import LoRAModelField, MainModelField, VAEModelField
from invokeai.app.util.model_exclude_null import BaseModelExcludeNull
from ...version import __version__
class LoRAMetadataField(BaseModel):
class LoRAMetadataField(BaseModelExcludeNull):
"""LoRA metadata for an image generated in InvokeAI."""
lora: LoRAModelField = Field(description="The LoRA model")
weight: float = Field(description="The weight of the LoRA model")
class CoreMetadata(BaseModel):
class CoreMetadata(BaseModelExcludeNull):
"""Core generation metadata for an image generated in InvokeAI."""
generation_mode: str = Field(description="The generation mode that output this image",)
app_version: str = Field(default=__version__, description="The version of InvokeAI used to generate this image")
generation_mode: str = Field(
description="The generation mode that output this image",
)
positive_prompt: str = Field(description="The positive prompt parameter")
negative_prompt: str = Field(description="The negative prompt parameter")
width: int = Field(description="The width parameter")
@@ -29,33 +41,53 @@ class CoreMetadata(BaseModel):
cfg_scale: float = Field(description="The classifier-free guidance scale parameter")
steps: int = Field(description="The number of steps used for inference")
scheduler: str = Field(description="The scheduler used for inference")
clip_skip: int = Field(description="The number of skipped CLIP layers",)
clip_skip: int = Field(
description="The number of skipped CLIP layers",
)
model: MainModelField = Field(description="The main model used for inference")
controlnets: list[ControlField]= Field(description="The ControlNets used for inference")
controlnets: list[ControlField] = Field(description="The ControlNets used for inference")
loras: list[LoRAMetadataField] = Field(description="The LoRAs used for inference")
strength: Union[float, None] = Field(
default=None,
description="The strength used for latents-to-latents",
)
init_image: Union[str, None] = Field(
default=None, description="The name of the initial image"
)
vae: Union[VAEModelField, None] = Field(
vae: Optional[VAEModelField] = Field(
default=None,
description="The VAE used for decoding, if the main model's default was not used",
)
# Latents-to-Latents
strength: Optional[float] = Field(
default=None,
description="The strength used for latents-to-latents",
)
init_image: Optional[str] = Field(default=None, description="The name of the initial image")
class ImageMetadata(BaseModel):
# SDXL
positive_style_prompt: Optional[str] = Field(default=None, description="The positive style prompt parameter")
negative_style_prompt: Optional[str] = Field(default=None, description="The negative style prompt parameter")
# SDXL Refiner
refiner_model: Optional[MainModelField] = Field(default=None, description="The SDXL Refiner model used")
refiner_cfg_scale: Optional[float] = Field(
default=None,
description="The classifier-free guidance scale parameter used for the refiner",
)
refiner_steps: Optional[int] = Field(default=None, description="The number of steps used for the refiner")
refiner_scheduler: Optional[str] = Field(default=None, description="The scheduler used for the refiner")
refiner_positive_aesthetic_store: Optional[float] = Field(
default=None, description="The aesthetic score used for the refiner"
)
refiner_negative_aesthetic_store: Optional[float] = Field(
default=None, description="The aesthetic score used for the refiner"
)
refiner_start: Optional[float] = Field(default=None, description="The start value used for refiner denoising")
class ImageMetadata(BaseModelExcludeNull):
"""An image's generation metadata"""
metadata: Optional[dict] = Field(
default=None,
description="The image's core metadata, if it was created in the Linear or Canvas UI",
)
graph: Optional[dict] = Field(
default=None, description="The graph that created the image"
)
graph: Optional[dict] = Field(default=None, description="The graph that created the image")
class MetadataAccumulatorOutput(BaseInvocationOutput):
@@ -63,70 +95,88 @@ class MetadataAccumulatorOutput(BaseInvocationOutput):
type: Literal["metadata_accumulator_output"] = "metadata_accumulator_output"
metadata: CoreMetadata = Field(description="The core metadata for the image")
metadata: CoreMetadata = OutputField(description="The core metadata for the image")
@title("Metadata Accumulator")
@tags("metadata")
class MetadataAccumulatorInvocation(BaseInvocation):
"""Outputs a Core Metadata Object"""
type: Literal["metadata_accumulator"] = "metadata_accumulator"
generation_mode: str = Field(description="The generation mode that output this image",)
positive_prompt: str = Field(description="The positive prompt parameter")
negative_prompt: str = Field(description="The negative prompt parameter")
width: int = Field(description="The width parameter")
height: int = Field(description="The height parameter")
seed: int = Field(description="The seed used for noise generation")
rand_device: str = Field(description="The device used for random number generation")
cfg_scale: float = Field(description="The classifier-free guidance scale parameter")
steps: int = Field(description="The number of steps used for inference")
scheduler: str = Field(description="The scheduler used for inference")
clip_skip: int = Field(description="The number of skipped CLIP layers",)
model: MainModelField = Field(description="The main model used for inference")
controlnets: list[ControlField]= Field(description="The ControlNets used for inference")
loras: list[LoRAMetadataField] = Field(description="The LoRAs used for inference")
strength: Union[float, None] = Field(
generation_mode: str = InputField(
description="The generation mode that output this image",
)
positive_prompt: str = InputField(description="The positive prompt parameter")
negative_prompt: str = InputField(description="The negative prompt parameter")
width: int = InputField(description="The width parameter")
height: int = InputField(description="The height parameter")
seed: int = InputField(description="The seed used for noise generation")
rand_device: str = InputField(description="The device used for random number generation")
cfg_scale: float = InputField(description="The classifier-free guidance scale parameter")
steps: int = InputField(description="The number of steps used for inference")
scheduler: str = InputField(description="The scheduler used for inference")
clip_skip: int = InputField(
description="The number of skipped CLIP layers",
)
model: MainModelField = InputField(description="The main model used for inference")
controlnets: list[ControlField] = InputField(description="The ControlNets used for inference")
loras: list[LoRAMetadataField] = InputField(description="The LoRAs used for inference")
strength: Optional[float] = InputField(
default=None,
description="The strength used for latents-to-latents",
)
init_image: Union[str, None] = Field(
default=None, description="The name of the initial image"
init_image: Optional[str] = InputField(
default=None,
description="The name of the initial image",
)
vae: Union[VAEModelField, None] = Field(
vae: Optional[VAEModelField] = InputField(
default=None,
description="The VAE used for decoding, if the main model's default was not used",
)
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Metadata Accumulator",
"tags": ["image", "metadata", "generation"]
},
}
# SDXL
positive_style_prompt: Optional[str] = InputField(
default=None,
description="The positive style prompt parameter",
)
negative_style_prompt: Optional[str] = InputField(
default=None,
description="The negative style prompt parameter",
)
# SDXL Refiner
refiner_model: Optional[MainModelField] = InputField(
default=None,
description="The SDXL Refiner model used",
)
refiner_cfg_scale: Optional[float] = InputField(
default=None,
description="The classifier-free guidance scale parameter used for the refiner",
)
refiner_steps: Optional[int] = InputField(
default=None,
description="The number of steps used for the refiner",
)
refiner_scheduler: Optional[str] = InputField(
default=None,
description="The scheduler used for the refiner",
)
refiner_positive_aesthetic_store: Optional[float] = InputField(
default=None,
description="The aesthetic score used for the refiner",
)
refiner_negative_aesthetic_store: Optional[float] = InputField(
default=None,
description="The aesthetic score used for the refiner",
)
refiner_start: Optional[float] = InputField(
default=None,
description="The start value used for refiner denoising",
)
def invoke(self, context: InvocationContext) -> MetadataAccumulatorOutput:
"""Collects and outputs a CoreMetadata object"""
return MetadataAccumulatorOutput(
metadata=CoreMetadata(
generation_mode=self.generation_mode,
positive_prompt=self.positive_prompt,
negative_prompt=self.negative_prompt,
width=self.width,
height=self.height,
seed=self.seed,
rand_device=self.rand_device,
cfg_scale=self.cfg_scale,
steps=self.steps,
scheduler=self.scheduler,
model=self.model,
strength=self.strength,
init_image=self.init_image,
vae=self.vae,
controlnets=self.controlnets,
loras=self.loras,
clip_skip=self.clip_skip,
)
)
return MetadataAccumulatorOutput(metadata=CoreMetadata(**self.dict()))
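The one-liner above works because the accumulator's input fields mirror `CoreMetadata`'s fields, and pydantic v1 models ignore unexpected keys (such as the invocation's `type` and `id`) by default. A minimal sketch of the pattern with hypothetical models:

```python
from pydantic import BaseModel

class Core(BaseModel):
    width: int
    height: int

class Accumulator(BaseModel):
    type: str = "metadata_accumulator"  # extra key, silently ignored by Core
    width: int = 512
    height: int = 512

core = Core(**Accumulator().dict())
print(core)  # width=512 height=512
```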

View File

@@ -4,17 +4,25 @@ from typing import List, Literal, Optional, Union
from pydantic import BaseModel, Field
from ...backend.model_management import BaseModelType, ModelType, SubModelType
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext)
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
InputField,
Input,
InvocationContext,
OutputField,
UIType,
tags,
title,
)
class ModelInfo(BaseModel):
model_name: str = Field(description="Info to load submodel")
base_model: BaseModelType = Field(description="Base model")
model_type: ModelType = Field(description="Info to load submodel")
submodel: Optional[SubModelType] = Field(
default=None, description="Info to load submodel"
)
submodel: Optional[SubModelType] = Field(default=None, description="Info to load submodel")
class LoraInfo(ModelInfo):
@@ -33,6 +41,7 @@ class ClipField(BaseModel):
skipped_layers: int = Field(description="Number of skipped layers in text_encoder")
loras: List[LoraInfo] = Field(description="Loras to apply on model loading")
class VaeField(BaseModel):
# TODO: better naming?
vae: ModelInfo = Field(description="Info to load vae submodel")
@@ -41,19 +50,19 @@
class ModelLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
# fmt: off
type: Literal["model_loader_output"] = "model_loader_output"
unet: UNetField = Field(default=None, description="UNet submodel")
clip: ClipField = Field(default=None, description="Tokenizer and text_encoder submodels")
vae: VaeField = Field(default=None, description="Vae submodel")
# fmt: on
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP")
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
class MainModelField(BaseModel):
"""Main model field"""
model_name: str = Field(description="Name of the model")
base_model: BaseModelType = Field(description="Base model")
model_type: ModelType = Field(description="Model Type")
class LoRAModelField(BaseModel):
@@ -62,24 +71,18 @@ class LoRAModelField(BaseModel):
model_name: str = Field(description="Name of the LoRA model")
base_model: BaseModelType = Field(description="Base model")
@title("Main Model Loader")
@tags("model")
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
type: Literal["main_model_loader"] = "main_model_loader"
model: MainModelField = Field(description="The model to load")
# Inputs
model: MainModelField = InputField(description=FieldDescriptions.main_model, input=Input.Direct)
# TODO: precision?
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Model Loader",
"tags": ["model", "loader"],
"type_hints": {"model": "model"},
},
}
def invoke(self, context: InvocationContext) -> ModelLoaderOutput:
base_model = self.model.base_model
model_name = self.model.model_name
@@ -154,22 +157,6 @@ class MainModelLoaderInvocation(BaseInvocation):
loras=[],
skipped_layers=0,
),
clip2=ClipField(
tokenizer=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Tokenizer2,
),
text_encoder=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.TextEncoder2,
),
loras=[],
skipped_layers=0,
),
vae=VaeField(
vae=ModelInfo(
model_name=model_name,
@@ -180,39 +167,34 @@ class MainModelLoaderInvocation(BaseInvocation):
),
)
class LoraLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
# fmt: off
type: Literal["lora_loader_output"] = "lora_loader_output"
unet: Optional[UNetField] = Field(default=None, description="UNet submodel")
clip: Optional[ClipField] = Field(default=None, description="Tokenizer and text_encoder submodels")
unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
# fmt: on
@title("LoRA Loader")
@tags("lora", "model")
class LoraLoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
type: Literal["lora_loader"] = "lora_loader"
lora: Union[LoRAModelField, None] = Field(
default=None, description="Lora model name"
# Inputs
lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA")
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
default=None, description=FieldDescriptions.unet, input=Input.Connection, title="UNet"
)
clip: Optional[ClipField] = InputField(
default=None, description=FieldDescriptions.clip, input=Input.Connection, title="CLIP"
)
weight: float = Field(default=0.75, description="With what weight to apply lora")
unet: Optional[UNetField] = Field(description="UNet model for applying lora")
clip: Optional[ClipField] = Field(description="Clip model for applying lora")
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Lora Loader",
"tags": ["lora", "loader"],
"type_hints": {"lora": "lora_model"},
},
}
def invoke(self, context: InvocationContext) -> LoraLoaderOutput:
if self.lora is None:
@@ -228,14 +210,10 @@ class LoraLoaderInvocation(BaseInvocation):
):
raise Exception(f"Unkown lora name: {lora_name}!")
if self.unet is not None and any(
lora.model_name == lora_name for lora in self.unet.loras
):
if self.unet is not None and any(lora.model_name == lora_name for lora in self.unet.loras):
raise Exception(f'Lora "{lora_name}" already applied to unet')
if self.clip is not None and any(
lora.model_name == lora_name for lora in self.clip.loras
):
if self.clip is not None and any(lora.model_name == lora_name for lora in self.clip.loras):
raise Exception(f'Lora "{lora_name}" already applied to clip')
output = LoraLoaderOutput()
@@ -267,6 +245,101 @@ class LoraLoaderInvocation(BaseInvocation):
return output
class SDXLLoraLoaderOutput(BaseInvocationOutput):
"""SDXL LoRA Loader Output"""
# fmt: off
type: Literal["sdxl_lora_loader_output"] = "sdxl_lora_loader_output"
unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 1")
clip2: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 2")
# fmt: on
@title("SDXL LoRA Loader")
@tags("sdxl", "lora", "model")
class SDXLLoraLoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
type: Literal["sdxl_lora_loader"] = "sdxl_lora_loader"
lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA")
weight: float = Field(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = Field(
default=None, description=FieldDescriptions.unet, input=Input.Connection, title="UNET"
)
clip: Optional[ClipField] = Field(
default=None, description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 1"
)
clip2: Optional[ClipField] = Field(
default=None, description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 2"
)
def invoke(self, context: InvocationContext) -> SDXLLoraLoaderOutput:
if self.lora is None:
raise Exception("No LoRA provided")
base_model = self.lora.base_model
lora_name = self.lora.model_name
if not context.services.model_manager.model_exists(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
):
raise Exception(f"Unknown lora name: {lora_name}!")
if self.unet is not None and any(lora.model_name == lora_name for lora in self.unet.loras):
raise Exception(f'Lora "{lora_name}" already applied to unet')
if self.clip is not None and any(lora.model_name == lora_name for lora in self.clip.loras):
raise Exception(f'Lora "{lora_name}" already applied to clip')
if self.clip2 is not None and any(lora.model_name == lora_name for lora in self.clip2.loras):
raise Exception(f'Lora "{lora_name}" already applied to clip2')
output = SDXLLoraLoaderOutput()
if self.unet is not None:
output.unet = copy.deepcopy(self.unet)
output.unet.loras.append(
LoraInfo(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
submodel=None,
weight=self.weight,
)
)
if self.clip is not None:
output.clip = copy.deepcopy(self.clip)
output.clip.loras.append(
LoraInfo(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
submodel=None,
weight=self.weight,
)
)
if self.clip2 is not None:
output.clip2 = copy.deepcopy(self.clip2)
output.clip2.loras.append(
LoraInfo(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
submodel=None,
weight=self.weight,
)
)
return output
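The guard-then-append pattern above is easy to miss inside the node boilerplate: each UNet/CLIP field carries a list of applied LoRAs, and re-applying the same name raises instead of silently stacking it twice. A minimal standalone sketch (plain dataclasses standing in for the pydantic field types; not InvokeAI code):

```python
from dataclasses import dataclass, field

@dataclass
class LoraRef:
    model_name: str
    weight: float

@dataclass
class Target:  # stand-in for UNetField/ClipField
    loras: list = field(default_factory=list)

def apply_lora(target: Target, name: str, weight: float) -> None:
    # refuse to stack the same LoRA twice, as the loaders above do
    if any(lora.model_name == name for lora in target.loras):
        raise Exception(f'Lora "{name}" already applied')
    target.loras.append(LoraRef(model_name=name, weight=weight))

unet = Target()
apply_lora(unet, "some-lora", 0.75)   # ok
# apply_lora(unet, "some-lora", 0.5)  # would raise: already applied
```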
class VAEModelField(BaseModel):
"""Vae model field"""
@ -277,29 +350,23 @@ class VAEModelField(BaseModel):
class VaeLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
type: Literal["vae_loader_output"] = "vae_loader_output"

# Outputs
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@title("VAE Loader")
@tags("vae", "model")
class VaeLoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
type: Literal["vae_loader"] = "vae_loader"
# Inputs
vae_model: VAEModelField = InputField(
description=FieldDescriptions.vae_model, input=Input.Direct, ui_type=UIType.VaeModel, title="VAE"
)
def invoke(self, context: InvocationContext) -> VaeLoaderOutput:
base_model = self.vae_model.base_model


@ -1,19 +1,24 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team
import math
from typing import Literal

import torch
from pydantic import validator

from invokeai.app.invocations.latent import LatentsField
from invokeai.app.util.misc import SEED_MAX, get_random_seed

from ...backend.util.devices import choose_torch_device, torch_dtype
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
InputField,
InvocationContext,
OutputField,
UIType,
tags,
title,
)
"""
@ -61,66 +66,57 @@ Nodes
class NoiseOutput(BaseInvocationOutput):
"""Invocation noise output"""
type: Literal["noise_output"] = "noise_output"

noise: LatentsField = OutputField(default=None, description=FieldDescriptions.noise)
width: int = OutputField(description=FieldDescriptions.width)
height: int = OutputField(description=FieldDescriptions.height)
def build_noise_output(latents_name: str, latents: torch.Tensor, seed: int):
return NoiseOutput(
noise=LatentsField(latents_name=latents_name, seed=seed),
width=latents.size()[3] * 8,
height=latents.size()[2] * 8,
)
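The `* 8` above encodes the SD VAE's 8x spatial downsampling, so the output reports width and height in pixel space. A quick illustration, assuming the `[B, C, H/8, W/8]` shape convention the code above implies:

```python
import torch

latents = torch.zeros(1, 4, 64, 96)  # hypothetical 768x512 generation
width = latents.size()[3] * 8        # 96 * 8 = 768
height = latents.size()[2] * 8       # 64 * 8 = 512
```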
@title("Noise")
@tags("latents", "noise")
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""
type: Literal["noise"] = "noise"
# Inputs
seed: int = InputField(
ge=0,
le=SEED_MAX,
description="The seed to use",
description=FieldDescriptions.seed,
default_factory=get_random_seed,
)
width: int = InputField(
default=512,
multiple_of=8,
gt=0,
description="The width of the resulting noise",
description=FieldDescriptions.width,
)
height: int = InputField(
default=512,
multiple_of=8,
gt=0,
description="The height of the resulting noise",
description=FieldDescriptions.height,
)
use_cpu: bool = InputField(
default=True,
description="Use CPU for noise generation (for reproducible results across platforms)",
)
@validator("seed", pre=True)
def modulo_seed(cls, v):
"""Returns the seed modulo SEED_MAX to ensure it is within the valid range."""
return v % SEED_MAX
"""Returns the seed modulo (SEED_MAX + 1) to ensure it is within the valid range."""
return v % (SEED_MAX + 1)
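The `+ 1` matters because valid seeds span 0..SEED_MAX inclusive. A tiny check (the SEED_MAX value below is a stand-in, not necessarily the constant the app uses):

```python
SEED_MAX = 2**32 - 1                           # stand-in value
assert SEED_MAX % (SEED_MAX + 1) == SEED_MAX   # max seed stays valid
assert (SEED_MAX + 1) % (SEED_MAX + 1) == 0    # one past max wraps to 0
# with the old `v % SEED_MAX`, v == SEED_MAX itself would wrap to 0
```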
def invoke(self, context: InvocationContext) -> NoiseOutput:
noise = get_noise(
@ -132,4 +128,4 @@ class NoiseInvocation(BaseInvocation):
)
name = f"{context.graph_execution_state_id}__{self.id}"
context.services.latents.save(name, noise)
return build_noise_output(latents_name=name, latents=noise, seed=self.seed)
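Why `use_cpu=True` is the reproducibility default: a seeded CPU generator produces identical tensors on every platform, while GPU RNG streams can vary by device and driver. A minimal sketch of the idea (not the node's actual implementation):

```python
import torch

def make_noise(seed: int, shape, use_cpu: bool = True) -> torch.Tensor:
    device = torch.device("cpu") if use_cpu else torch.device("cuda")
    generator = torch.Generator(device=device).manual_seed(seed)
    return torch.randn(shape, generator=generator, device=device)

a = make_noise(1234, (1, 4, 64, 64))
b = make_noise(1234, (1, 4, 64, 64))
assert torch.equal(a, b)  # same seed -> identical CPU noise
```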


@ -0,0 +1,512 @@
# Copyright (c) 2023 Borisov Sergey (https://github.com/StAlKeR7779)
import inspect
import re
from contextlib import ExitStack
from typing import List, Literal, Optional, Union
import numpy as np
import torch
from diffusers import ControlNetModel, DPMSolverMultistepScheduler
from diffusers.image_processor import VaeImageProcessor
from diffusers.schedulers import SchedulerMixin as Scheduler
from pydantic import BaseModel, Field, validator
from tqdm import tqdm
from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput, ImageField, ImageOutput
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend import BaseModelType, ModelType, SubModelType
from ...backend.model_management import ONNXModelPatcher
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.util import choose_torch_device
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
InputField,
Input,
InvocationContext,
OutputField,
UIComponent,
UIType,
tags,
title,
)
from .controlnet_image_processors import ControlField
from .latent import SAMPLER_NAME_VALUES, LatentsField, LatentsOutput, build_latents_output, get_scheduler
from .model import ClipField, ModelInfo, UNetField, VaeField
ORT_TO_NP_TYPE = {
"tensor(bool)": np.bool_,
"tensor(int8)": np.int8,
"tensor(uint8)": np.uint8,
"tensor(int16)": np.int16,
"tensor(uint16)": np.uint16,
"tensor(int32)": np.int32,
"tensor(uint32)": np.uint32,
"tensor(int64)": np.int64,
"tensor(uint64)": np.uint64,
"tensor(float16)": np.float16,
"tensor(float)": np.float32,
"tensor(double)": np.float64,
}
PRECISION_VALUES = Literal[tuple(list(ORT_TO_NP_TYPE.keys()))]
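How the mapping is used downstream: the ONNX runtime reports input dtypes as strings like "tensor(float16)", and host-side numpy arrays must be cast to the matching dtype before being fed to the session. A short sketch using the `ORT_TO_NP_TYPE` table above:

```python
import numpy as np

ort_type = "tensor(float16)"          # e.g. as reported by session.get_inputs()
np_dtype = ORT_TO_NP_TYPE[ort_type]   # -> np.float16
latents = np.zeros((1, 4, 64, 64), dtype=np.float32).astype(np_dtype)
assert latents.dtype == np.float16
```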
@title("ONNX Prompt (Raw)")
@tags("onnx", "prompt")
class ONNXPromptInvocation(BaseInvocation):
type: Literal["prompt_onnx"] = "prompt_onnx"
prompt: str = InputField(default="", description=FieldDescriptions.raw_prompt, ui_component=UIComponent.Textarea)
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
def invoke(self, context: InvocationContext) -> ConditioningOutput:
tokenizer_info = context.services.model_manager.get_model(
**self.clip.tokenizer.dict(),
)
text_encoder_info = context.services.model_manager.get_model(
**self.clip.text_encoder.dict(),
)
with tokenizer_info as orig_tokenizer, text_encoder_info as text_encoder, ExitStack() as stack:
loras = [
(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight)
for lora in self.clip.loras
]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
name = trigger[1:-1]
try:
ti_list.append(
(
name,
context.services.model_manager.get_model(
model_name=name,
base_model=self.clip.text_encoder.base_model,
model_type=ModelType.TextualInversion,
).context.model,
)
)
except Exception:
# print(e)
# import traceback
# print(traceback.format_exc())
print(f'Warn: trigger: "{trigger}" not found')
if loras or ti_list:
text_encoder.release_session()
with ONNXModelPatcher.apply_lora_text_encoder(text_encoder, loras), ONNXModelPatcher.apply_ti(
orig_tokenizer, text_encoder, ti_list
) as (tokenizer, ti_manager):
text_encoder.create_session()
# copy from
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L153
text_inputs = tokenizer(
self.prompt,
padding="max_length",
max_length=tokenizer.model_max_length,
truncation=True,
return_tensors="np",
)
text_input_ids = text_inputs.input_ids
"""
untruncated_ids = tokenizer(prompt, padding="max_length", return_tensors="np").input_ids
if not np.array_equal(text_input_ids, untruncated_ids):
removed_text = self.tokenizer.batch_decode(
untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
)
logger.warning(
"The following part of your input was truncated because CLIP can only handle sequences up to"
f" {self.tokenizer.model_max_length} tokens: {removed_text}"
)
"""
prompt_embeds = text_encoder(input_ids=text_input_ids.astype(np.int32))[0]
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
# TODO: hacky but works ;D maybe rename latents somehow?
context.services.latents.save(conditioning_name, (prompt_embeds, None))
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
# Text to image
@title("ONNX Text to Latents")
@tags("latents", "inference", "txt2img", "onnx")
class ONNXTextToLatentsInvocation(BaseInvocation):
"""Generates latents from conditionings."""
type: Literal["t2l_onnx"] = "t2l_onnx"
# Inputs
positive_conditioning: ConditioningField = InputField(
description=FieldDescriptions.positive_cond,
input=Input.Connection,
)
negative_conditioning: ConditioningField = InputField(
description=FieldDescriptions.negative_cond,
input=Input.Connection,
)
noise: LatentsField = InputField(
description=FieldDescriptions.noise,
input=Input.Connection,
)
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
cfg_scale: Union[float, List[float]] = InputField(
default=7.5,
ge=1,
description=FieldDescriptions.cfg_scale,
ui_type=UIType.Float,
)
scheduler: SAMPLER_NAME_VALUES = InputField(
default="euler", description=FieldDescriptions.scheduler, input=Input.Direct
)
precision: PRECISION_VALUES = InputField(default="tensor(float16)", description=FieldDescriptions.precision)
unet: UNetField = InputField(
description=FieldDescriptions.unet,
input=Input.Connection,
)
control: Optional[Union[ControlField, list[ControlField]]] = InputField(
default=None,
description=FieldDescriptions.control,
ui_type=UIType.Control,
)
# seamless: bool = InputField(default=False, description="Whether or not to generate an image that can tile without seams", )
# seamless_axes: str = InputField(default="", description="The axes to tile the image on, 'x' and/or 'y'")
@validator("cfg_scale")
def ge_one(cls, v):
"""validate that all cfg_scale values are >= 1"""
if isinstance(v, list):
for i in v:
if i < 1:
raise ValueError("cfg_scale must be greater than 1")
else:
if v < 1:
raise ValueError("cfg_scale must be greater than 1")
return v
# based on
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L375
def invoke(self, context: InvocationContext) -> LatentsOutput:
c, _ = context.services.latents.get(self.positive_conditioning.conditioning_name)
uc, _ = context.services.latents.get(self.negative_conditioning.conditioning_name)
graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
if isinstance(c, torch.Tensor):
c = c.cpu().numpy()
if isinstance(uc, torch.Tensor):
uc = uc.cpu().numpy()
device = torch.device(choose_torch_device())
prompt_embeds = np.concatenate([uc, c])
latents = context.services.latents.get(self.noise.latents_name)
if isinstance(latents, torch.Tensor):
latents = latents.cpu().numpy()
# TODO: better execution device handling
latents = latents.astype(ORT_TO_NP_TYPE[self.precision])
# get the initial random noise unless the user supplied it
do_classifier_free_guidance = True
# latents_dtype = prompt_embeds.dtype
# latents_shape = (batch_size * num_images_per_prompt, 4, height // 8, width // 8)
# if latents.shape != latents_shape:
# raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
seed=0, # TODO: refactor this node
)
def torch2numpy(latent: torch.Tensor):
return latent.cpu().numpy()
def numpy2torch(latent, device):
return torch.from_numpy(latent).to(device)
def dispatch_progress(
self, context: InvocationContext, source_node_id: str, intermediate_state: PipelineIntermediateState
) -> None:
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
node=self.dict(),
source_node_id=source_node_id,
)
scheduler.set_timesteps(self.steps)
latents = latents * np.float64(scheduler.init_noise_sigma)
extra_step_kwargs = dict()
if "eta" in set(inspect.signature(scheduler.step).parameters.keys()):
extra_step_kwargs.update(
eta=0.0,
)
unet_info = context.services.model_manager.get_model(**self.unet.unet.dict())
with unet_info as unet, ExitStack() as stack:
# loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
loras = [
(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight)
for lora in self.unet.loras
]
if loras:
unet.release_session()
with ONNXModelPatcher.apply_lora_unet(unet, loras):
# TODO:
_, _, h, w = latents.shape
unet.create_session(h, w)
timestep_dtype = next(
(input.type for input in unet.session.get_inputs() if input.name == "timestep"), "tensor(float16)"
)
timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype]
for i in tqdm(range(len(scheduler.timesteps))):
t = scheduler.timesteps[i]
# expand the latents if we are doing classifier free guidance
latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
latent_model_input = scheduler.scale_model_input(numpy2torch(latent_model_input, device), t)
latent_model_input = latent_model_input.cpu().numpy()
# predict the noise residual
timestep = np.array([t], dtype=timestep_dtype)
noise_pred = unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds)
noise_pred = noise_pred[0]
# perform guidance
if do_classifier_free_guidance:
noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
noise_pred = noise_pred_uncond + self.cfg_scale * (noise_pred_text - noise_pred_uncond)
# compute the previous noisy sample x_t -> x_t-1
scheduler_output = scheduler.step(
numpy2torch(noise_pred, device), t, numpy2torch(latents, device), **extra_step_kwargs
)
latents = torch2numpy(scheduler_output.prev_sample)
state = PipelineIntermediateState(
run_id="test", step=i, timestep=timestep, latents=scheduler_output.prev_sample
)
dispatch_progress(self, context=context, source_node_id=source_node_id, intermediate_state=state)
# call the callback, if provided
# if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
torch.cuda.empty_cache()
name = f"{context.graph_execution_state_id}__{self.id}"
context.services.latents.save(name, latents)
return build_latents_output(latents_name=name, latents=torch.from_numpy(latents))
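The guidance arithmetic buried in the loop above, in isolation: one batched UNet call evaluates the unconditional and text-conditioned predictions together, and `cfg_scale` pushes the result away from the unconditional prediction. A sketch mirroring the numpy math:

```python
import numpy as np

def apply_cfg(noise_pred: np.ndarray, cfg_scale: float) -> np.ndarray:
    # batch axis holds [uncond, text], matching np.concatenate([uc, c]) above
    noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
    return noise_pred_uncond + cfg_scale * (noise_pred_text - noise_pred_uncond)

pred = np.stack([np.zeros((4, 8, 8)), np.ones((4, 8, 8))])
assert np.allclose(apply_cfg(pred, cfg_scale=7.5), 7.5)  # 0 + 7.5 * (1 - 0)
```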
# Latent to image
@title("ONNX Latents to Image")
@tags("latents", "image", "vae", "onnx")
class ONNXLatentsToImageInvocation(BaseInvocation):
"""Generates an image from latents."""
type: Literal["l2i_onnx"] = "l2i_onnx"
# Inputs
latents: LatentsField = InputField(
description=FieldDescriptions.denoised_latents,
input=Input.Connection,
)
vae: VaeField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
metadata: Optional[CoreMetadata] = InputField(
default=None,
description=FieldDescriptions.core_metadata,
ui_hidden=True,
)
# tiled: bool = InputField(default=False, description="Decode latents by overlapping tiles (less memory consumption)")
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.services.latents.get(self.latents.latents_name)
if self.vae.vae.submodel != SubModelType.VaeDecoder:
raise Exception(f"Expected vae_decoder, found: {self.vae.vae.model_type}")
vae_info = context.services.model_manager.get_model(
**self.vae.vae.dict(),
)
# clear memory as vae decode can request a lot
torch.cuda.empty_cache()
with vae_info as vae:
vae.create_session()
# copied from
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L427
latents = 1 / 0.18215 * latents
# image = self.vae_decoder(latent_sample=latents)[0]
# it seems like there is a strange result when using a half-precision vae decoder if batch size > 1
image = np.concatenate([vae(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])])
image = np.clip(image / 2 + 0.5, 0, 1)
image = image.transpose((0, 2, 3, 1))
image = VaeImageProcessor.numpy_to_pil(image)[0]
torch.cuda.empty_cache()
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
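The decode post-processing above, restated step by step (a sketch; `np.tanh` stands in for the real VAE decoder, which also emits values in roughly [-1, 1]):

```python
import numpy as np

latents = np.random.randn(1, 4, 64, 64).astype(np.float32)
latents = 1 / 0.18215 * latents           # undo the SD latent scaling factor
decoded = np.tanh(latents[:, :3])         # stand-in for vae(latent_sample=...)
image = np.clip(decoded / 2 + 0.5, 0, 1)  # map [-1, 1] -> [0, 1]
image = image.transpose((0, 2, 3, 1))     # NCHW -> NHWC for PIL conversion
assert image.shape == (1, 64, 64, 3)
```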
class ONNXModelLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
# fmt: off
type: Literal["model_loader_output_onnx"] = "model_loader_output_onnx"
unet: UNetField = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
vae_decoder: VaeField = OutputField(default=None, description=FieldDescriptions.vae, title="VAE Decoder")
vae_encoder: VaeField = OutputField(default=None, description=FieldDescriptions.vae, title="VAE Encoder")
# fmt: on
class OnnxModelField(BaseModel):
"""Onnx model field"""
model_name: str = Field(description="Name of the model")
base_model: BaseModelType = Field(description="Base model")
model_type: ModelType = Field(description="Model Type")
@title("ONNX Model Loader")
@tags("onnx", "model")
class OnnxModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
type: Literal["onnx_model_loader"] = "onnx_model_loader"
# Inputs
model: OnnxModelField = InputField(
description=FieldDescriptions.onnx_main_model, input=Input.Direct, ui_type=UIType.ONNXModel
)
def invoke(self, context: InvocationContext) -> ONNXModelLoaderOutput:
base_model = self.model.base_model
model_name = self.model.model_name
model_type = ModelType.ONNX
# TODO: not found exceptions
if not context.services.model_manager.model_exists(
model_name=model_name,
base_model=base_model,
model_type=model_type,
):
raise Exception(f"Unknown {base_model} {model_type} model: {model_name}")
"""
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.Tokenizer,
):
raise Exception(
f"Failed to find tokenizer submodel in {self.model_name}! Check if model corrupted"
)
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.TextEncoder,
):
raise Exception(
f"Failed to find text_encoder submodel in {self.model_name}! Check if model corrupted"
)
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.UNet,
):
raise Exception(
f"Failed to find unet submodel from {self.model_name}! Check if model corrupted"
)
"""
return ONNXModelLoaderOutput(
unet=UNetField(
unet=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.UNet,
),
scheduler=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Scheduler,
),
loras=[],
),
clip=ClipField(
tokenizer=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Tokenizer,
),
text_encoder=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.TextEncoder,
),
loras=[],
skipped_layers=0,
),
vae_decoder=VaeField(
vae=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.VaeDecoder,
),
),
vae_encoder=VaeField(
vae=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.VaeEncoder,
),
),
)


@ -1,61 +1,68 @@
import io
from typing import Literal, Optional

import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
from easing_functions import (
BackEaseIn,
BackEaseInOut,
BackEaseOut,
BounceEaseIn,
BounceEaseInOut,
BounceEaseOut,
CircularEaseIn,
CircularEaseInOut,
CircularEaseOut,
CubicEaseIn,
CubicEaseInOut,
CubicEaseOut,
ElasticEaseIn,
ElasticEaseInOut,
ElasticEaseOut,
ExponentialEaseIn,
ExponentialEaseInOut,
ExponentialEaseOut,
LinearInOut,
QuadEaseIn,
QuadEaseInOut,
QuadEaseOut,
QuarticEaseIn,
QuarticEaseInOut,
QuarticEaseOut,
QuinticEaseIn,
QuinticEaseInOut,
QuinticEaseOut,
SineEaseIn,
SineEaseInOut,
SineEaseOut,
)
from matplotlib.figure import Figure
from matplotlib.ticker import MaxNLocator
from pydantic import BaseModel, Field

from invokeai.app.invocations.primitives import FloatCollectionOutput
from ...backend.util.logging import InvokeAILogger
from .baseinvocation import BaseInvocation, InputField, InvocationContext, tags, title
@title("Float Range")
@tags("math", "range")
class FloatLinearRangeInvocation(BaseInvocation):
"""Creates a range"""
type: Literal["float_range"] = "float_range"
# Inputs
start: float = InputField(default=5, description="The first value of the range")
stop: float = InputField(default=10, description="The last value of the range")
steps: int = InputField(default=30, description="number of values to interpolate over (including start and stop)")
def invoke(self, context: InvocationContext) -> FloatCollectionOutput:
param_list = list(np.linspace(self.start, self.stop, self.steps))
return FloatCollectionOutput(collection=param_list)
EASING_FUNCTIONS_MAP = {
@ -92,43 +99,32 @@ EASING_FUNCTIONS_MAP = {
"BounceInOut": BounceEaseInOut,
}
EASING_FUNCTION_KEYS = Literal[tuple(list(EASING_FUNCTIONS_MAP.keys()))]
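The one-liner above derives the accepted enum values directly from the dict's keys, so the UI choices and the lookup table cannot drift apart. The same pattern in miniature (runtime-only: static checkers can't follow a dynamic Literal, but pydantic validates against it):

```python
from typing import Literal, get_args

OPTIONS = {"Linear": None, "QuadIn": None}
OPTION_KEYS = Literal[tuple(OPTIONS.keys())]

assert set(get_args(OPTION_KEYS)) == set(OPTIONS.keys())
```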
# actually I think for now could just use CollectionOutput (which is list[Any])
@title("Step Param Easing")
@tags("step", "easing")
class StepParamEasingInvocation(BaseInvocation):
"""Experimental per-step parameter easing for denoising steps"""
type: Literal["step_param_easing"] = "step_param_easing"
# Inputs
easing: EASING_FUNCTION_KEYS = InputField(default="Linear", description="The easing function to use")
num_steps: int = InputField(default=20, description="number of denoising steps")
start_value: float = InputField(default=0.0, description="easing starting value")
end_value: float = InputField(default=1.0, description="easing ending value")
start_step_percent: float = InputField(default=0.0, description="fraction of steps at which to start easing")
end_step_percent: float = InputField(default=1.0, description="fraction of steps after which to end easing")
# if None, then start_value is used prior to easing start
pre_start_value: Optional[float] = InputField(default=None, description="value before easing start")
# if None, then end_value is used after easing end
post_end_value: Optional[float] = InputField(default=None, description="value after easing end")
mirror: bool = InputField(default=False, description="include mirror of easing function")
# FIXME: add alt_mirror option (alternative to default or mirror), or remove entirely
# alt_mirror: bool = InputField(default=False, description="alternative mirroring by dual easing")
show_easing_plot: bool = InputField(default=False, description="show easing plot")
def invoke(self, context: InvocationContext) -> FloatCollectionOutput:
log_diagnostics = False
@ -170,12 +166,13 @@ class StepParamEasingInvocation(BaseInvocation):
# and create reverse copy of list[1:end-1]
# but if even then number_of_steps/2 === ceil(number_of_steps/2), so can just use ceil always
base_easing_duration = int(np.ceil(num_easing_steps / 2.0))
if log_diagnostics:
context.services.logger.debug("base easing duration: " + str(base_easing_duration))
even_num_steps = num_easing_steps % 2 == 0  # even number of steps
easing_function = easing_class(
start=self.start_value, end=self.end_value, duration=base_easing_duration - 1
)
base_easing_vals = list()
for step_index in range(base_easing_duration):
easing_val = easing_function.ease(step_index)
@ -214,9 +211,7 @@ class StepParamEasingInvocation(BaseInvocation):
#
else: # no mirroring (default)
easing_function = easing_class(start=self.start_value, end=self.end_value, duration=num_easing_steps - 1)
for step_index in range(num_easing_steps):
step_val = easing_function.ease(step_index)
easing_list.append(step_val)
@ -240,13 +235,11 @@ class StepParamEasingInvocation(BaseInvocation):
ax = plt.gca()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
buf = io.BytesIO()
plt.savefig(buf, format="png")
buf.seek(0)
im = PIL.Image.open(buf)
im.show()
buf.close()
# output array of size steps, each entry list[i] is param value for step i
return FloatCollectionOutput(collection=param_list)
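For reference, the easing_functions API the node drives: each class takes start/end/duration keyword arguments and exposes `.ease(step)`, exactly as in the `easing_class(...)` calls above.

```python
from easing_functions import QuadEaseInOut

easing = QuadEaseInOut(start=0.0, end=1.0, duration=9)
values = [easing.ease(step) for step in range(10)]  # steps 0..duration
assert abs(values[0] - 0.0) < 1e-9 and abs(values[-1] - 1.0) < 1e-9
```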


@ -1,70 +0,0 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from typing import Literal
from pydantic import Field
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext)
from .math import FloatOutput, IntOutput
# Pass-through parameter nodes - used by subgraphs
class ParamIntInvocation(BaseInvocation):
"""An integer parameter"""
#fmt: off
type: Literal["param_int"] = "param_int"
a: int = Field(default=0, description="The integer value")
#fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["param", "integer"],
"title": "Integer Parameter"
},
}
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=self.a)
class ParamFloatInvocation(BaseInvocation):
"""A float parameter"""
#fmt: off
type: Literal["param_float"] = "param_float"
param: float = Field(default=0.0, description="The float value")
#fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["param", "float"],
"title": "Float Parameter"
},
}
def invoke(self, context: InvocationContext) -> FloatOutput:
return FloatOutput(param=self.param)
class StringOutput(BaseInvocationOutput):
"""A string output"""
type: Literal["string_output"] = "string_output"
text: str = Field(default=None, description="The output string")
class ParamStringInvocation(BaseInvocation):
"""A string parameter"""
type: Literal['param_string'] = 'param_string'
text: str = Field(default='', description='The string value')
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["param", "string"],
"title": "String Parameter"
},
}
def invoke(self, context: InvocationContext) -> StringOutput:
return StringOutput(text=self.text)


@ -0,0 +1,494 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from typing import Literal, Optional, Tuple, Union
from anyio import Condition
from pydantic import BaseModel, Field
import torch
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
Input,
InputField,
InvocationContext,
OutputField,
UIComponent,
UIType,
tags,
title,
)
"""
Primitives: Boolean, Integer, Float, String, Image, Latents, Conditioning, Color
- primitive nodes
- primitive outputs
- primitive collection outputs
"""
# region Boolean
class BooleanOutput(BaseInvocationOutput):
"""Base class for nodes that output a single boolean"""
type: Literal["boolean_output"] = "boolean_output"
a: bool = OutputField(description="The output boolean")
class BooleanCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of booleans"""
type: Literal["boolean_collection_output"] = "boolean_collection_output"
# Outputs
collection: list[bool] = OutputField(
default_factory=list, description="The output boolean collection", ui_type=UIType.BooleanCollection
)
@title("Boolean Primitive")
@tags("primitives", "boolean")
class BooleanInvocation(BaseInvocation):
"""A boolean primitive value"""
type: Literal["boolean"] = "boolean"
# Inputs
a: bool = InputField(default=False, description="The boolean value")
def invoke(self, context: InvocationContext) -> BooleanOutput:
return BooleanOutput(a=self.a)
@title("Boolean Primitive Collection")
@tags("primitives", "boolean", "collection")
class BooleanCollectionInvocation(BaseInvocation):
"""A collection of boolean primitive values"""
type: Literal["boolean_collection"] = "boolean_collection"
# Inputs
collection: list[bool] = InputField(
default=False, description="The collection of boolean values", ui_type=UIType.BooleanCollection
)
def invoke(self, context: InvocationContext) -> BooleanCollectionOutput:
return BooleanCollectionOutput(collection=self.collection)
# endregion
# region Integer
class IntegerOutput(BaseInvocationOutput):
"""Base class for nodes that output a single integer"""
type: Literal["integer_output"] = "integer_output"
a: int = OutputField(description="The output integer")
class IntegerCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of integers"""
type: Literal["integer_collection_output"] = "integer_collection_output"
# Outputs
collection: list[int] = OutputField(
default_factory=list, description="The int collection", ui_type=UIType.IntegerCollection
)
@title("Integer Primitive")
@tags("primitives", "integer")
class IntegerInvocation(BaseInvocation):
"""An integer primitive value"""
type: Literal["integer"] = "integer"
# Inputs
a: int = InputField(default=0, description="The integer value")
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=self.a)
@title("Integer Primitive Collection")
@tags("primitives", "integer", "collection")
class IntegerCollectionInvocation(BaseInvocation):
"""A collection of integer primitive values"""
type: Literal["integer_collection"] = "integer_collection"
# Inputs
collection: list[int] = InputField(
default=0, description="The collection of integer values", ui_type=UIType.IntegerCollection
)
def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
return IntegerCollectionOutput(collection=self.collection)
# endregion
# region Float
class FloatOutput(BaseInvocationOutput):
"""Base class for nodes that output a single float"""
type: Literal["float_output"] = "float_output"
a: float = OutputField(description="The output float")
class FloatCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of floats"""
type: Literal["float_collection_output"] = "float_collection_output"
# Outputs
collection: list[float] = OutputField(
default_factory=list, description="The float collection", ui_type=UIType.FloatCollection
)
@title("Float Primitive")
@tags("primitives", "float")
class FloatInvocation(BaseInvocation):
"""A float primitive value"""
type: Literal["float"] = "float"
# Inputs
param: float = InputField(default=0.0, description="The float value")
def invoke(self, context: InvocationContext) -> FloatOutput:
return FloatOutput(a=self.param)
@title("Float Primitive Collection")
@tags("primitives", "float", "collection")
class FloatCollectionInvocation(BaseInvocation):
"""A collection of float primitive values"""
type: Literal["float_collection"] = "float_collection"
# Inputs
collection: list[float] = InputField(
default=0, description="The collection of float values", ui_type=UIType.FloatCollection
)
def invoke(self, context: InvocationContext) -> FloatCollectionOutput:
return FloatCollectionOutput(collection=self.collection)
# endregion
# region String
class StringOutput(BaseInvocationOutput):
"""Base class for nodes that output a single string"""
type: Literal["string_output"] = "string_output"
text: str = OutputField(description="The output string")
class StringCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of strings"""
type: Literal["string_collection_output"] = "string_collection_output"
# Outputs
collection: list[str] = OutputField(
default_factory=list, description="The output strings", ui_type=UIType.StringCollection
)
@title("String Primitive")
@tags("primitives", "string")
class StringInvocation(BaseInvocation):
"""A string primitive value"""
type: Literal["string"] = "string"
# Inputs
text: str = InputField(default="", description="The string value", ui_component=UIComponent.Textarea)
def invoke(self, context: InvocationContext) -> StringOutput:
return StringOutput(text=self.text)
@title("String Primitive Collection")
@tags("primitives", "string", "collection")
class StringCollectionInvocation(BaseInvocation):
"""A collection of string primitive values"""
type: Literal["string_collection"] = "string_collection"
# Inputs
collection: list[str] = InputField(
default=0, description="The collection of string values", ui_type=UIType.StringCollection
)
def invoke(self, context: InvocationContext) -> StringCollectionOutput:
return StringCollectionOutput(collection=self.collection)
# endregion
# region Image
class ImageField(BaseModel):
"""An image primitive field"""
image_name: str = Field(description="The name of the image")
class ImageOutput(BaseInvocationOutput):
"""Base class for nodes that output a single image"""
type: Literal["image_output"] = "image_output"
image: ImageField = OutputField(description="The output image")
width: int = OutputField(description="The width of the image in pixels")
height: int = OutputField(description="The height of the image in pixels")
class ImageCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of images"""
type: Literal["image_collection_output"] = "image_collection_output"
# Outputs
collection: list[ImageField] = OutputField(
default_factory=list, description="The output images", ui_type=UIType.ImageCollection
)
@title("Image Primitive")
@tags("primitives", "image")
class ImageInvocation(BaseInvocation):
"""An image primitive value"""
# Metadata
type: Literal["image"] = "image"
# Inputs
image: ImageField = InputField(description="The image to load")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
return ImageOutput(
image=ImageField(image_name=self.image.image_name),
width=image.width,
height=image.height,
)
@title("Image Primitive Collection")
@tags("primitives", "image", "collection")
class ImageCollectionInvocation(BaseInvocation):
"""A collection of image primitive values"""
type: Literal["image_collection"] = "image_collection"
# Inputs
collection: list[ImageField] = InputField(
default=0, description="The collection of image values", ui_type=UIType.ImageCollection
)
def invoke(self, context: InvocationContext) -> ImageCollectionOutput:
return ImageCollectionOutput(collection=self.collection)
# endregion
# region Latents
class LatentsField(BaseModel):
"""A latents tensor primitive field"""
latents_name: str = Field(description="The name of the latents")
seed: Optional[int] = Field(default=None, description="Seed used to generate this latents")
class LatentsOutput(BaseInvocationOutput):
"""Base class for nodes that output a single latents tensor"""
type: Literal["latents_output"] = "latents_output"
latents: LatentsField = OutputField(
description=FieldDescriptions.latents,
)
width: int = OutputField(description=FieldDescriptions.width)
height: int = OutputField(description=FieldDescriptions.height)
class LatentsCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of latents tensors"""
type: Literal["latents_collection_output"] = "latents_collection_output"
collection: list[LatentsField] = OutputField(
default_factory=list,
description=FieldDescriptions.latents,
ui_type=UIType.LatentsCollection,
)
@title("Latents Primitive")
@tags("primitives", "latents")
class LatentsInvocation(BaseInvocation):
"""A latents tensor primitive value"""
type: Literal["latents"] = "latents"
# Inputs
latents: LatentsField = InputField(description="The latents tensor", input=Input.Connection)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.services.latents.get(self.latents.latents_name)
return build_latents_output(self.latents.latents_name, latents)
@title("Latents Primitive Collection")
@tags("primitives", "latents", "collection")
class LatentsCollectionInvocation(BaseInvocation):
"""A collection of latents tensor primitive values"""
type: Literal["latents_collection"] = "latents_collection"
# Inputs
collection: list[LatentsField] = InputField(
default=0, description="The collection of latents tensors", ui_type=UIType.LatentsCollection
)
def invoke(self, context: InvocationContext) -> LatentsCollectionOutput:
return LatentsCollectionOutput(collection=self.collection)
def build_latents_output(latents_name: str, latents: torch.Tensor, seed: Optional[int] = None):
return LatentsOutput(
latents=LatentsField(latents_name=latents_name, seed=seed),
width=latents.size()[3] * 8,
height=latents.size()[2] * 8,
)
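A small usage sketch of the helper above: it packages the tensor's storage name, the seed that produced it (when known), and pixel-space dimensions at the VAE's 8x factor. The latents name below is hypothetical:

```python
import torch

latents = torch.zeros(1, 4, 96, 64)  # hypothetical [B, C, H/8, W/8] tensor
out = build_latents_output("session123__node7", latents, seed=42)
assert (out.width, out.height) == (64 * 8, 96 * 8)
assert out.latents.seed == 42
```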
# endregion
# region Color
class ColorField(BaseModel):
"""A color primitive field"""
r: int = Field(ge=0, le=255, description="The red component")
g: int = Field(ge=0, le=255, description="The green component")
b: int = Field(ge=0, le=255, description="The blue component")
a: int = Field(ge=0, le=255, description="The alpha component")
def tuple(self) -> Tuple[int, int, int, int]:
return (self.r, self.g, self.b, self.a)
class ColorOutput(BaseInvocationOutput):
"""Base class for nodes that output a single color"""
type: Literal["color_output"] = "color_output"
color: ColorField = OutputField(description="The output color")
class ColorCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of colors"""
type: Literal["color_collection_output"] = "color_collection_output"
# Outputs
collection: list[ColorField] = OutputField(
default_factory=list, description="The output colors", ui_type=UIType.ColorCollection
)
@title("Color Primitive")
@tags("primitives", "color")
class ColorInvocation(BaseInvocation):
"""A color primitive value"""
type: Literal["color"] = "color"
# Inputs
color: ColorField = InputField(default=ColorField(r=0, g=0, b=0, a=255), description="The color value")
def invoke(self, context: InvocationContext) -> ColorOutput:
return ColorOutput(color=self.color)
# endregion
# region Conditioning
class ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""
conditioning_name: str = Field(description="The name of conditioning tensor")
class ConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single conditioning tensor"""
type: Literal["conditioning_output"] = "conditioning_output"
conditioning: ConditioningField = OutputField(description=FieldDescriptions.cond)
class ConditioningCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of conditioning tensors"""
type: Literal["conditioning_collection_output"] = "conditioning_collection_output"
# Outputs
collection: list[ConditioningField] = OutputField(
default_factory=list,
description="The output conditioning tensors",
ui_type=UIType.ConditioningCollection,
)
@title("Conditioning Primitive")
@tags("primitives", "conditioning")
class ConditioningInvocation(BaseInvocation):
"""A conditioning tensor primitive value"""
type: Literal["conditioning"] = "conditioning"
conditioning: ConditioningField = InputField(description=FieldDescriptions.cond, input=Input.Connection)
def invoke(self, context: InvocationContext) -> ConditioningOutput:
return ConditioningOutput(conditioning=self.conditioning)
@title("Conditioning Primitive Collection")
@tags("primitives", "conditioning", "collection")
class ConditioningCollectionInvocation(BaseInvocation):
"""A collection of conditioning tensor primitive values"""
type: Literal["conditioning_collection"] = "conditioning_collection"
# Inputs
collection: list[ConditioningField] = InputField(
default=0, description="The collection of conditioning tensors", ui_type=UIType.ConditioningCollection
)
def invoke(self, context: InvocationContext) -> ConditioningCollectionOutput:
return ConditioningCollectionOutput(collection=self.collection)
# endregion


@ -1,62 +1,28 @@
from os.path import exists
from typing import Literal, Optional, Union

import numpy as np
from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator
from pydantic import validator

from invokeai.app.invocations.primitives import StringCollectionOutput
class PromptOutput(BaseInvocationOutput):
"""Base class for invocations that output a prompt"""
#fmt: off
type: Literal["prompt"] = "prompt"
prompt: str = Field(default=None, description="The output prompt")
#fmt: on
class Config:
schema_extra = {
'required': [
'type',
'prompt',
]
}
class PromptCollectionOutput(BaseInvocationOutput):
"""Base class for invocations that output a collection of prompts"""
# fmt: off
type: Literal["prompt_collection_output"] = "prompt_collection_output"
prompt_collection: list[str] = Field(description="The output prompt collection")
count: int = Field(description="The size of the prompt collection")
# fmt: on
class Config:
schema_extra = {"required": ["type", "prompt_collection", "count"]}
from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIComponent, UIType, tags, title
@title("Dynamic Prompt")
@tags("prompt", "collection")
class DynamicPromptInvocation(BaseInvocation):
"""Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator"""
type: Literal["dynamic_prompt"] = "dynamic_prompt"
prompt: str = Field(description="The prompt to parse with dynamicprompts")
max_prompts: int = Field(default=1, description="The number of prompts to generate")
combinatorial: bool = Field(
default=False, description="Whether to use the combinatorial generator"
)
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Dynamic Prompt",
"tags": ["prompt", "dynamic"]
},
}
# Inputs
prompt: str = InputField(description="The prompt to parse with dynamicprompts", ui_component=UIComponent.Textarea)
max_prompts: int = InputField(default=1, description="The number of prompts to generate")
combinatorial: bool = InputField(default=False, description="Whether to use the combinatorial generator")
def invoke(self, context: InvocationContext) -> StringCollectionOutput:
if self.combinatorial:
generator = CombinatorialPromptGenerator()
prompts = generator.generate(self.prompt, max_prompts=self.max_prompts)
@ -64,29 +30,26 @@ class DynamicPromptInvocation(BaseInvocation):
generator = RandomPromptGenerator()
prompts = generator.generate(self.prompt, num_images=self.max_prompts)
return StringCollectionOutput(collection=prompts)
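The two generator modes side by side, using the same calls as the node (the `{a|b|c}` variant syntax is standard dynamicprompts; the counts here are illustrative):

```python
from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator

template = "a {red|green|blue} ball"
sampled = RandomPromptGenerator().generate(template, num_images=2)
exhaustive = CombinatorialPromptGenerator().generate(template, max_prompts=10)
assert len(sampled) == 2
assert len(exhaustive) == 3  # one prompt per variant
```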
@title("Prompts from File")
@tags("prompt", "file")
class PromptsFromFileInvocation(BaseInvocation):
"""Loads prompts from a text file"""
type: Literal["prompt_from_file"] = "prompt_from_file"
# Inputs
file_path: str = Field(description="Path to prompt text file")
pre_prompt: Optional[str] = Field(description="String to prepend to each prompt")
post_prompt: Optional[str] = Field(description="String to append to each prompt")
start_line: int = Field(default=1, ge=1, description="Line in the file to start start from")
max_prompts: int = Field(default=1, ge=0, description="Max lines to read from file (0=all)")
#fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Prompts From File",
"tags": ["prompt", "file"]
},
}
file_path: str = InputField(description="Path to prompt text file", ui_type=UIType.FilePath)
pre_prompt: Optional[str] = InputField(
default=None, description="String to prepend to each prompt", ui_component=UIComponent.Textarea
)
post_prompt: Optional[str] = InputField(
default=None, description="String to append to each prompt", ui_component=UIComponent.Textarea
)
start_line: int = InputField(default=1, ge=1, description="Line in the file to start start from")
max_prompts: int = InputField(default=1, ge=0, description="Max lines to read from file (0=all)")
@validator("file_path")
def file_path_exists(cls, v):
@ -94,7 +57,14 @@ class PromptsFromFileInvocation(BaseInvocation):
raise ValueError(FileNotFoundError)
return v
def promptsFromFile(
self,
file_path: str,
pre_prompt: Union[str, None],
post_prompt: Union[str, None],
start_line: int,
max_prompts: int,
):
prompts = []
start_line -= 1
end_line = start_line + max_prompts
@ -103,11 +73,13 @@ class PromptsFromFileInvocation(BaseInvocation):
with open(file_path) as f:
for i, line in enumerate(f):
if i >= start_line and i < end_line:
prompts.append((pre_prompt or "") + line.strip() + (post_prompt or ""))
if i >= end_line:
break
return prompts
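The windowing logic above in plain form: start_line is 1-indexed and max_prompts=0 means "read to end of file". A standalone equivalent (a sketch, not the node's code):

```python
def window(lines: list, start_line: int, max_prompts: int) -> list:
    start = start_line - 1  # 1-indexed -> 0-indexed
    end = start + max_prompts if max_prompts > 0 else len(lines)
    return [line.strip() for line in lines[start:end]]

assert window(["a", "b", "c", "d"], start_line=2, max_prompts=2) == ["b", "c"]
assert window(["a", "b", "c"], start_line=1, max_prompts=0) == ["a", "b", "c"]
```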
def invoke(self, context: InvocationContext) -> StringCollectionOutput:
prompts = self.promptsFromFile(
self.file_path, self.pre_prompt, self.post_prompt, self.start_line, self.max_prompts
)
return StringCollectionOutput(collection=prompts)


@ -1,59 +1,55 @@
from typing import Literal

from ...backend.model_management import ModelType, SubModelType
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
tags,
title,
)
from .model import ClipField, MainModelField, ModelInfo, UNetField, VaeField
class SDXLModelLoaderOutput(BaseInvocationOutput):
"""SDXL base model loader output"""
type: Literal["sdxl_model_loader_output"] = "sdxl_model_loader_output"

unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 1")
clip2: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 2")
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
class SDXLRefinerModelLoaderOutput(BaseInvocationOutput):
"""SDXL refiner model loader output"""
type: Literal["sdxl_refiner_model_loader_output"] = "sdxl_refiner_model_loader_output"

unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
clip2: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 2")
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@title("SDXL Main Model Loader")
@tags("model", "sdxl")
class SDXLModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl base model, outputting its submodels."""
type: Literal["sdxl_model_loader"] = "sdxl_model_loader"
model: MainModelField = Field(description="The model to load")
# Inputs
model: MainModelField = InputField(
description=FieldDescriptions.sdxl_main_model, input=Input.Direct, ui_type=UIType.SDXLMainModel
)
# TODO: precision?
def invoke(self, context: InvocationContext) -> SDXLModelLoaderOutput:
base_model = self.model.base_model
model_name = self.model.model_name
@ -125,23 +121,22 @@ class SDXLModelLoaderInvocation(BaseInvocation):
),
)
@title("SDXL Refiner Model Loader")
@tags("model", "sdxl", "refiner")
class SDXLRefinerModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl refiner model, outputting its submodels."""
type: Literal["sdxl_refiner_model_loader"] = "sdxl_refiner_model_loader"
model: MainModelField = Field(description="The model to load")
# Inputs
model: MainModelField = InputField(
description=FieldDescriptions.sdxl_refiner_model,
input=Input.Direct,
ui_type=UIType.SDXLRefinerModel,
)
# TODO: precision?
def invoke(self, context: InvocationContext) -> SDXLRefinerModelLoaderOutput:
base_model = self.model.base_model
model_name = self.model.model_name
@ -196,514 +191,3 @@ class SDXLRefinerModelLoaderInvocation(BaseInvocation):
),
),
)
# Text to image
class SDXLTextToLatentsInvocation(BaseInvocation):
"""Generates latents from conditionings."""
type: Literal["t2l_sdxl"] = "t2l_sdxl"
# Inputs
# fmt: off
positive_conditioning: Optional[ConditioningField] = Field(description="Positive conditioning for generation")
negative_conditioning: Optional[ConditioningField] = Field(description="Negative conditioning for generation")
noise: Optional[LatentsField] = Field(description="The noise to use")
steps: int = Field(default=10, gt=0, description="The number of steps to use to generate the image")
cfg_scale: Union[float, List[float]] = Field(default=7.5, ge=1, description="The Classifier-Free Guidance scale; higher values keep the result closer to the prompt")
scheduler: SAMPLER_NAME_VALUES = Field(default="euler", description="The scheduler to use" )
unet: UNetField = Field(default=None, description="UNet submodel")
denoising_end: float = Field(default=1.0, gt=0, le=1, description="")
#control: Union[ControlField, list[ControlField]] = Field(default=None, description="The control to use")
#seamless: bool = Field(default=False, description="Whether or not to generate an image that can tile without seams", )
#seamless_axes: str = Field(default="", description="The axes to tile the image on, 'x' and/or 'y'")
# fmt: on
@validator("cfg_scale")
def ge_one(cls, v):
"""validate that all cfg_scale values are >= 1"""
if isinstance(v, list):
for i in v:
if i < 1:
raise ValueError('cfg_scale must be greater than or equal to 1')
else:
if v < 1:
raise ValueError('cfg_scale must be greater than or equal to 1')
return v
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "SDXL Text To Latents",
"tags": ["latents"],
"type_hints": {
"model": "model",
# "cfg_scale": "float",
"cfg_scale": "number"
}
},
}
def dispatch_progress(
self,
context: InvocationContext,
source_node_id: str,
sample,
step,
total_steps,
) -> None:
stable_diffusion_xl_step_callback(
context=context,
node=self.dict(),
source_node_id=source_node_id,
sample=sample,
step=step,
total_steps=total_steps,
)
# based on
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L375
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
graph_execution_state = context.services.graph_execution_manager.get(
context.graph_execution_state_id
)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
latents = context.services.latents.get(self.noise.latents_name)
positive_cond_data = context.services.latents.get(self.positive_conditioning.conditioning_name)
prompt_embeds = positive_cond_data.conditionings[0].embeds
pooled_prompt_embeds = positive_cond_data.conditionings[0].pooled_embeds
add_time_ids = positive_cond_data.conditionings[0].add_time_ids
negative_cond_data = context.services.latents.get(self.negative_conditioning.conditioning_name)
negative_prompt_embeds = negative_cond_data.conditionings[0].embeds
negative_pooled_prompt_embeds = negative_cond_data.conditionings[0].pooled_embeds
add_neg_time_ids = negative_cond_data.conditionings[0].add_time_ids
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
)
num_inference_steps = self.steps
scheduler.set_timesteps(num_inference_steps)
timesteps = scheduler.timesteps
latents = latents * scheduler.init_noise_sigma
unet_info = context.services.model_manager.get_model(
**self.unet.unet.dict()
)
do_classifier_free_guidance = True
cross_attention_kwargs = None
with unet_info as unet:
extra_step_kwargs = dict()
if "eta" in set(inspect.signature(scheduler.step).parameters.keys()):
extra_step_kwargs.update(
eta=0.0,
)
if "generator" in set(inspect.signature(scheduler.step).parameters.keys()):
extra_step_kwargs.update(
generator=torch.Generator(device=unet.device).manual_seed(0),
)
num_warmup_steps = len(timesteps) - self.steps * scheduler.order
# apply denoising_end
skipped_final_steps = int(round((1 - self.denoising_end) * self.steps))
num_inference_steps = num_inference_steps - skipped_final_steps
timesteps = timesteps[: num_warmup_steps + scheduler.order * num_inference_steps]
if not context.services.configuration.sequential_guidance:
prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
add_text_embeds = torch.cat([negative_pooled_prompt_embeds, pooled_prompt_embeds], dim=0)
add_time_ids = torch.cat([add_neg_time_ids, add_time_ids], dim=0)
prompt_embeds = prompt_embeds.to(device=unet.device, dtype=unet.dtype)
add_text_embeds = add_text_embeds.to(device=unet.device, dtype=unet.dtype)
add_time_ids = add_time_ids.to(device=unet.device, dtype=unet.dtype)
latents = latents.to(device=unet.device, dtype=unet.dtype)
with tqdm(total=num_inference_steps) as progress_bar:
for i, t in enumerate(timesteps):
# expand the latents if we are doing classifier free guidance
latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
latent_model_input = scheduler.scale_model_input(latent_model_input, t)
# predict the noise residual
added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
noise_pred = unet(
latent_model_input,
t,
encoder_hidden_states=prompt_embeds,
cross_attention_kwargs=cross_attention_kwargs,
added_cond_kwargs=added_cond_kwargs,
return_dict=False,
)[0]
# perform guidance
if do_classifier_free_guidance:
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
noise_pred = noise_pred_uncond + self.cfg_scale * (noise_pred_text - noise_pred_uncond)
#del noise_pred_uncond
#del noise_pred_text
#if do_classifier_free_guidance and guidance_rescale > 0.0:
# # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
# noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
# compute the previous noisy sample x_t -> x_t-1
latents = scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
# call the callback, if provided
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % scheduler.order == 0):
progress_bar.update()
self.dispatch_progress(context, source_node_id, latents, i, num_inference_steps)
#if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
else:
negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.to(device=unet.device, dtype=unet.dtype)
negative_prompt_embeds = negative_prompt_embeds.to(device=unet.device, dtype=unet.dtype)
add_neg_time_ids = add_neg_time_ids.to(device=unet.device, dtype=unet.dtype)
pooled_prompt_embeds = pooled_prompt_embeds.to(device=unet.device, dtype=unet.dtype)
prompt_embeds = prompt_embeds.to(device=unet.device, dtype=unet.dtype)
add_time_ids = add_time_ids.to(device=unet.device, dtype=unet.dtype)
latents = latents.to(device=unet.device, dtype=unet.dtype)
with tqdm(total=num_inference_steps) as progress_bar:
for i, t in enumerate(timesteps):
# expand the latents if we are doing classifier free guidance
#latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
latent_model_input = scheduler.scale_model_input(latents, t)
#import gc
#gc.collect()
#torch.cuda.empty_cache()
# predict the noise residual
added_cond_kwargs = {"text_embeds": negative_pooled_prompt_embeds, "time_ids": add_neg_time_ids}
noise_pred_uncond = unet(
latent_model_input,
t,
encoder_hidden_states=negative_prompt_embeds,
cross_attention_kwargs=cross_attention_kwargs,
added_cond_kwargs=added_cond_kwargs,
return_dict=False,
)[0]
added_cond_kwargs = {"text_embeds": pooled_prompt_embeds, "time_ids": add_time_ids}
noise_pred_text = unet(
latent_model_input,
t,
encoder_hidden_states=prompt_embeds,
cross_attention_kwargs=cross_attention_kwargs,
added_cond_kwargs=added_cond_kwargs,
return_dict=False,
)[0]
# perform guidance
noise_pred = noise_pred_uncond + self.cfg_scale * (noise_pred_text - noise_pred_uncond)
#del noise_pred_text
#del noise_pred_uncond
#import gc
#gc.collect()
#torch.cuda.empty_cache()
#if do_classifier_free_guidance and guidance_rescale > 0.0:
# # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
# noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
# compute the previous noisy sample x_t -> x_t-1
latents = scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
#del noise_pred
#import gc
#gc.collect()
#torch.cuda.empty_cache()
# call the callback, if provided
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % scheduler.order == 0):
progress_bar.update()
self.dispatch_progress(context, source_node_id, latents, i, num_inference_steps)
#if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
#################
latents = latents.to("cpu")
torch.cuda.empty_cache()
name = f'{context.graph_execution_state_id}__{self.id}'
context.services.latents.save(name, latents)
return build_latents_output(latents_name=name, latents=latents)
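Two pieces of arithmetic in the loop above are easy to miss: the classifier-free-guidance combination and the denoising_end truncation. A small sketch with illustrative numbers, not tied to any real scheduler (order taken as 1):

import torch

steps, denoising_end = 30, 0.8
skipped_final_steps = int(round((1 - denoising_end) * steps))  # 6
num_inference_steps = steps - skipped_final_steps              # 24 of 30 steps run

# CFG: push the prediction away from the unconditional branch and toward
# the text-conditioned branch, scaled by cfg_scale.
def guide(noise_pred_uncond, noise_pred_text, cfg_scale=7.5):
    return noise_pred_uncond + cfg_scale * (noise_pred_text - noise_pred_uncond)

uncond = torch.zeros(1, 4, 8, 8)
text = torch.ones(1, 4, 8, 8)
assert torch.allclose(guide(uncond, text), torch.full((1, 4, 8, 8), 7.5))

Note the two branches above compute the same thing: the batched path runs one UNet call on the [negative, positive] concatenation and splits with chunk(2), while the sequential_guidance path trades a second UNet call for roughly half the peak activation memory.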
class SDXLLatentsToLatentsInvocation(BaseInvocation):
"""Generates latents from conditionings."""
type: Literal["l2l_sdxl"] = "l2l_sdxl"
# Inputs
# fmt: off
positive_conditioning: Optional[ConditioningField] = Field(description="Positive conditioning for generation")
negative_conditioning: Optional[ConditioningField] = Field(description="Negative conditioning for generation")
noise: Optional[LatentsField] = Field(description="The noise to use")
steps: int = Field(default=10, gt=0, description="The number of steps to use to generate the image")
cfg_scale: Union[float, List[float]] = Field(default=7.5, ge=1, description="The Classifier-Free Guidance scale; higher values may produce results closer to the prompt", )
scheduler: SAMPLER_NAME_VALUES = Field(default="euler", description="The scheduler to use" )
unet: UNetField = Field(default=None, description="UNet submodel")
latents: Optional[LatentsField] = Field(description="Initial latents")
denoising_start: float = Field(default=0.0, ge=0, lt=1, description="")
denoising_end: float = Field(default=1.0, gt=0, le=1, description="")
#control: Union[ControlField, list[ControlField]] = Field(default=None, description="The control to use")
#seamless: bool = Field(default=False, description="Whether or not to generate an image that can tile without seams", )
#seamless_axes: str = Field(default="", description="The axes to tile the image on, 'x' and/or 'y'")
# fmt: on
@validator("cfg_scale")
def ge_one(cls, v):
"""validate that all cfg_scale values are >= 1"""
if isinstance(v, list):
for i in v:
if i < 1:
raise ValueError('cfg_scale must be greater than or equal to 1')
else:
if v < 1:
raise ValueError('cfg_scale must be greater than or equal to 1')
return v
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "SDXL Latents to Latents",
"tags": ["latents"],
"type_hints": {
"model": "model",
# "cfg_scale": "float",
"cfg_scale": "number"
}
},
}
def dispatch_progress(
self,
context: InvocationContext,
source_node_id: str,
sample,
step,
total_steps,
) -> None:
stable_diffusion_xl_step_callback(
context=context,
node=self.dict(),
source_node_id=source_node_id,
sample=sample,
step=step,
total_steps=total_steps,
)
# based on
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L375
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
graph_execution_state = context.services.graph_execution_manager.get(
context.graph_execution_state_id
)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
latents = context.services.latents.get(self.latents.latents_name)
positive_cond_data = context.services.latents.get(self.positive_conditioning.conditioning_name)
prompt_embeds = positive_cond_data.conditionings[0].embeds
pooled_prompt_embeds = positive_cond_data.conditionings[0].pooled_embeds
add_time_ids = positive_cond_data.conditionings[0].add_time_ids
negative_cond_data = context.services.latents.get(self.negative_conditioning.conditioning_name)
negative_prompt_embeds = negative_cond_data.conditionings[0].embeds
negative_pooled_prompt_embeds = negative_cond_data.conditionings[0].pooled_embeds
add_neg_time_ids = negative_cond_data.conditionings[0].add_time_ids
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
)
# apply denoising_start
num_inference_steps = self.steps
scheduler.set_timesteps(num_inference_steps)
t_start = int(round(self.denoising_start * num_inference_steps))
timesteps = scheduler.timesteps[t_start * scheduler.order:]
num_inference_steps = num_inference_steps - t_start
# apply noise(if provided)
if self.noise is not None:
noise = context.services.latents.get(self.noise.latents_name)
latents = scheduler.add_noise(latents, noise, timesteps[:1])
del noise
unet_info = context.services.model_manager.get_model(
**self.unet.unet.dict()
)
do_classifier_free_guidance = True
cross_attention_kwargs = None
with unet_info as unet:
# apply scheduler extra args
extra_step_kwargs = dict()
if "eta" in set(inspect.signature(scheduler.step).parameters.keys()):
extra_step_kwargs.update(
eta=0.0,
)
if "generator" in set(inspect.signature(scheduler.step).parameters.keys()):
extra_step_kwargs.update(
generator=torch.Generator(device=unet.device).manual_seed(0),
)
num_warmup_steps = max(len(timesteps) - num_inference_steps * scheduler.order, 0)
# apply denoising_end
skipped_final_steps = int(round((1 - self.denoising_end) * self.steps))
num_inference_steps = num_inference_steps - skipped_final_steps
timesteps = timesteps[: num_warmup_steps + scheduler.order * num_inference_steps]
if not context.services.configuration.sequential_guidance:
prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
add_text_embeds = torch.cat([negative_pooled_prompt_embeds, pooled_prompt_embeds], dim=0)
add_time_ids = torch.cat([add_neg_time_ids, add_time_ids], dim=0)
prompt_embeds = prompt_embeds.to(device=unet.device, dtype=unet.dtype)
add_text_embeds = add_text_embeds.to(device=unet.device, dtype=unet.dtype)
add_time_ids = add_time_ids.to(device=unet.device, dtype=unet.dtype)
latents = latents.to(device=unet.device, dtype=unet.dtype)
with tqdm(total=num_inference_steps) as progress_bar:
for i, t in enumerate(timesteps):
# expand the latents if we are doing classifier free guidance
latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
latent_model_input = scheduler.scale_model_input(latent_model_input, t)
# predict the noise residual
added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
noise_pred = unet(
latent_model_input,
t,
encoder_hidden_states=prompt_embeds,
cross_attention_kwargs=cross_attention_kwargs,
added_cond_kwargs=added_cond_kwargs,
return_dict=False,
)[0]
# perform guidance
if do_classifier_free_guidance:
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
noise_pred = noise_pred_uncond + self.cfg_scale * (noise_pred_text - noise_pred_uncond)
#del noise_pred_uncond
#del noise_pred_text
#if do_classifier_free_guidance and guidance_rescale > 0.0:
# # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
# noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
# compute the previous noisy sample x_t -> x_t-1
latents = scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
# call the callback, if provided
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % scheduler.order == 0):
progress_bar.update()
self.dispatch_progress(context, source_node_id, latents, i, num_inference_steps)
#if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
else:
negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.to(device=unet.device, dtype=unet.dtype)
negative_prompt_embeds = negative_prompt_embeds.to(device=unet.device, dtype=unet.dtype)
add_neg_time_ids = add_neg_time_ids.to(device=unet.device, dtype=unet.dtype)
pooled_prompt_embeds = pooled_prompt_embeds.to(device=unet.device, dtype=unet.dtype)
prompt_embeds = prompt_embeds.to(device=unet.device, dtype=unet.dtype)
add_time_ids = add_time_ids.to(device=unet.device, dtype=unet.dtype)
latents = latents.to(device=unet.device, dtype=unet.dtype)
with tqdm(total=num_inference_steps) as progress_bar:
for i, t in enumerate(timesteps):
# expand the latents if we are doing classifier free guidance
#latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
latent_model_input = scheduler.scale_model_input(latents, t)
#import gc
#gc.collect()
#torch.cuda.empty_cache()
# predict the noise residual
added_cond_kwargs = {"text_embeds": negative_pooled_prompt_embeds, "time_ids": add_time_ids}
noise_pred_uncond = unet(
latent_model_input,
t,
encoder_hidden_states=negative_prompt_embeds,
cross_attention_kwargs=cross_attention_kwargs,
added_cond_kwargs=added_cond_kwargs,
return_dict=False,
)[0]
added_cond_kwargs = {"text_embeds": pooled_prompt_embeds, "time_ids": add_time_ids}
noise_pred_text = unet(
latent_model_input,
t,
encoder_hidden_states=prompt_embeds,
cross_attention_kwargs=cross_attention_kwargs,
added_cond_kwargs=added_cond_kwargs,
return_dict=False,
)[0]
# perform guidance
noise_pred = noise_pred_uncond + self.cfg_scale * (noise_pred_text - noise_pred_uncond)
#del noise_pred_text
#del noise_pred_uncond
#import gc
#gc.collect()
#torch.cuda.empty_cache()
#if do_classifier_free_guidance and guidance_rescale > 0.0:
# # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
# noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
# compute the previous noisy sample x_t -> x_t-1
latents = scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
#del noise_pred
#import gc
#gc.collect()
#torch.cuda.empty_cache()
# call the callback, if provided
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % scheduler.order == 0):
progress_bar.update()
self.dispatch_progress(context, source_node_id, latents, i, num_inference_steps)
#if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
#################
latents = latents.to("cpu")
torch.cuda.empty_cache()
name = f'{context.graph_execution_state_id}__{self.id}'
context.services.latents.save(name, latents)
return build_latents_output(latents_name=name, latents=latents)
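denoising_start and denoising_end are what let a base pass and a refiner pass share one schedule: the base node stops early (e.g. denoising_end=0.8) and this node resumes from the same fraction (denoising_start=0.8). A sketch of the slicing with order fixed at 1 (real schedulers may use a higher order; the timestep values are stand-ins):

steps = 30
timesteps = list(range(steps - 1, -1, -1))  # stand-in descending schedule

def slice_timesteps(timesteps, denoising_start, denoising_end, steps, order=1):
    t_start = int(round(denoising_start * steps))
    ts = timesteps[t_start * order:]
    n = steps - t_start
    n -= int(round((1 - denoising_end) * steps))  # drop the skipped final steps
    return ts[: n * order], n

base, n_base = slice_timesteps(timesteps, 0.0, 0.8, steps)    # steps 0..23
refiner, n_ref = slice_timesteps(timesteps, 0.8, 1.0, steps)  # steps 24..29
assert base + refiner == timesteps and n_base + n_ref == steps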

View File

@@ -6,13 +6,12 @@ import cv2 as cv
import numpy as np
from basicsr.archs.rrdbnet_arch import RRDBNet
from PIL import Image
from pydantic import Field
from realesrgan import RealESRGANer
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.models.image import ImageCategory, ImageField, ResourceOrigin
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InvocationConfig, InvocationContext
from .image import ImageOutput
from .baseinvocation import BaseInvocation, InputField, InvocationContext, title, tags
# TODO: Populate this from disk?
# TODO: Use model manager to load?
@@ -24,22 +23,16 @@ ESRGAN_MODELS = Literal[
]
@title("Upscale (RealESRGAN)")
@tags("esrgan", "upscale")
class ESRGANInvocation(BaseInvocation):
"""Upscales an image using RealESRGAN."""
type: Literal["esrgan"] = "esrgan"
image: Union[ImageField, None] = Field(default=None, description="The input image")
model_name: ESRGAN_MODELS = Field(
default="RealESRGAN_x4plus.pth", description="The Real-ESRGAN model to use"
)
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Upscale (RealESRGAN)",
"tags": ["image", "upscale", "realesrgan"]
},
}
# Inputs
image: ImageField = InputField(description="The input image")
model_name: ESRGAN_MODELS = InputField(default="RealESRGAN_x4plus.pth", description="The Real-ESRGAN model to use")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@@ -108,9 +101,7 @@ class ESRGANInvocation(BaseInvocation):
upscaled_image, img_mode = upsampler.enhance(cv_image)
# back to PIL
pil_image = Image.fromarray(
cv.cvtColor(upscaled_image, cv.COLOR_BGR2RGB)
).convert("RGBA")
pil_image = Image.fromarray(cv.cvtColor(upscaled_image, cv.COLOR_BGR2RGB)).convert("RGBA")
image_dto = context.services.images.create(
image=pil_image,

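The one-liner above packs in a colour-order conversion: RealESRGAN works on OpenCV arrays, which are BGR, while PIL expects RGB. A standalone round-trip check with illustrative pixel values:

import cv2 as cv
import numpy as np
from PIL import Image

bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255  # channel 0 is blue in OpenCV's BGR order
pil_image = Image.fromarray(cv.cvtColor(bgr, cv.COLOR_BGR2RGB)).convert("RGBA")
assert pil_image.getpixel((0, 0)) == (0, 0, 255, 255)  # blue again, opaque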
View File

@@ -1,3 +1,4 @@
class CanceledException(Exception):
"""Execution canceled by user."""
pass

View File

@@ -1,10 +1,18 @@
from enum import Enum
from typing import Optional, Tuple
from pydantic import BaseModel, Field
from invokeai.app.util.metaenum import MetaEnum
class ProgressImage(BaseModel):
"""The progress image sent intermittently during processing"""
width: int = Field(description="The effective width of the image in pixels")
height: int = Field(description="The effective height of the image in pixels")
dataURL: str = Field(description="The image data as a b64 data URL")
class ResourceOrigin(str, Enum, metaclass=MetaEnum):
"""The origin of a resource (eg image).
@@ -61,30 +69,3 @@ class InvalidImageCategoryException(ValueError):
def __init__(self, message="Invalid image category."):
super().__init__(message)
class ImageField(BaseModel):
"""An image field used for passing image objects between invocations"""
image_name: Optional[str] = Field(default=None, description="The name of the image")
class Config:
schema_extra = {"required": ["image_name"]}
class ColorField(BaseModel):
r: int = Field(ge=0, le=255, description="The red component")
g: int = Field(ge=0, le=255, description="The green component")
b: int = Field(ge=0, le=255, description="The blue component")
a: int = Field(ge=0, le=255, description="The alpha component")
def tuple(self) -> Tuple[int, int, int, int]:
return (self.r, self.g, self.b, self.a)
class ProgressImage(BaseModel):
"""The progress image sent intermittently during processing"""
width: int = Field(description="The effective width of the image in pixels")
height: int = Field(description="The effective height of the image in pixels")
dataURL: str = Field(description="The image data as a b64 data URL")

View File

@@ -25,7 +25,6 @@ class BoardImageRecordStorageBase(ABC):
@abstractmethod
def remove_image_from_board(
self,
board_id: str,
image_name: str,
) -> None:
"""Removes an image from a board."""
@@ -154,7 +153,6 @@ class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
def remove_image_from_board(
self,
board_id: str,
image_name: str,
) -> None:
try:
@@ -162,9 +160,9 @@ class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
self._cursor.execute(
"""--sql
DELETE FROM board_images
WHERE board_id = ? AND image_name = ?;
WHERE image_name = ?;
""",
(board_id, image_name),
(image_name,),
)
self._conn.commit()
except sqlite3.Error as e:
@@ -207,9 +205,7 @@ class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
raise e
finally:
self._lock.release()
return OffsetPaginatedResults(
items=images, offset=offset, limit=limit, total=count
)
return OffsetPaginatedResults(items=images, offset=offset, limit=limit, total=count)
def get_all_board_image_names_for_board(self, board_id: str) -> list[str]:
try:

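The DELETE above was narrowed to key on image_name alone, which is sound only if an image can sit on at most one board (the board_id predicate then adds nothing). A standalone sketch of the rewritten statement against a throwaway table; the UNIQUE constraint here is an assumption standing in for whatever the real schema enforces:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE board_images (board_id TEXT, image_name TEXT UNIQUE)")
conn.execute("INSERT INTO board_images VALUES ('b1', 'img-001.png')")
conn.execute("DELETE FROM board_images WHERE image_name = ?;", ("img-001.png",))
conn.commit()
assert conn.execute("SELECT COUNT(*) FROM board_images").fetchone()[0] == 0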
View File

@@ -31,7 +31,6 @@ class BoardImagesServiceABC(ABC):
@abstractmethod
def remove_image_from_board(
self,
board_id: str,
image_name: str,
) -> None:
"""Removes an image from a board."""
@@ -93,18 +92,15 @@ class BoardImagesService(BoardImagesServiceABC):
def remove_image_from_board(
self,
board_id: str,
image_name: str,
) -> None:
self._services.board_image_records.remove_image_from_board(board_id, image_name)
self._services.board_image_records.remove_image_from_board(image_name)
def get_all_board_image_names_for_board(
self,
board_id: str,
) -> list[str]:
return self._services.board_image_records.get_all_board_image_names_for_board(
board_id
)
return self._services.board_image_records.get_all_board_image_names_for_board(board_id)
def get_board_for_image(
self,
@@ -114,9 +110,7 @@ class BoardImagesService(BoardImagesServiceABC):
return board_id
def board_record_to_dto(
board_record: BoardRecord, cover_image_name: Optional[str], image_count: int
) -> BoardDTO:
def board_record_to_dto(board_record: BoardRecord, cover_image_name: Optional[str], image_count: int) -> BoardDTO:
"""Converts a board record to a board DTO."""
return BoardDTO(
**board_record.dict(exclude={"cover_image_name"}),

View File

@@ -15,9 +15,7 @@ from pydantic import BaseModel, Field, Extra
class BoardChanges(BaseModel, extra=Extra.forbid):
board_name: Optional[str] = Field(description="The board's new name.")
cover_image_name: Optional[str] = Field(
description="The name of the board's new cover image."
)
cover_image_name: Optional[str] = Field(description="The name of the board's new cover image.")
class BoardRecordNotFoundException(Exception):
@@ -292,9 +290,7 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
count = cast(int, self._cursor.fetchone()[0])
return OffsetPaginatedResults[BoardRecord](
items=boards, offset=offset, limit=limit, total=count
)
return OffsetPaginatedResults[BoardRecord](items=boards, offset=offset, limit=limit, total=count)
except sqlite3.Error as e:
self._conn.rollback()

View File

@@ -108,16 +108,12 @@ class BoardService(BoardServiceABC):
def get_dto(self, board_id: str) -> BoardDTO:
board_record = self._services.board_records.get(board_id)
cover_image = self._services.image_records.get_most_recent_image_for_board(
board_record.board_id
)
cover_image = self._services.image_records.get_most_recent_image_for_board(board_record.board_id)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self._services.board_image_records.get_image_count_for_board(
board_id
)
image_count = self._services.board_image_records.get_image_count_for_board(board_id)
return board_record_to_dto(board_record, cover_image_name, image_count)
def update(
@@ -126,60 +122,44 @@ class BoardService(BoardServiceABC):
changes: BoardChanges,
) -> BoardDTO:
board_record = self._services.board_records.update(board_id, changes)
cover_image = self._services.image_records.get_most_recent_image_for_board(
board_record.board_id
)
cover_image = self._services.image_records.get_most_recent_image_for_board(board_record.board_id)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self._services.board_image_records.get_image_count_for_board(
board_id
)
image_count = self._services.board_image_records.get_image_count_for_board(board_id)
return board_record_to_dto(board_record, cover_image_name, image_count)
def delete(self, board_id: str) -> None:
self._services.board_records.delete(board_id)
def get_many(
self, offset: int = 0, limit: int = 10
) -> OffsetPaginatedResults[BoardDTO]:
def get_many(self, offset: int = 0, limit: int = 10) -> OffsetPaginatedResults[BoardDTO]:
board_records = self._services.board_records.get_many(offset, limit)
board_dtos = []
for r in board_records.items:
cover_image = self._services.image_records.get_most_recent_image_for_board(
r.board_id
)
cover_image = self._services.image_records.get_most_recent_image_for_board(r.board_id)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self._services.board_image_records.get_image_count_for_board(
r.board_id
)
image_count = self._services.board_image_records.get_image_count_for_board(r.board_id)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count))
return OffsetPaginatedResults[BoardDTO](
items=board_dtos, offset=offset, limit=limit, total=len(board_dtos)
)
return OffsetPaginatedResults[BoardDTO](items=board_dtos, offset=offset, limit=limit, total=len(board_dtos))
def get_all(self) -> list[BoardDTO]:
board_records = self._services.board_records.get_all()
board_dtos = []
for r in board_records:
cover_image = self._services.image_records.get_most_recent_image_for_board(
r.board_id
)
cover_image = self._services.image_records.get_most_recent_image_for_board(r.board_id)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self._services.board_image_records.get_image_count_for_board(
r.board_id
)
image_count = self._services.board_image_records.get_image_count_for_board(r.board_id)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count))
return board_dtos
return board_dtos

View File

@@ -1,6 +1,6 @@
# Copyright (c) 2023 Lincoln Stein (https://github.com/lstein) and the InvokeAI Development Team
'''Invokeai configuration system.
"""Invokeai configuration system.
Arguments and fields are taken from the pydantic definition of the
model. Defaults can be set by creating a yaml configuration file that
@@ -24,12 +24,10 @@ InvokeAI:
sequential_guidance: false
precision: float16
max_cache_size: 6
max_vram_cache_size: 2.7
max_vram_cache_size: 0.5
always_use_cpu: false
free_gpu_mem: false
Features:
nsfw_checker: true
restore: true
esrgan: true
patchmatch: true
internet_available: true
@@ -92,18 +90,18 @@ Typical usage at the top level file:
from invokeai.app.services.config import InvokeAIAppConfig
# get global configuration and print its nsfw_checker value
# get global configuration and print its cache size
conf = InvokeAIAppConfig.get_config()
conf.parse_args()
print(conf.nsfw_checker)
print(conf.max_cache_size)
Typical usage in a backend module:
from invokeai.app.services.config import InvokeAIAppConfig
# get global configuration and print its nsfw_checker value
# get global configuration and print its cache size value
conf = InvokeAIAppConfig.get_config()
print(conf.nsfw_checker)
print(conf.max_cache_size)
Computed properties:
@@ -159,76 +157,85 @@ two configs are kept in separate sections of the config file:
outdir: outputs
...
'''
"""
from __future__ import annotations
import argparse
import pydoc
import os
import sys
from argparse import ArgumentParser
from omegaconf import OmegaConf, DictConfig
from omegaconf import OmegaConf, DictConfig, ListConfig
from pathlib import Path
from pydantic import BaseSettings, Field, parse_obj_as
from typing import ClassVar, Dict, List, Set, Literal, Union, get_origin, get_type_hints, get_args
INIT_FILE = Path('invokeai.yaml')
MODEL_CORE = Path('models/core')
DB_FILE = Path('invokeai.db')
LEGACY_INIT_FILE = Path('invokeai.init')
INIT_FILE = Path("invokeai.yaml")
DB_FILE = Path("invokeai.db")
LEGACY_INIT_FILE = Path("invokeai.init")
DEFAULT_MAX_VRAM = 0.5
class InvokeAISettings(BaseSettings):
'''
"""
Runtime configuration settings in which default values are
read from an omegaconf .yaml file.
'''
initconf : ClassVar[DictConfig] = None
argparse_groups : ClassVar[Dict] = {}
"""
def parse_args(self, argv: list=sys.argv[1:]):
initconf: ClassVar[DictConfig] = None
argparse_groups: ClassVar[Dict] = {}
def parse_args(self, argv: list = sys.argv[1:]):
parser = self.get_parser()
opt = parser.parse_args(argv)
for name in self.__fields__:
if name not in self._excluded():
setattr(self, name, getattr(opt,name))
value = getattr(opt, name)
if isinstance(value, ListConfig):
value = list(value)
elif isinstance(value, DictConfig):
value = dict(value)
setattr(self, name, value)
def to_yaml(self)->str:
def to_yaml(self) -> str:
"""
Return a YAML string representing our settings. This can be used
as the contents of `invokeai.yaml` to restore settings later.
"""
cls = self.__class__
type = get_args(get_type_hints(cls)['type'])[0]
field_dict = dict({type:dict()})
for name,field in self.__fields__.items():
type = get_args(get_type_hints(cls)["type"])[0]
field_dict = dict({type: dict()})
for name, field in self.__fields__.items():
if name in cls._excluded_from_yaml():
continue
category = field.field_info.extra.get("category") or "Uncategorized"
value = getattr(self,name)
value = getattr(self, name)
if category not in field_dict[type]:
field_dict[type][category] = dict()
# keep paths as strings to make it easier to read
field_dict[type][category][name] = str(value) if isinstance(value,Path) else value
field_dict[type][category][name] = str(value) if isinstance(value, Path) else value
conf = OmegaConf.create(field_dict)
return OmegaConf.to_yaml(conf)
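to_yaml() nests fields two levels deep: a stanza named after the `type` literal, then one group per `category` extra. A minimal sketch of the shape it emits (field names and values are illustrative):

from omegaconf import OmegaConf

field_dict = {"InvokeAI": {"Web Server": {"host": "127.0.0.1", "port": 9090}}}
print(OmegaConf.to_yaml(OmegaConf.create(field_dict)))
# InvokeAI:
#   Web Server:
#     host: 127.0.0.1
#     port: 9090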
@classmethod
def add_parser_arguments(cls, parser):
if 'type' in get_type_hints(cls):
settings_stanza = get_args(get_type_hints(cls)['type'])[0]
if "type" in get_type_hints(cls):
settings_stanza = get_args(get_type_hints(cls)["type"])[0]
else:
settings_stanza = "Uncategorized"
env_prefix = cls.Config.env_prefix if hasattr(cls.Config,'env_prefix') else settings_stanza.upper()
env_prefix = cls.Config.env_prefix if hasattr(cls.Config, "env_prefix") else settings_stanza.upper()
initconf = cls.initconf.get(settings_stanza) \
if cls.initconf and settings_stanza in cls.initconf \
else OmegaConf.create()
initconf = (
cls.initconf.get(settings_stanza)
if cls.initconf and settings_stanza in cls.initconf
else OmegaConf.create()
)
# create an upcase version of the environment in
# order to achieve case-insensitive environment
# variables (the way Windows does)
upcase_environ = dict()
for key,value in os.environ.items():
for key, value in os.environ.items():
upcase_environ[key.upper()] = value
fields = cls.__fields__
@@ -238,8 +245,8 @@ class InvokeAISettings(BaseSettings):
if name not in cls._excluded():
current_default = field.default
category = field.field_info.extra.get("category","Uncategorized")
env_name = env_prefix + '_' + name
category = field.field_info.extra.get("category", "Uncategorized")
env_name = env_prefix + "_" + name
if category in initconf and name in initconf.get(category):
field.default = initconf.get(category).get(name)
if env_name.upper() in upcase_environ:
@@ -249,15 +256,15 @@ class InvokeAISettings(BaseSettings):
field.default = current_default
@classmethod
def cmd_name(self, command_field: str='type')->str:
def cmd_name(self, command_field: str = "type") -> str:
hints = get_type_hints(self)
if command_field in hints:
return get_args(hints[command_field])[0]
else:
return 'Uncategorized'
return "Uncategorized"
@classmethod
def get_parser(cls)->ArgumentParser:
def get_parser(cls) -> ArgumentParser:
parser = PagingArgumentParser(
prog=cls.cmd_name(),
description=cls.__doc__,
@@ -270,24 +277,37 @@ class InvokeAISettings(BaseSettings):
parser.add_parser(cls.cmd_name(), help=cls.__doc__)
@classmethod
def _excluded(self)->List[str]:
def _excluded(self) -> List[str]:
# internal fields that shouldn't be exposed as command line options
return ['type','initconf']
return ["type", "initconf"]
@classmethod
def _excluded_from_yaml(self)->List[str]:
def _excluded_from_yaml(self) -> List[str]:
# combination of deprecated parameters and internal ones that shouldn't be exposed as invokeai.yaml options
return ['type','initconf', 'gpu_mem_reserved', 'max_loaded_models', 'version', 'from_file', 'model', 'restore', 'root']
return [
"type",
"initconf",
"version",
"from_file",
"model",
"root",
]
class Config:
env_file_encoding = 'utf-8'
env_file_encoding = "utf-8"
arbitrary_types_allowed = True
case_sensitive = True
@classmethod
def add_field_argument(cls, command_parser, name: str, field, default_override = None):
def add_field_argument(cls, command_parser, name: str, field, default_override=None):
field_type = get_type_hints(cls).get(name)
default = default_override if default_override is not None else field.default if field.default_factory is None else field.default_factory()
default = (
default_override
if default_override is not None
else field.default
if field.default_factory is None
else field.default_factory()
)
if category := field.field_info.extra.get("category"):
if category not in cls.argparse_groups:
cls.argparse_groups[category] = command_parser.add_argument_group(category)
@@ -316,10 +336,10 @@ class InvokeAISettings(BaseSettings):
argparse_group.add_argument(
f"--{name}",
dest=name,
nargs='*',
nargs="*",
type=field.type_,
default=default,
action=argparse.BooleanOptionalAction if field.type_==bool else 'store',
action=argparse.BooleanOptionalAction if field.type_ == bool else "store",
help=field.field_info.description,
)
else:
@@ -328,31 +348,35 @@ class InvokeAISettings(BaseSettings):
dest=name,
type=field.type_,
default=default,
action=argparse.BooleanOptionalAction if field.type_==bool else 'store',
action=argparse.BooleanOptionalAction if field.type_ == bool else "store",
help=field.field_info.description,
)
def _find_root()->Path:
def _find_root() -> Path:
venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
if os.environ.get("INVOKEAI_ROOT"):
root = Path(os.environ.get("INVOKEAI_ROOT")).resolve()
elif any([(venv.parent/x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE, MODEL_CORE]]):
root = Path(os.environ["INVOKEAI_ROOT"])
elif any([(venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]]):
root = (venv.parent).resolve()
else:
root = Path("~/invokeai").expanduser().resolve()
return root
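_find_root resolves in three tiers: an explicit INVOKEAI_ROOT wins, then a directory next to the active virtualenv containing one of the marker files, and finally ~/invokeai. A sketch of the behaviour, assuming the module above is importable; the paths are hypothetical:

import os
from pathlib import Path

os.environ["INVOKEAI_ROOT"] = "/srv/invokeai"  # hypothetical location
assert _find_root() == Path("/srv/invokeai")   # env var short-circuits the search

del os.environ["INVOKEAI_ROOT"]
# With no env var and no invokeai.yaml/invokeai.init beside the venv,
# the fallback is the expanded home default:
assert _find_root() == Path("~/invokeai").expanduser().resolve()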
class InvokeAIAppConfig(InvokeAISettings):
'''
Generate images using Stable Diffusion. Use "invokeai" to launch
the command-line client (recommended for experts only), or
"invokeai-web" to launch the web server. Global options
can be changed by editing the file "INVOKEAI_ROOT/invokeai.yaml" or by
setting environment variables INVOKEAI_<setting>.
'''
"""
Generate images using Stable Diffusion. Use "invokeai" to launch
the command-line client (recommended for experts only), or
"invokeai-web" to launch the web server. Global options
can be changed by editing the file "INVOKEAI_ROOT/invokeai.yaml" or by
setting environment variables INVOKEAI_<setting>.
"""
singleton_config: ClassVar[InvokeAIAppConfig] = None
singleton_init: ClassVar[Dict] = None
#fmt: off
# fmt: off
type: Literal["InvokeAI"] = "InvokeAI"
host : str = Field(default="127.0.0.1", description="IP address to bind to", category='Web Server')
port : int = Field(default=9090, description="Port to bind to", category='Web Server')
@@ -364,22 +388,18 @@ setting environment variables INVOKEAI_<setting>.
esrgan : bool = Field(default=True, description="Enable/disable upscaling code", category='Features')
internet_available : bool = Field(default=True, description="If true, attempt to download models on the fly; otherwise only use local models", category='Features')
log_tokenization : bool = Field(default=False, description="Enable logging of parsed prompt tokens.", category='Features')
nsfw_checker : bool = Field(default=True, description="Enable/disable the NSFW checker", category='Features')
patchmatch : bool = Field(default=True, description="Enable/disable patchmatch inpaint code", category='Features')
restore : bool = Field(default=True, description="Enable/disable face restoration code (DEPRECATED)", category='DEPRECATED')
always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", category='Memory/Performance')
free_gpu_mem : bool = Field(default=False, description="If true, purge model from GPU after each generation.", category='Memory/Performance')
max_loaded_models : int = Field(default=3, gt=0, description="(DEPRECATED: use max_cache_size) Maximum number of models to keep in memory for rapid switching", category='DEPRECATED')
max_cache_size : float = Field(default=6.0, gt=0, description="Maximum memory amount used by model cache for rapid switching", category='Memory/Performance')
max_vram_cache_size : float = Field(default=2.75, ge=0, description="Amount of VRAM reserved for model storage", category='Memory/Performance')
gpu_mem_reserved : float = Field(default=2.75, ge=0, description="DEPRECATED: use max_vram_cache_size. Amount of VRAM reserved for model storage", category='DEPRECATED')
precision : Literal[tuple(['auto','float16','float32','autocast'])] = Field(default='auto',description='Floating point precision', category='Memory/Performance')
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", category='Memory/Performance')
xformers_enabled : bool = Field(default=True, description="Enable/disable memory-efficient attention", category='Memory/Performance')
tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", category='Memory/Performance')
root : Path = Field(default=_find_root(), description='InvokeAI runtime root directory', category='Paths')
root : Path = Field(default=None, description='InvokeAI runtime root directory', category='Paths')
autoimport_dir : Path = Field(default='autoimport', description='Path to a directory of models files to be imported on startup.', category='Paths')
lora_dir : Path = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', category='Paths')
embedding_dir : Path = Field(default=None, description='Path to a directory of Textual Inversion embeddings to be imported on startup.', category='Paths')
@@ -391,8 +411,7 @@ setting environment variables INVOKEAI_<setting>.
outdir : Path = Field(default='outputs', description='Default folder for output images', category='Paths')
from_file : Path = Field(default=None, description='Take command input from the indicated file (command-line client only)', category='Paths')
use_memory_db : bool = Field(default=False, description='Use in-memory database for storing image metadata', category='Paths')
model : str = Field(default='stable-diffusion-1.5', description='Initial model name', category='Models')
ignore_missing_core_models : bool = Field(default=False, description='Ignore missing models in models/core/convert', category='Features')
log_handlers : List[str] = Field(default=["console"], description='Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>"', category="Logging")
# note - would be better to read the log_format values from logging.py, but this creates circular dependencies issues
@@ -400,16 +419,19 @@ setting environment variables INVOKEAI_<setting>.
log_level : Literal[tuple(["debug","info","warning","error","critical"])] = Field(default="info", description="Emit logging messages at this level or higher", category="Logging")
version : bool = Field(default=False, description="Show InvokeAI version and exit", category="Other")
#fmt: on
# fmt: on
def parse_args(self, argv: List[str]=None, conf: DictConfig = None, clobber=False):
'''
class Config:
validate_assignment = True
def parse_args(self, argv: List[str] = None, conf: DictConfig = None, clobber=False):
"""
Update settings with contents of init file, environment, and
command-line settings.
:param conf: alternate Omegaconf dictionary object
:param argv: alternate sys.argv list
:param clobber: overwrite any initialization parameters passed during initialization
'''
"""
# Set the runtime root directory. We parse command-line switches here
# in order to pick up the --root_dir option.
super().parse_args(argv)
@@ -426,125 +448,141 @@ setting environment variables INVOKEAI_<setting>.
if self.singleton_init and not clobber:
hints = get_type_hints(self.__class__)
for k in self.singleton_init:
setattr(self,k,parse_obj_as(hints[k],self.singleton_init[k]))
setattr(self, k, parse_obj_as(hints[k], self.singleton_init[k]))
@classmethod
def get_config(cls,**kwargs)->InvokeAIAppConfig:
'''
def get_config(cls, **kwargs) -> InvokeAIAppConfig:
"""
This returns a singleton InvokeAIAppConfig configuration object.
'''
if cls.singleton_config is None \
or type(cls.singleton_config)!=cls \
or (kwargs and cls.singleton_init != kwargs):
"""
if (
cls.singleton_config is None
or type(cls.singleton_config) != cls
or (kwargs and cls.singleton_init != kwargs)
):
cls.singleton_config = cls(**kwargs)
cls.singleton_init = kwargs
return cls.singleton_config
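get_config() memoises one instance per (class, kwargs) combination, so every caller in the process sees the same settings object, which is what lets parse_args() results propagate globally. Typical use, mirroring the module docstring (a sketch; passing an empty argv keeps a host process's own flags out of the parse):

from invokeai.app.services.config import InvokeAIAppConfig

conf = InvokeAIAppConfig.get_config()
conf.parse_args([])  # layer in invokeai.yaml and INVOKEAI_* env vars
assert conf is InvokeAIAppConfig.get_config()  # same singleton on every call
print(conf.max_cache_size)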
@property
def root_path(self)->Path:
'''
def root_path(self) -> Path:
"""
Path to the runtime root directory
'''
"""
if self.root:
return Path(self.root).expanduser().absolute()
root = Path(self.root).expanduser().absolute()
else:
return self.find_root()
root = self.find_root().expanduser().absolute()
self.root = root # insulate ourselves from relative paths that may change
return root
@property
def root_dir(self)->Path:
'''
def root_dir(self) -> Path:
"""
Alias for above.
'''
"""
return self.root_path
def _resolve(self,partial_path:Path)->Path:
def _resolve(self, partial_path: Path) -> Path:
return (self.root_path / partial_path).resolve()
@property
def init_file_path(self)->Path:
'''
def init_file_path(self) -> Path:
"""
Path to invokeai.yaml
'''
"""
return self._resolve(INIT_FILE)
@property
def output_path(self)->Path:
'''
def output_path(self) -> Path:
"""
Path to defaults outputs directory.
'''
"""
return self._resolve(self.outdir)
@property
def db_path(self)->Path:
'''
def db_path(self) -> Path:
"""
Path to the invokeai.db file.
'''
"""
return self._resolve(self.db_dir) / DB_FILE
@property
def model_conf_path(self)->Path:
'''
def model_conf_path(self) -> Path:
"""
Path to models configuration file.
'''
"""
return self._resolve(self.conf_path)
@property
def legacy_conf_path(self)->Path:
'''
def legacy_conf_path(self) -> Path:
"""
Path to directory of legacy configuration files (e.g. v1-inference.yaml)
'''
"""
return self._resolve(self.legacy_conf_dir)
@property
def models_path(self)->Path:
'''
def models_path(self) -> Path:
"""
Path to the models directory
'''
"""
return self._resolve(self.models_dir)
@property
def autoconvert_path(self)->Path:
'''
def autoconvert_path(self) -> Path:
"""
Path to the directory containing models to be imported automatically at startup.
'''
"""
return self._resolve(self.autoconvert_dir) if self.autoconvert_dir else None
# the following methods support legacy calls leftover from the Globals era
@property
def full_precision(self)->bool:
def full_precision(self) -> bool:
"""Return true if precision set to float32"""
return self.precision=='float32'
return self.precision == "float32"
@property
def disable_xformers(self)->bool:
def disable_xformers(self) -> bool:
"""Return true if xformers_enabled is false"""
return not self.xformers_enabled
@property
def try_patchmatch(self)->bool:
def try_patchmatch(self) -> bool:
"""Return true if patchmatch true"""
return self.patchmatch
@property
def nsfw_checker(self) -> bool:
"""NSFW node is always active and disabled from Web UIe"""
return True
@property
def invisible_watermark(self) -> bool:
"""invisible watermark node is always active and disabled from Web UIe"""
return True
@staticmethod
def find_root()->Path:
'''
def find_root() -> Path:
"""
Choose the runtime root directory when not specified on command line or
init file.
'''
"""
return _find_root()
class PagingArgumentParser(argparse.ArgumentParser):
'''
"""
A custom ArgumentParser that uses pydoc to page its output.
It also supports reading defaults from an init file.
'''
"""
def print_help(self, file=None):
text = self.format_help()
pydoc.pager(text)
def get_invokeai_config(**kwargs)->InvokeAIAppConfig:
'''
def get_invokeai_config(**kwargs) -> InvokeAIAppConfig:
"""
Legacy function which returns InvokeAIAppConfig.get_config()
'''
"""
return InvokeAIAppConfig.get_config(**kwargs)

View File

@@ -1,60 +1,86 @@
from ..invocations.latent import LatentsToImageInvocation, TextToLatentsInvocation
from ..invocations.latent import LatentsToImageInvocation, DenoiseLatentsInvocation
from ..invocations.image import ImageNSFWBlurInvocation
from ..invocations.noise import NoiseInvocation
from ..invocations.compel import CompelInvocation
from ..invocations.params import ParamIntInvocation
from ..invocations.primitives import IntegerInvocation
from .graph import Edge, EdgeConnection, ExposedNodeInput, ExposedNodeOutput, Graph, LibraryGraph
from .item_storage import ItemStorageABC
default_text_to_image_graph_id = '539b2af5-2b4d-4d8c-8071-e54a3255fc74'
default_text_to_image_graph_id = "539b2af5-2b4d-4d8c-8071-e54a3255fc74"
def create_text_to_image() -> LibraryGraph:
return LibraryGraph(
id=default_text_to_image_graph_id,
name='t2i',
description='Converts text to an image',
name="t2i",
description="Converts text to an image",
graph=Graph(
nodes={
'width': ParamIntInvocation(id='width', a=512),
'height': ParamIntInvocation(id='height', a=512),
'seed': ParamIntInvocation(id='seed', a=-1),
'3': NoiseInvocation(id='3'),
'4': CompelInvocation(id='4'),
'5': CompelInvocation(id='5'),
'6': TextToLatentsInvocation(id='6'),
'7': LatentsToImageInvocation(id='7'),
"width": IntegerInvocation(id="width", a=512),
"height": IntegerInvocation(id="height", a=512),
"seed": IntegerInvocation(id="seed", a=-1),
"3": NoiseInvocation(id="3"),
"4": CompelInvocation(id="4"),
"5": CompelInvocation(id="5"),
"6": DenoiseLatentsInvocation(id="6"),
"7": LatentsToImageInvocation(id="7"),
"8": ImageNSFWBlurInvocation(id="8"),
},
edges=[
Edge(source=EdgeConnection(node_id='width', field='a'), destination=EdgeConnection(node_id='3', field='width')),
Edge(source=EdgeConnection(node_id='height', field='a'), destination=EdgeConnection(node_id='3', field='height')),
Edge(source=EdgeConnection(node_id='seed', field='a'), destination=EdgeConnection(node_id='3', field='seed')),
Edge(source=EdgeConnection(node_id='3', field='noise'), destination=EdgeConnection(node_id='6', field='noise')),
Edge(source=EdgeConnection(node_id='6', field='latents'), destination=EdgeConnection(node_id='7', field='latents')),
Edge(source=EdgeConnection(node_id='4', field='conditioning'), destination=EdgeConnection(node_id='6', field='positive_conditioning')),
Edge(source=EdgeConnection(node_id='5', field='conditioning'), destination=EdgeConnection(node_id='6', field='negative_conditioning')),
]
Edge(
source=EdgeConnection(node_id="width", field="a"),
destination=EdgeConnection(node_id="3", field="width"),
),
Edge(
source=EdgeConnection(node_id="height", field="a"),
destination=EdgeConnection(node_id="3", field="height"),
),
Edge(
source=EdgeConnection(node_id="seed", field="a"),
destination=EdgeConnection(node_id="3", field="seed"),
),
Edge(
source=EdgeConnection(node_id="3", field="noise"),
destination=EdgeConnection(node_id="6", field="noise"),
),
Edge(
source=EdgeConnection(node_id="6", field="latents"),
destination=EdgeConnection(node_id="7", field="latents"),
),
Edge(
source=EdgeConnection(node_id="4", field="conditioning"),
destination=EdgeConnection(node_id="6", field="positive_conditioning"),
),
Edge(
source=EdgeConnection(node_id="5", field="conditioning"),
destination=EdgeConnection(node_id="6", field="negative_conditioning"),
),
Edge(
source=EdgeConnection(node_id="7", field="image"),
destination=EdgeConnection(node_id="8", field="image"),
),
],
),
exposed_inputs=[
ExposedNodeInput(node_path='4', field='prompt', alias='positive_prompt'),
ExposedNodeInput(node_path='5', field='prompt', alias='negative_prompt'),
ExposedNodeInput(node_path='width', field='a', alias='width'),
ExposedNodeInput(node_path='height', field='a', alias='height'),
ExposedNodeInput(node_path='seed', field='a', alias='seed'),
ExposedNodeInput(node_path="4", field="prompt", alias="positive_prompt"),
ExposedNodeInput(node_path="5", field="prompt", alias="negative_prompt"),
ExposedNodeInput(node_path="width", field="a", alias="width"),
ExposedNodeInput(node_path="height", field="a", alias="height"),
ExposedNodeInput(node_path="seed", field="a", alias="seed"),
],
exposed_outputs=[
ExposedNodeOutput(node_path='7', field='image', alias='image')
])
exposed_outputs=[ExposedNodeOutput(node_path="8", field="image", alias="image")],
)
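The library graph hides raw node ids behind aliases: callers address positive_prompt/width/seed through exposed_inputs, and the output alias now points at the NSFW-blur node "8" rather than the decoder "7". A small sketch reading those mappings, assuming this module is importable:

g = create_text_to_image()

inputs = {e.alias: (e.node_path, e.field) for e in g.exposed_inputs}
assert inputs["positive_prompt"] == ("4", "prompt")
assert inputs["seed"] == ("seed", "a")

out = g.exposed_outputs[0]
assert (out.node_path, out.field, out.alias) == ("8", "image", "image")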
def create_system_graphs(graph_library: ItemStorageABC[LibraryGraph]) -> list[LibraryGraph]:
"""Creates the default system graphs, or adds new versions if the old ones don't match"""
# TODO: Uncomment this when we are ready to fix this up to prevent breaking changes
graphs: list[LibraryGraph] = list()
# text_to_image = graph_library.get(default_text_to_image_graph_id)
# # TODO: Check if the graph is the same as the default one, and if not, update it
# #if text_to_image is None:
text_to_image = create_text_to_image()

View File

@@ -3,7 +3,13 @@
from typing import Any, Optional
from invokeai.app.models.image import ProgressImage
from invokeai.app.util.misc import get_timestamp
from invokeai.app.services.model_manager_service import BaseModelType, ModelType, SubModelType, ModelInfo
from invokeai.app.services.model_manager_service import (
BaseModelType,
ModelType,
SubModelType,
ModelInfo,
)
class EventServiceBase:
session_event: str = "session_event"
@@ -29,6 +35,7 @@ class EventServiceBase:
source_node_id: str,
progress_image: Optional[ProgressImage],
step: int,
order: int,
total_steps: int,
) -> None:
"""Emitted when there is generation progress"""
@@ -40,6 +47,7 @@ class EventServiceBase:
source_node_id=source_node_id,
progress_image=progress_image.dict() if progress_image is not None else None,
step=step,
order=order,
total_steps=total_steps,
),
)
@@ -67,6 +75,7 @@ class EventServiceBase:
graph_execution_state_id: str,
node: dict,
source_node_id: str,
error_type: str,
error: str,
) -> None:
"""Emitted when an invocation has completed"""
@@ -76,13 +85,12 @@ class EventServiceBase:
graph_execution_state_id=graph_execution_state_id,
node=node,
source_node_id=source_node_id,
error_type=error_type,
error=error,
),
)
def emit_invocation_started(
self, graph_execution_state_id: str, node: dict, source_node_id: str
) -> None:
def emit_invocation_started(self, graph_execution_state_id: str, node: dict, source_node_id: str) -> None:
"""Emitted when an invocation has started"""
self.__emit_session_event(
event_name="invocation_started",
@@ -102,13 +110,13 @@ class EventServiceBase:
),
)
def emit_model_load_started (
self,
graph_execution_state_id: str,
model_name: str,
base_model: BaseModelType,
model_type: ModelType,
submodel: SubModelType,
def emit_model_load_started(
self,
graph_execution_state_id: str,
model_name: str,
base_model: BaseModelType,
model_type: ModelType,
submodel: SubModelType,
) -> None:
"""Emitted when a model is requested"""
self.__emit_session_event(
@@ -123,13 +131,13 @@ class EventServiceBase:
)
def emit_model_load_completed(
self,
graph_execution_state_id: str,
model_name: str,
base_model: BaseModelType,
model_type: ModelType,
submodel: SubModelType,
model_info: ModelInfo,
self,
graph_execution_state_id: str,
model_name: str,
base_model: BaseModelType,
model_type: ModelType,
submodel: SubModelType,
model_info: ModelInfo,
) -> None:
"""Emitted when a model is correctly loaded (returns model info)"""
self.__emit_session_event(
@@ -145,3 +153,37 @@ class EventServiceBase:
precision=str(model_info.precision),
),
)
def emit_session_retrieval_error(
self,
graph_execution_state_id: str,
error_type: str,
error: str,
) -> None:
"""Emitted when session retrieval fails"""
self.__emit_session_event(
event_name="session_retrieval_error",
payload=dict(
graph_execution_state_id=graph_execution_state_id,
error_type=error_type,
error=error,
),
)
def emit_invocation_retrieval_error(
self,
graph_execution_state_id: str,
node_id: str,
error_type: str,
error: str,
) -> None:
"""Emitted when invocation retrieval fails"""
self.__emit_session_event(
event_name="invocation_retrieval_error",
payload=dict(
graph_execution_state_id=graph_execution_state_id,
node_id=node_id,
error_type=error_type,
error=error,
),
)
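Both new events follow the same pattern as the rest of the class: an event name plus a flat payload dict handed to the private session-event emitter. For reference, the payload shape the invocation variant produces (values here are illustrative only):

payload = dict(
    graph_execution_state_id="some-session-id",
    node_id="6",                   # absent from session_retrieval_error
    error_type="KeyError",         # exception class name
    error="invocation not found",  # message / traceback text
)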

View File

@@ -3,16 +3,7 @@
import copy
import itertools
import uuid
from typing import (
Annotated,
Any,
Literal,
Optional,
Union,
get_args,
get_origin,
get_type_hints,
)
from typing import Annotated, Any, Literal, Optional, Union, get_args, get_origin, get_type_hints
import networkx as nx
from pydantic import BaseModel, root_validator, validator
@@ -22,12 +13,17 @@ from ..invocations import *
from ..invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
)
# in 3.10 this would be "from types import NoneType"
NoneType = type(None)
class EdgeConnection(BaseModel):
node_id: str = Field(description="The id of the node for this edge connection")
field: str = Field(description="The field for this connection")
@@ -61,6 +57,7 @@ def get_input_field(node: BaseInvocation, field: str) -> Any:
node_input_field = node_inputs.get(field) or None
return node_input_field
def is_union_subtype(t1, t2):
t1_args = get_args(t1)
t2_args = get_args(t2)
@@ -71,6 +68,7 @@ def is_union_subtype(t1, t2):
# t1 is a Union, check that all of its types are in t2_args
return all(arg in t2_args for arg in t1_args)
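is_union_subtype answers "does every member of t1 fit inside t2?" for typing Unions. Two checks that follow directly from the branch shown above, with this module's is_union_subtype in scope (the elided bare-type branch is not exercised):

from typing import Optional, Union

assert is_union_subtype(Union[int, str], Union[int, str, None])
assert not is_union_subtype(Union[int, float], Optional[int])  # float is missing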
def is_list_or_contains_list(t):
t_args = get_args(t)
@@ -154,15 +152,17 @@ class GraphInvocationOutput(BaseInvocationOutput):
class Config:
schema_extra = {
'required': [
'type',
'image',
"required": [
"type",
"image",
]
}
# TODO: Fill this out and move to invocations
class GraphInvocation(BaseInvocation):
"""Execute a graph"""
type: Literal["graph"] = "graph"
# TODO: figure out how to create a default here
@@ -178,27 +178,21 @@ class IterateInvocationOutput(BaseInvocationOutput):
type: Literal["iterate_output"] = "iterate_output"
item: Any = Field(description="The item being iterated over")
item: Any = OutputField(
description="The item being iterated over", title="Collection Item", ui_type=UIType.CollectionItem
)
class Config:
schema_extra = {
'required': [
'type',
'item',
]
}
# TODO: Fill this out and move to invocations
class IterateInvocation(BaseInvocation):
"""Iterates over a list of items"""
type: Literal["iterate"] = "iterate"
collection: list[Any] = Field(
description="The list of items to iterate over", default_factory=list
)
index: int = Field(
description="The index, will be provided on executed iterators", default=0
collection: list[Any] = InputField(
description="The list of items to iterate over", default_factory=list, ui_type=UIType.Collection
)
index: int = InputField(description="The index, will be provided on executed iterators", default=0, ui_hidden=True)
def invoke(self, context: InvocationContext) -> IterateInvocationOutput:
"""Produces the outputs as values"""
@@ -208,28 +202,24 @@ class IterateInvocation(BaseInvocation):
class CollectInvocationOutput(BaseInvocationOutput):
type: Literal["collect_output"] = "collect_output"
collection: list[Any] = Field(description="The collection of input items")
collection: list[Any] = OutputField(
description="The collection of input items", title="Collection", ui_type=UIType.Collection
)
class Config:
schema_extra = {
'required': [
'type',
'collection',
]
}
class CollectInvocation(BaseInvocation):
"""Collects values into a collection"""
type: Literal["collect"] = "collect"
item: Any = Field(
item: Any = InputField(
description="The item to collect (all inputs must be of the same type)",
default=None,
ui_type=UIType.CollectionItem,
title="Collection Item",
input=Input.Connection,
)
collection: list[Any] = Field(
description="The collection, will be provided on execution",
default_factory=list,
collection: list[Any] = InputField(
description="The collection, will be provided on execution", default_factory=list, ui_hidden=True
)
def invoke(self, context: InvocationContext) -> CollectInvocationOutput:
@@ -269,9 +259,7 @@ class Graph(BaseModel):
if node_path in self.nodes:
return (self, node_path)
node_id = (
node_path if "." not in node_path else node_path[: node_path.index(".")]
)
node_id = node_path if "." not in node_path else node_path[: node_path.index(".")]
if node_id not in self.nodes:
raise NodeNotFoundError(f"Node {node_path} not found in graph")
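Node paths are dotted: the segment before the first "." names a node in the current graph, and anything after it is resolved recursively inside that subgraph. The peeling expression in isolation (the path is hypothetical):

node_path = "subgraph_node.inner_node"
node_id = node_path if "." not in node_path else node_path[: node_path.index(".")]
assert node_id == "subgraph_node"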
@@ -333,9 +321,7 @@ class Graph(BaseModel):
return False
# Validate all edges reference nodes in the graph
node_ids = set(
[e.source.node_id for e in self.edges] + [e.destination.node_id for e in self.edges]
)
node_ids = set([e.source.node_id for e in self.edges] + [e.destination.node_id for e in self.edges])
if not all((self.has_node(node_id) for node_id in node_ids)):
return False
@@ -361,22 +347,14 @@ class Graph(BaseModel):
# Validate all iterators
# TODO: may need to validate all iterators in subgraphs so edge connections in parent graphs will be available
if not all(
(
self._is_iterator_connection_valid(n.id)
for n in self.nodes.values()
if isinstance(n, IterateInvocation)
)
(self._is_iterator_connection_valid(n.id) for n in self.nodes.values() if isinstance(n, IterateInvocation))
):
return False
# Validate all collectors
# TODO: may need to validate all collectors in subgraphs so edge connections in parent graphs will be available
if not all(
(
self._is_collector_connection_valid(n.id)
for n in self.nodes.values()
if isinstance(n, CollectInvocation)
)
(self._is_collector_connection_valid(n.id) for n in self.nodes.values() if isinstance(n, CollectInvocation))
):
return False
@@ -395,48 +373,51 @@ class Graph(BaseModel):
# Validate that an edge to this node+field doesn't already exist
input_edges = self._get_input_edges(edge.destination.node_id, edge.destination.field)
if len(input_edges) > 0 and not isinstance(to_node, CollectInvocation):
raise InvalidEdgeError(f'Edge to node {edge.destination.node_id} field {edge.destination.field} already exists')
raise InvalidEdgeError(
f"Edge to node {edge.destination.node_id} field {edge.destination.field} already exists"
)
# Validate that no cycles would be created
g = self.nx_graph_flat()
g.add_edge(edge.source.node_id, edge.destination.node_id)
if not nx.is_directed_acyclic_graph(g):
raise InvalidEdgeError(f'Edge creates a cycle in the graph: {edge.source.node_id} -> {edge.destination.node_id}')
raise InvalidEdgeError(
f"Edge creates a cycle in the graph: {edge.source.node_id} -> {edge.destination.node_id}"
)
# Validate that the field types are compatible
if not are_connections_compatible(
from_node, edge.source.field, to_node, edge.destination.field
):
raise InvalidEdgeError(f'Fields are incompatible: cannot connect {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}')
if not are_connections_compatible(from_node, edge.source.field, to_node, edge.destination.field):
raise InvalidEdgeError(
f"Fields are incompatible: cannot connect {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}"
)
# Validate if iterator output type matches iterator input type (if this edge results in both being set)
if isinstance(to_node, IterateInvocation) and edge.destination.field == "collection":
if not self._is_iterator_connection_valid(
edge.destination.node_id, new_input=edge.source
):
raise InvalidEdgeError(f'Iterator input type does not match iterator output type: {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}')
if not self._is_iterator_connection_valid(edge.destination.node_id, new_input=edge.source):
raise InvalidEdgeError(
f"Iterator input type does not match iterator output type: {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}"
)
# Validate if iterator input type matches output type (if this edge results in both being set)
if isinstance(from_node, IterateInvocation) and edge.source.field == "item":
if not self._is_iterator_connection_valid(
edge.source.node_id, new_output=edge.destination
):
raise InvalidEdgeError(f'Iterator output type does not match iterator input type:, {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}')
if not self._is_iterator_connection_valid(edge.source.node_id, new_output=edge.destination):
raise InvalidEdgeError(
f"Iterator output type does not match iterator input type:, {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}"
)
# Validate if collector input type matches output type (if this edge results in both being set)
if isinstance(to_node, CollectInvocation) and edge.destination.field == "item":
if not self._is_collector_connection_valid(
edge.destination.node_id, new_input=edge.source
):
raise InvalidEdgeError(f'Collector output type does not match collector input type: {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}')
if not self._is_collector_connection_valid(edge.destination.node_id, new_input=edge.source):
raise InvalidEdgeError(
f"Collector output type does not match collector input type: {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}"
)
# Validate if collector output type matches input type (if this edge results in both being set)
if isinstance(from_node, CollectInvocation) and edge.source.field == "collection":
if not self._is_collector_connection_valid(
edge.source.node_id, new_output=edge.destination
):
raise InvalidEdgeError(f'Collector input type does not match collector output type: {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}')
if not self._is_collector_connection_valid(edge.source.node_id, new_output=edge.destination):
raise InvalidEdgeError(
f"Collector input type does not match collector output type: {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}"
)
def has_node(self, node_path: str) -> bool:
"""Determines whether or not a node exists in the graph."""
@@ -465,17 +446,13 @@ class Graph(BaseModel):
# Ensure the node type matches the new node
if type(node) != type(new_node):
raise TypeError(
f"Node {node_path} is type {type(node)} but new node is type {type(new_node)}"
)
raise TypeError(f"Node {node_path} is type {type(node)} but new node is type {type(new_node)}")
# Ensure the new id is either the same or is not in the graph
prefix = None if "." not in node_path else node_path[: node_path.rindex(".")]
new_path = self._get_node_path(new_node.id, prefix=prefix)
if new_node.id != node.id and self.has_node(new_path):
raise NodeAlreadyInGraphError(
"Node with id {new_node.id} already exists in graph"
)
raise NodeAlreadyInGraphError("Node with id {new_node.id} already exists in graph")
# Set the new node in the graph
graph.nodes[new_node.id] = new_node
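update_node leans on the dotted node-path convention: "a.b.c" names node c inside subgraph b inside graph a, and the prefix is everything before the last dot. A sketch with hypothetical helpers mirroring _get_node_path (not the real methods):

```
# Sketch of the dotted node-path convention used by update_node above.
from typing import Optional

def get_prefix(node_path: str) -> Optional[str]:
    return None if "." not in node_path else node_path[: node_path.rindex(".")]

def get_node_path(node_id: str, prefix: Optional[str] = None) -> str:
    return node_id if prefix is None else f"{prefix}.{node_id}"

assert get_prefix("graph1.sub.node3") == "graph1.sub"
assert get_node_path("node4", get_prefix("graph1.sub.node3")) == "graph1.sub.node4"
assert get_prefix("root") is None
```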
@@ -497,9 +474,7 @@ class Graph(BaseModel):
graph.add_edge(
Edge(
source=edge.source,
destination=EdgeConnection(
node_id=new_graph_node_path, field=edge.destination.field
)
destination=EdgeConnection(node_id=new_graph_node_path, field=edge.destination.field),
)
)
@@ -512,16 +487,12 @@ class Graph(BaseModel):
)
graph.add_edge(
Edge(
source=EdgeConnection(
node_id=new_graph_node_path, field=edge.source.field
),
destination=edge.destination
source=EdgeConnection(node_id=new_graph_node_path, field=edge.source.field),
destination=edge.destination,
)
)
def _get_input_edges(
self, node_path: str, field: Optional[str] = None
) -> list[Edge]:
def _get_input_edges(self, node_path: str, field: Optional[str] = None) -> list[Edge]:
"""Gets all input edges for a node"""
edges = self._get_input_edges_and_graphs(node_path)
@@ -538,7 +509,7 @@ class Graph(BaseModel):
destination=EdgeConnection(
node_id=self._get_node_path(e.destination.node_id, prefix=prefix),
field=e.destination.field,
)
),
)
for _, prefix, e in filtered_edges
]
@@ -550,32 +521,20 @@ class Graph(BaseModel):
edges = list()
# Return any input edges that appear in this graph
edges.extend(
[(self, prefix, e) for e in self.edges if e.destination.node_id == node_path]
)
edges.extend([(self, prefix, e) for e in self.edges if e.destination.node_id == node_path])
node_id = (
node_path if "." not in node_path else node_path[: node_path.index(".")]
)
node_id = node_path if "." not in node_path else node_path[: node_path.index(".")]
node = self.nodes[node_id]
if isinstance(node, GraphInvocation):
graph = node.graph
graph_path = (
node.id
if prefix is None or prefix == ""
else self._get_node_path(node.id, prefix=prefix)
)
graph_edges = graph._get_input_edges_and_graphs(
node_path[(len(node_id) + 1) :], prefix=graph_path
)
graph_path = node.id if prefix is None or prefix == "" else self._get_node_path(node.id, prefix=prefix)
graph_edges = graph._get_input_edges_and_graphs(node_path[(len(node_id) + 1) :], prefix=graph_path)
edges.extend(graph_edges)
return edges
def _get_output_edges(
self, node_path: str, field: str
) -> list[Edge]:
def _get_output_edges(self, node_path: str, field: str) -> list[Edge]:
"""Gets all output edges for a node"""
edges = self._get_output_edges_and_graphs(node_path)
@@ -592,7 +551,7 @@ class Graph(BaseModel):
destination=EdgeConnection(
node_id=self._get_node_path(e.destination.node_id, prefix=prefix),
field=e.destination.field,
)
),
)
for _, prefix, e in filtered_edges
]
@@ -604,25 +563,15 @@ class Graph(BaseModel):
edges = list()
# Return any input edges that appear in this graph
edges.extend(
[(self, prefix, e) for e in self.edges if e.source.node_id == node_path]
)
edges.extend([(self, prefix, e) for e in self.edges if e.source.node_id == node_path])
node_id = (
node_path if "." not in node_path else node_path[: node_path.index(".")]
)
node_id = node_path if "." not in node_path else node_path[: node_path.index(".")]
node = self.nodes[node_id]
if isinstance(node, GraphInvocation):
graph = node.graph
graph_path = (
node.id
if prefix is None or prefix == ""
else self._get_node_path(node.id, prefix=prefix)
)
graph_edges = graph._get_output_edges_and_graphs(
node_path[(len(node_id) + 1) :], prefix=graph_path
)
graph_path = node.id if prefix is None or prefix == "" else self._get_node_path(node.id, prefix=prefix)
graph_edges = graph._get_output_edges_and_graphs(node_path[(len(node_id) + 1) :], prefix=graph_path)
edges.extend(graph_edges)
return edges
@@ -646,12 +595,8 @@ class Graph(BaseModel):
return False
# Get input and output fields (the fields linked to the iterator's input/output)
input_field = get_output_field(
self.get_node(inputs[0].node_id), inputs[0].field
)
output_fields = list(
[get_input_field(self.get_node(e.node_id), e.field) for e in outputs]
)
input_field = get_output_field(self.get_node(inputs[0].node_id), inputs[0].field)
output_fields = list([get_input_field(self.get_node(e.node_id), e.field) for e in outputs])
# Input type must be a list
if get_origin(input_field) != list:
@@ -659,12 +604,7 @@
# Validate that all outputs match the input type
input_field_item_type = get_args(input_field)[0]
if not all(
(
are_connection_types_compatible(input_field_item_type, f)
for f in output_fields
)
):
if not all((are_connection_types_compatible(input_field_item_type, f) for f in output_fields)):
return False
return True
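The iterator check compressed above requires the incoming field to be a list[T] and every downstream consumer to accept T. A sketch of that rule, with a plain issubclass test standing in for are_connection_types_compatible:

```
# Sketch of the iterator type rule: input must be list[T] and every
# downstream field must accept T.
from typing import get_args, get_origin

def iterator_connection_valid(input_field, output_fields) -> bool:
    if get_origin(input_field) is not list:
        return False  # iterators only accept list-typed inputs
    item_type = get_args(input_field)[0]
    return all(issubclass(item_type, f) for f in output_fields)

assert iterator_connection_valid(list[int], [int, object])
assert not iterator_connection_valid(int, [int])
```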
@@ -684,35 +624,21 @@
outputs.append(new_output)
# Get input and output fields (the fields linked to the iterator's input/output)
input_fields = list(
[get_output_field(self.get_node(e.node_id), e.field) for e in inputs]
)
output_fields = list(
[get_input_field(self.get_node(e.node_id), e.field) for e in outputs]
)
input_fields = list([get_output_field(self.get_node(e.node_id), e.field) for e in inputs])
output_fields = list([get_input_field(self.get_node(e.node_id), e.field) for e in outputs])
# Validate that all inputs are derived from or match a single type
input_field_types = set(
[
t
for input_field in input_fields
for t in (
[input_field]
if get_origin(input_field) == None
else get_args(input_field)
)
for t in ([input_field] if get_origin(input_field) == None else get_args(input_field))
if t != NoneType
]
) # Get unique types
type_tree = nx.DiGraph()
type_tree.add_nodes_from(input_field_types)
type_tree.add_edges_from(
[
e
for e in itertools.permutations(input_field_types, 2)
if issubclass(e[1], e[0])
]
)
type_tree.add_edges_from([e for e in itertools.permutations(input_field_types, 2) if issubclass(e[1], e[0])])
type_degrees = type_tree.in_degree(type_tree.nodes)
if sum((t[1] == 0 for t in type_degrees)) != 1: # type: ignore
return False # There is more than one root type
@@ -729,9 +655,7 @@
return False
# Verify that all outputs match the input type (are a base class or the same class)
if not all(
(issubclass(input_root_type, get_args(f)[0]) for f in output_fields)
):
if not all((issubclass(input_root_type, get_args(f)[0]) for f in output_fields)):
return False
return True
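The collector validation builds a small type tree (edges run from base class to subclass) and demands exactly one in-degree-zero node, i.e. a single root type that every input derives from. A runnable sketch of that idea:

```
# Sketch of the collector root-type search above.
import itertools
from typing import Optional
import networkx as nx

def single_root_type(input_types: set) -> Optional[type]:
    tree = nx.DiGraph()
    tree.add_nodes_from(input_types)
    tree.add_edges_from([e for e in itertools.permutations(input_types, 2) if issubclass(e[1], e[0])])
    roots = [t for t, degree in tree.in_degree(tree.nodes) if degree == 0]
    return roots[0] if len(roots) == 1 else None

class Base: ...
class Child(Base): ...

assert single_root_type({Base, Child}) is Base   # one root: Base
assert single_root_type({int, str}) is None      # two unrelated roots
```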
@@ -751,9 +675,7 @@
g.add_edges_from(set([(e.source.node_id, e.destination.node_id) for e in self.edges]))
return g
def nx_graph_flat(
self, nx_graph: Optional[nx.DiGraph] = None, prefix: Optional[str] = None
) -> nx.DiGraph:
def nx_graph_flat(self, nx_graph: Optional[nx.DiGraph] = None, prefix: Optional[str] = None) -> nx.DiGraph:
"""Returns a flattened NetworkX DiGraph, including all subgraphs (but not with iterations expanded)"""
g = nx_graph or nx.DiGraph()
@@ -762,26 +684,18 @@
[
self._get_node_path(n.id, prefix)
for n in self.nodes.values()
if not isinstance(n, GraphInvocation)
and not isinstance(n, IterateInvocation)
if not isinstance(n, GraphInvocation) and not isinstance(n, IterateInvocation)
]
)
# Expand graph nodes
for sgn in (
gn for gn in self.nodes.values() if isinstance(gn, GraphInvocation)
):
for sgn in (gn for gn in self.nodes.values() if isinstance(gn, GraphInvocation)):
g = sgn.graph.nx_graph_flat(g, self._get_node_path(sgn.id, prefix))
# TODO: figure out if iteration nodes need to be expanded
unique_edges = set([(e.source.node_id, e.destination.node_id) for e in self.edges])
g.add_edges_from(
[
(self._get_node_path(e[0], prefix), self._get_node_path(e[1], prefix))
for e in unique_edges
]
)
g.add_edges_from([(self._get_node_path(e[0], prefix), self._get_node_path(e[1], prefix)) for e in unique_edges])
return g
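nx_graph_flat re-addresses subgraph nodes as "parent.child" paths so nested graphs collapse into one flat DiGraph. A sketch under the assumption that a plain dict stands in for a GraphInvocation's subgraph:

```
# Sketch of the flattening above; node_path mirrors _get_node_path.
from typing import Optional
import networkx as nx

def node_path(node_id: str, prefix: Optional[str]) -> str:
    return node_id if prefix is None else f"{prefix}.{node_id}"

def flatten(nodes: dict, edges: list, g: Optional[nx.DiGraph] = None, prefix: Optional[str] = None) -> nx.DiGraph:
    g = g or nx.DiGraph()
    for nid, n in nodes.items():
        if isinstance(n, dict):  # nested subgraph, stood in by a plain dict
            flatten(n["nodes"], n["edges"], g, node_path(nid, prefix))
        else:
            g.add_node(node_path(nid, prefix))
    g.add_edges_from((node_path(a, prefix), node_path(b, prefix)) for a, b in set(edges))
    return g

inner = {"nodes": {"x": 1, "y": 2}, "edges": [("x", "y")]}
g = flatten({"a": 1, "sub": inner}, [("a", "sub")])
assert ("sub.x", "sub.y") in g.edges
```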
@@ -800,23 +714,19 @@ class GraphExecutionState(BaseModel):
)
# Nodes that have been executed
executed: set[str] = Field(
description="The set of node ids that have been executed", default_factory=set
)
executed: set[str] = Field(description="The set of node ids that have been executed", default_factory=set)
executed_history: list[str] = Field(
description="The list of node ids that have been executed, in order of execution",
default_factory=list,
)
# The results of executed nodes
results: dict[
str, Annotated[InvocationOutputsUnion, Field(discriminator="type")]
] = Field(description="The results of node executions", default_factory=dict)
results: dict[str, Annotated[InvocationOutputsUnion, Field(discriminator="type")]] = Field(
description="The results of node executions", default_factory=dict
)
# Errors raised when executing nodes
errors: dict[str, str] = Field(
description="Errors raised when executing nodes", default_factory=dict
)
errors: dict[str, str] = Field(description="Errors raised when executing nodes", default_factory=dict)
# Map of prepared/executed nodes to their original nodes
prepared_source_mapping: dict[str, str] = Field(
@@ -832,16 +742,16 @@
class Config:
schema_extra = {
'required': [
'id',
'graph',
'execution_graph',
'executed',
'executed_history',
'results',
'errors',
'prepared_source_mapping',
'source_prepared_mapping',
"required": [
"id",
"graph",
"execution_graph",
"executed",
"executed_history",
"results",
"errors",
"prepared_source_mapping",
"source_prepared_mapping",
]
}
@@ -899,9 +809,7 @@
"""Returns true if the graph has any errors"""
return len(self.errors) > 0
def _create_execution_node(
self, node_path: str, iteration_node_map: list[tuple[str, str]]
) -> list[str]:
def _create_execution_node(self, node_path: str, iteration_node_map: list[tuple[str, str]]) -> list[str]:
"""Prepares an iteration node and connects all edges, returning the new node id"""
node = self.graph.get_node(node_path)
@@ -911,20 +819,12 @@
# If this is an iterator node, we must create a copy for each iteration
if isinstance(node, IterateInvocation):
# Get input collection edge (should error if there are no inputs)
input_collection_edge = next(
iter(self.graph._get_input_edges(node_path, "collection"))
)
input_collection_edge = next(iter(self.graph._get_input_edges(node_path, "collection")))
input_collection_prepared_node_id = next(
n[1]
for n in iteration_node_map
if n[0] == input_collection_edge.source.node_id
)
input_collection_prepared_node_output = self.results[
input_collection_prepared_node_id
]
input_collection = getattr(
input_collection_prepared_node_output, input_collection_edge.source.field
n[1] for n in iteration_node_map if n[0] == input_collection_edge.source.node_id
)
input_collection_prepared_node_output = self.results[input_collection_prepared_node_id]
input_collection = getattr(input_collection_prepared_node_output, input_collection_edge.source.field)
self_iteration_count = len(input_collection)
new_nodes = list()
@@ -939,9 +839,7 @@
# For collect nodes, this may contain multiple inputs to the same field
new_edges = list()
for edge in input_edges:
for input_node_id in (
n[1] for n in iteration_node_map if n[0] == edge.source.node_id
):
for input_node_id in (n[1] for n in iteration_node_map if n[0] == edge.source.node_id):
new_edge = Edge(
source=EdgeConnection(node_id=input_node_id, field=edge.source.field),
destination=EdgeConnection(node_id="", field=edge.destination.field),
@@ -982,11 +880,7 @@
def _iterator_graph(self) -> nx.DiGraph:
"""Gets a DiGraph with edges to collectors removed so an ancestor search produces all active iterators for any node"""
g = self.graph.nx_graph_flat()
collectors = (
n
for n in self.graph.nodes
if isinstance(self.graph.get_node(n), CollectInvocation)
)
collectors = (n for n in self.graph.nodes if isinstance(self.graph.get_node(n), CollectInvocation))
for c in collectors:
g.remove_edges_from(list(g.in_edges(c)))
return g
@@ -994,11 +888,7 @@
def _get_node_iterators(self, node_id: str) -> list[str]:
"""Gets iterators for a node"""
g = self._iterator_graph()
iterators = [
n
for n in nx.ancestors(g, node_id)
if isinstance(self.graph.get_node(n), IterateInvocation)
]
iterators = [n for n in nx.ancestors(g, node_id) if isinstance(self.graph.get_node(n), IterateInvocation)]
return iterators
def _prepare(self) -> Optional[str]:
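_iterator_graph cuts every edge into a collector so an ancestor search stops at the fan-in boundary; the iterators governing a node are then just its remaining IterateInvocation ancestors. A small illustration, with string kind labels standing in for the isinstance checks:

```
# Sketch of the iterator lookup above.
import networkx as nx

def active_iterators(g: nx.DiGraph, kinds: dict, node: str) -> list:
    pruned = g.copy()
    for n, kind in kinds.items():
        if kind == "collect":
            pruned.remove_edges_from(list(pruned.in_edges(n)))
    return [n for n in nx.ancestors(pruned, node) if kinds[n] == "iterate"]

g = nx.DiGraph([("it", "work"), ("work", "col"), ("col", "after")])
kinds = {"it": "iterate", "work": "node", "col": "collect", "after": "node"}
assert active_iterators(g, kinds, "work") == ["it"]  # inside the loop
assert active_iterators(g, kinds, "after") == []     # past the collector
```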
@@ -1045,29 +935,18 @@
if isinstance(next_node, CollectInvocation):
# Collapse all iterator input mappings and create a single execution node for the collect invocation
all_iteration_mappings = list(
itertools.chain(
*(
((s, p) for p in self.source_prepared_mapping[s])
for s in next_node_parents
)
)
itertools.chain(*(((s, p) for p in self.source_prepared_mapping[s]) for s in next_node_parents))
)
# all_iteration_mappings = list(set(itertools.chain(*prepared_parent_mappings)))
create_results = self._create_execution_node(
next_node_id, all_iteration_mappings
)
create_results = self._create_execution_node(next_node_id, all_iteration_mappings)
if create_results is not None:
new_node_ids.extend(create_results)
else: # Iterators or normal nodes
# Get all iterator combinations for this node
# Will produce a list of lists of prepared iterator nodes, from which results can be iterated
iterator_nodes = self._get_node_iterators(next_node_id)
iterator_nodes_prepared = [
list(self.source_prepared_mapping[n]) for n in iterator_nodes
]
iterator_node_prepared_combinations = list(
itertools.product(*iterator_nodes_prepared)
)
iterator_nodes_prepared = [list(self.source_prepared_mapping[n]) for n in iterator_nodes]
iterator_node_prepared_combinations = list(itertools.product(*iterator_nodes_prepared))
# Select the correct prepared parents for each iteration
# For every iterator, the parent must either not be a child of that iterator, or must match the prepared iteration for that iterator
@@ -1096,31 +975,16 @@
return next(iter(prepared_nodes))
# Check if the requested node is an iterator
prepared_iterator = next(
(n for n in prepared_nodes if n in prepared_iterator_nodes), None
)
prepared_iterator = next((n for n in prepared_nodes if n in prepared_iterator_nodes), None)
if prepared_iterator is not None:
return prepared_iterator
# Filter to only iterator nodes that are a parent of the specified node, in tuple format (prepared, source)
iterator_source_node_mapping = [
(n, self.prepared_source_mapping[n]) for n in prepared_iterator_nodes
]
parent_iterators = [
itn
for itn in iterator_source_node_mapping
if nx.has_path(graph, itn[1], source_node_path)
]
iterator_source_node_mapping = [(n, self.prepared_source_mapping[n]) for n in prepared_iterator_nodes]
parent_iterators = [itn for itn in iterator_source_node_mapping if nx.has_path(graph, itn[1], source_node_path)]
return next(
(
n
for n in prepared_nodes
if all(
nx.has_path(execution_graph, pit[0], n)
for pit in parent_iterators
)
),
(n for n in prepared_nodes if all(nx.has_path(execution_graph, pit[0], n) for pit in parent_iterators)),
None,
)
@@ -1130,13 +994,13 @@
# Depth-first search with pre-order traversal is a depth-first topological sort
sorted_nodes = nx.dfs_preorder_nodes(g)
next_node = next(
(
n
for n in sorted_nodes
if n not in self.executed # the node must not already be executed...
and all((e[0] in self.executed for e in g.in_edges(n))) # ...and all its inputs must be executed
if n not in self.executed # the node must not already be executed...
and all((e[0] in self.executed for e in g.in_edges(n))) # ...and all its inputs must be executed
),
None,
)
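The scheduling rule reformatted above picks, in depth-first pre-order, the first node that has not executed and whose inputs have all executed. A compact sketch:

```
# Sketch of the next-node selection: dfs_preorder_nodes gives a
# depth-first topological order; take the first ready, unexecuted node.
from typing import Optional
import networkx as nx

def next_ready(g: nx.DiGraph, executed: set) -> Optional[str]:
    return next(
        (
            n
            for n in nx.dfs_preorder_nodes(g)
            if n not in executed and all(src in executed for src, _ in g.in_edges(n))
        ),
        None,
    )

g = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
assert next_ready(g, set()) == "a"
assert next_ready(g, {"a", "b"}) == "c"  # "d" still waits on "c"
assert next_ready(g, {"a", "b", "c", "d"}) is None
```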
@@ -1221,15 +1085,18 @@ class ExposedNodeOutput(BaseModel):
field: str = Field(description="The field name of the output")
alias: str = Field(description="The alias of the output")
class LibraryGraph(BaseModel):
id: str = Field(description="The unique identifier for this library graph", default_factory=uuid.uuid4)
graph: Graph = Field(description="The graph")
name: str = Field(description="The name of the graph")
description: str = Field(description="The description of the graph")
exposed_inputs: list[ExposedNodeInput] = Field(description="The inputs exposed by this graph", default_factory=list)
exposed_outputs: list[ExposedNodeOutput] = Field(description="The outputs exposed by this graph", default_factory=list)
exposed_outputs: list[ExposedNodeOutput] = Field(
description="The outputs exposed by this graph", default_factory=list
)
@validator('exposed_inputs', 'exposed_outputs')
@validator("exposed_inputs", "exposed_outputs")
def validate_exposed_aliases(cls, v):
if len(v) != len(set(i.alias for i in v)):
raise ValueError("Duplicate exposed alias")
@@ -1237,23 +1104,27 @@ class LibraryGraph(BaseModel):
@root_validator
def validate_exposed_nodes(cls, values):
graph = values['graph']
graph = values["graph"]
# Validate exposed inputs
for exposed_input in values['exposed_inputs']:
for exposed_input in values["exposed_inputs"]:
if not graph.has_node(exposed_input.node_path):
raise ValueError(f"Exposed input node {exposed_input.node_path} does not exist")
node = graph.get_node(exposed_input.node_path)
if get_input_field(node, exposed_input.field) is None:
raise ValueError(f"Exposed input field {exposed_input.field} does not exist on node {exposed_input.node_path}")
raise ValueError(
f"Exposed input field {exposed_input.field} does not exist on node {exposed_input.node_path}"
)
# Validate exposed outputs
for exposed_output in values['exposed_outputs']:
for exposed_output in values["exposed_outputs"]:
if not graph.has_node(exposed_output.node_path):
raise ValueError(f"Exposed output node {exposed_output.node_path} does not exist")
node = graph.get_node(exposed_output.node_path)
if get_output_field(node, exposed_output.field) is None:
raise ValueError(f"Exposed output field {exposed_output.field} does not exist on node {exposed_output.node_path}")
raise ValueError(
f"Exposed output field {exposed_output.field} does not exist on node {exposed_output.node_path}"
)
return values
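The alias validator rejects a LibraryGraph whose exposed inputs or outputs share an alias. A minimal reproduction using the same pydantic v1 @validator API this module imports (class names here are hypothetical):

```
# Minimal reproduction of the duplicate-alias rule, pydantic v1 style.
from pydantic import BaseModel, validator

class Exposed(BaseModel):
    alias: str

class MiniLibraryGraph(BaseModel):
    exposed_inputs: list = []

    @validator("exposed_inputs")
    def validate_exposed_aliases(cls, v):
        if len(v) != len(set(i.alias for i in v)):
            raise ValueError("Duplicate exposed alias")
        return v

MiniLibraryGraph(exposed_inputs=[Exposed(alias="a"), Exposed(alias="b")])  # accepted
try:
    MiniLibraryGraph(exposed_inputs=[Exposed(alias="a"), Exposed(alias="a")])
except ValueError:
    pass  # duplicate alias rejected (pydantic v1 errors subclass ValueError)
```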


@@ -85,9 +85,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
self.__cache_ids = Queue()
self.__max_cache_size = 10 # TODO: get this from config
self.__output_folder: Path = (
output_folder if isinstance(output_folder, Path) else Path(output_folder)
)
self.__output_folder: Path = output_folder if isinstance(output_folder, Path) else Path(output_folder)
self.__thumbnails_folder = self.__output_folder / "thumbnails"
# Validate required output folders at launch
@@ -120,7 +118,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
image_path = self.get_path(image_name)
pnginfo = PngImagePlugin.PngInfo()
if metadata is not None:
pnginfo.add_text("invokeai_metadata", json.dumps(metadata))
if graph is not None:
@@ -183,9 +181,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
def __set_cache(self, image_name: Path, image: PILImageType):
if not image_name in self.__cache:
self.__cache[image_name] = image
self.__cache_ids.put(
image_name
) # TODO: this should refresh position for LRU cache
self.__cache_ids.put(image_name) # TODO: this should refresh position for LRU cache
if len(self.__cache) > self.__max_cache_size:
cache_id = self.__cache_ids.get()
if cache_id in self.__cache:
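The TODO above is real: the FIFO Queue never refreshes an entry's position on re-access, so frequently used images can still be evicted. A sketch of the LRU behaviour the comment asks for, built on OrderedDict; an illustration, not the class's current implementation:

```
# LRU sketch: move_to_end refreshes recency; popitem(last=False) evicts
# the least recently used entry.
from collections import OrderedDict
from typing import Any, Optional

class LRUCache:
    def __init__(self, max_size: int = 10):
        self._items: "OrderedDict[str, Any]" = OrderedDict()
        self._max_size = max_size

    def set(self, key: str, value: Any) -> None:
        self._items[key] = value
        self._items.move_to_end(key)         # refresh position on write
        if len(self._items) > self._max_size:
            self._items.popitem(last=False)  # evict least recently used

    def get(self, key: str) -> Optional[Any]:
        if key in self._items:
            self._items.move_to_end(key)     # refresh position on read
        return self._items.get(key)

cache = LRUCache(max_size=2)
cache.set("a", 1); cache.set("b", 2)
cache.get("a")     # "a" becomes most recently used
cache.set("c", 3)  # evicts "b", not "a"
assert cache.get("b") is None and cache.get("a") == 1
```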


@@ -67,6 +67,7 @@ IMAGE_DTO_COLS = ", ".join(
"created_at",
"updated_at",
"deleted_at",
"starred",
],
)
)
@@ -139,6 +140,7 @@ class ImageRecordStorageBase(ABC):
node_id: Optional[str],
metadata: Optional[dict],
is_intermediate: bool = False,
starred: bool = False,
) -> datetime:
"""Saves an image record."""
pass
@@ -200,6 +202,16 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
"""
)
self._cursor.execute("PRAGMA table_info(images)")
columns = [column[1] for column in self._cursor.fetchall()]
if "starred" not in columns:
self._cursor.execute(
"""--sql
ALTER TABLE images ADD COLUMN starred BOOLEAN DEFAULT FALSE;
"""
)
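This is the additive-migration pattern: inspect the live schema with PRAGMA table_info and only ALTER TABLE when the column is missing, so the block is safe to run on every startup. A self-contained sketch, with ensure_column as a hypothetical helper rather than anything in the codebase:

```
# Additive, idempotent SQLite migration sketch.
import sqlite3

def ensure_column(conn: sqlite3.Connection, table: str, column: str, ddl: str) -> None:
    cur = conn.execute(f"PRAGMA table_info({table})")
    existing = [row[1] for row in cur.fetchall()]  # row[1] is the column name
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (image_name TEXT PRIMARY KEY)")
ensure_column(conn, "images", "starred", "BOOLEAN DEFAULT FALSE")
ensure_column(conn, "images", "starred", "BOOLEAN DEFAULT FALSE")  # idempotent
```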
# Create the `images` table indices.
self._cursor.execute(
"""--sql
@@ -222,6 +234,12 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_images_starred ON images(starred);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
@@ -321,6 +339,17 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
(changes.is_intermediate, image_name),
)
# Change the image's `starred` state
if changes.starred is not None:
self._cursor.execute(
f"""--sql
UPDATE images
SET starred = ?
WHERE image_name = ?;
""",
(changes.starred, image_name),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
@@ -397,7 +426,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
query_params.append(board_id)
query_pagination = """--sql
ORDER BY images.created_at DESC LIMIT ? OFFSET ?
ORDER BY images.starred DESC, images.created_at DESC LIMIT ? OFFSET ?
"""
# Final images query with pagination
@@ -426,9 +455,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
finally:
self._lock.release()
return OffsetPaginatedResults(
items=images, offset=offset, limit=limit, total=count
)
return OffsetPaginatedResults(items=images, offset=offset, limit=limit, total=count)
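Ordering by starred DESC ahead of created_at DESC floats starred images to the top of every page while leaving offset pagination untouched underneath. A quick demonstration against an in-memory database:

```
# Starred rows sort first; within each group, newest first.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (image_name TEXT, starred BOOLEAN, created_at TEXT)")
conn.executemany(
    "INSERT INTO images VALUES (?, ?, ?)",
    [("old-star", True, "2023-01-01"), ("new", False, "2023-03-01"), ("mid", False, "2023-02-01")],
)
rows = conn.execute(
    "SELECT image_name FROM images ORDER BY starred DESC, created_at DESC LIMIT ? OFFSET ?",
    (10, 0),
).fetchall()
assert [r[0] for r in rows] == ["old-star", "new", "mid"]
```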
def delete(self, image_name: str) -> None:
try:
@@ -466,7 +493,6 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
finally:
self._lock.release()
def delete_intermediates(self) -> list[str]:
try:
self._lock.acquire()
@@ -503,11 +529,10 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
node_id: Optional[str],
metadata: Optional[dict],
is_intermediate: bool = False,
starred: bool = False,
) -> datetime:
try:
metadata_json = (
None if metadata is None else json.dumps(metadata)
)
metadata_json = None if metadata is None else json.dumps(metadata)
self._lock.acquire()
self._cursor.execute(
"""--sql
@@ -520,9 +545,10 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
node_id,
session_id,
metadata,
is_intermediate
is_intermediate,
starred
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?);
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);
""",
(
image_name,
@@ -534,6 +560,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
session_id,
metadata_json,
is_intermediate,
starred,
),
)
self._conn.commit()


@@ -216,16 +216,9 @@ class ImageService(ImageServiceABC):
metadata=metadata,
session_id=session_id,
)
if board_id is not None:
self._services.board_image_records.add_image_to_board(
board_id=board_id, image_name=image_name
)
self._services.image_files.save(
image_name=image_name, image=image, metadata=metadata, graph=graph
)
self._services.board_image_records.add_image_to_board(board_id=board_id, image_name=image_name)
self._services.image_files.save(image_name=image_name, image=image, metadata=metadata, graph=graph)
image_dto = self.get_dto(image_name)
return image_dto
@@ -236,7 +229,7 @@ class ImageService(ImageServiceABC):
self._services.logger.error("Failed to save image file")
raise
except Exception as e:
self._services.logger.error("Problem saving image record and file")
self._services.logger.error(f"Problem saving image record and file: {str(e)}")
raise e
def update(
@@ -296,13 +289,12 @@ class ImageService(ImageServiceABC):
def get_metadata(self, image_name: str) -> Optional[ImageMetadata]:
try:
image_record = self._services.image_records.get(image_name)
metadata = self._services.image_records.get_metadata(image_name)
if not image_record.session_id:
return ImageMetadata()
return ImageMetadata(metadata=metadata)
session_raw = self._services.graph_execution_manager.get_raw(
image_record.session_id
)
session_raw = self._services.graph_execution_manager.get_raw(image_record.session_id)
graph = None
if session_raw:
@@ -312,7 +304,6 @@ class ImageService(ImageServiceABC):
self._services.logger.warn(f"Failed to parse session graph: {e}")
graph = None
metadata = self._services.image_records.get_metadata(image_name)
return ImageMetadata(graph=graph, metadata=metadata)
except ImageRecordNotFoundException:
self._services.logger.error("Image record not found")
@@ -367,9 +358,7 @@ class ImageService(ImageServiceABC):
r,
self._services.urls.get_image_url(r.image_name),
self._services.urls.get_image_url(r.image_name, True),
self._services.board_image_records.get_board_for_image(
r.image_name
),
self._services.board_image_records.get_board_for_image(r.image_name),
),
results.items,
)
@@ -401,11 +390,7 @@ class ImageService(ImageServiceABC):
def delete_images_on_board(self, board_id: str):
try:
image_names = (
self._services.board_image_records.get_all_board_image_names_for_board(
board_id
)
)
image_names = self._services.board_image_records.get_all_board_image_names_for_board(board_id)
for image_name in image_names:
self._services.image_files.delete(image_name)
self._services.image_records.delete_many(image_names)

Some files were not shown because too many files have changed in this diff.