Compare commits

3659 Commits

Author SHA1 Message Date
86c11f9e27 make session_list api return a raw dict rather than pydantic object 2023-08-17 22:22:06 -04:00
832335998f Update 'monkeypatched' controlnet class (#4269)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Should be removed when added in diffusers
https://github.com/huggingface/diffusers/pull/4599
2023-08-17 15:49:54 -04:00
1102c12084 Merge branch 'main' into fix/sdxl_controlnet 2023-08-17 15:40:51 -04:00
b5cee7d20c blackify chore 2023-08-17 15:40:15 -04:00
89b82b3dc4 (feat): Add Seam Painting to Canvas (1.x, 2.x & SDXL w/ Refiner) (#4292)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes
      
## Description

PR to add Seam Painting back to the Canvas.

## TODO Later

While the graph works as intended, it has become extremely large and
complex. I don't know if there's a simpler way to do this. Maybe there is,
but there are so many connections that visualizing the graph in my head is
extremely difficult. We might need to create some kind of tooling for this,
because it's only going to get crazier.

But it works for now.
2023-08-17 21:24:39 +12:00
8923201fdf Merge branch 'main' into seam-painting 2023-08-17 21:21:44 +12:00
226409107b Fix for Image Deletion issue 2023-08-17 17:18:11 +10:00
ae986bf873 Report RAM usage and RAM cache statistics after each generation (#4287)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes

     
## Have you updated all relevant documentation?
- [X] Yes


## Description

This PR enhances the logging of performance statistics to include RAM
and model cache information. After each generation, the following will
be logged. The new information follows TOTAL GRAPH EXECUTION TIME.

```
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> Graph stats: 2408dbec-50d0-44a3-bbc4-427037e3f7d4
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> Node                 Calls    Seconds VRAM Used
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> main_model_loader        1     0.004s     0.000G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> clip_skip                1     0.002s     0.000G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> compel                   2     2.706s     0.246G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> rand_int                 1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> range_of_size            1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> iterate                  1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> metadata_accumulator     1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> noise                    1     0.003s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> denoise_latents          1     2.429s     2.022G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> l2i                      1     1.020s     1.858G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> TOTAL GRAPH EXECUTION TIME:    6.171s
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM used by InvokeAI process: 4.50G (delta=0.10G)
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM used to load models: 1.99G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> VRAM in use: 0.303G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM cache statistics:
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Model cache hits: 2
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Model cache misses: 5
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Models cached: 5
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Models cleared from cache: 0
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Cache high water mark: 1.99/7.50G    
```

There may be a memory leak in InvokeAI. I'm seeing the process memory
usage increasing by about 100 MB with each generation as shown in the
example above.
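
For reference, a minimal sketch of how a per-process RAM figure like the one
above can be sampled (illustrative only, assuming `psutil`; not necessarily
the PR's actual implementation):

```
import psutil

def process_ram_gb() -> float:
    # resident set size of the current process, in GiB
    return psutil.Process().memory_info().rss / 2**30

before = process_ram_gb()
# ... run a generation ...
after = process_ram_gb()
print(f"RAM used by InvokeAI process: {after:.2f}G (delta={after - before:+.2f}G)")
```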
2023-08-17 16:10:18 +12:00
daf75a1361 blackify 2023-08-16 21:47:29 -04:00
fe4b2d53ed Merge branch 'feat/collect-more-stats' of github.com:invoke-ai/InvokeAI into feat/collect-more-stats 2023-08-16 21:39:29 -04:00
c39f8b478b fix misplaced ram_used and ram_changed attributes 2023-08-16 21:39:18 -04:00
1f82d8013e Merge branch 'main' into feat/collect-more-stats 2023-08-16 18:51:17 -04:00
e373bfca54 fix several broken links in the installation index 2023-08-16 17:54:39 -04:00
2ca8611723 add +/- sign in front of RAM delta 2023-08-16 15:53:01 -04:00
b12cf315a8 Merge branch 'main' into feat/collect-more-stats 2023-08-16 09:19:33 -04:00
975586bb40 Merge branch 'main' into seam-painting 2023-08-17 01:05:42 +12:00
a7ba142ad9 feat(ui): set min zoom on nodes to 0.1 2023-08-16 23:04:36 +10:00
0d36bab6cc fix(ui): do not rerender top panel buttons 2023-08-16 23:04:36 +10:00
c2e7f62701 fix(ui): do not rerender edges 2023-08-16 23:04:36 +10:00
1f194e3688 chore(ui): lint 2023-08-16 23:04:36 +10:00
f9b8b5cff2 fix(ui): improve node rendering performance
Previously the editor was prop-drilling node data and templates to get values deep into nodes. This ended up causing very noticeable performance degradation; for example, text entry fields were super laggy.

Refactor the whole thing to use memoized selectors via hooks. The hooks are mostly very narrow, returning only the data needed.

Data objects are never passed down, only node id and field name - sometimes the field kind ('input' or 'output').

The end result is a *much* smoother node editor with very minimal rerenders.
2023-08-16 23:04:36 +10:00
f7c92e1eff fix(ui): disable awkward resize animation for <Flow /> 2023-08-16 23:04:36 +10:00
70b8c3dfea fix(ui): fix context menu on workflow editor
There is a tricky mouse event interaction between chakra's `useOutsideClick()` hook (used by chakra `<Menu />`) and reactflow. The hook doesn't work when you click the main reactflow area.

To get around this, I've used a dirty hack, copy-pasting the simple context menu component we use, and extending it slightly to respond to a global `contextMenusClosed` redux action.
2023-08-16 23:04:36 +10:00
43b30355e4 feat: make primitive node titles consistent 2023-08-16 23:04:36 +10:00
a93bd01353 fix bad merge 2023-08-16 08:53:07 -04:00
bb1b8ceaa8 Update invokeai/backend/model_management/model_cache.py
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
2023-08-16 08:48:44 -04:00
be8edaf3fd Merge branch 'main' into feat/collect-more-stats 2023-08-16 08:48:14 -04:00
9cbaefaa81 feat: Add Seam Painting to SDXL 2023-08-16 19:46:48 +12:00
cc7c6e5d41 feat: Add Seam Painting with Scale Before 2023-08-16 19:35:03 +12:00
f2ee8a3da8 wip: Basic Seam Painting (only normal models) (no scale) 2023-08-16 17:26:23 +12:00
e98d7a52d4 feat: Add Seam Painting Options 2023-08-16 17:25:55 +12:00
21e1c0a5f0 tweaked formatting 2023-08-15 22:25:30 -04:00
611e241ca7 chore(ui): regen types 2023-08-16 12:07:34 +10:00
6df4af2c79 chore: lint 2023-08-16 12:07:34 +10:00
0f8606914e feat(ui): remove shouldShowDeleteButton
- remove this state entirely
- use `state.hotkeys.shift` directly to hide and show the icon on gallery
- also formatting
2023-08-16 12:07:34 +10:00
5b1099193d fix(ui): restore reset button in node image component 2023-08-16 12:07:34 +10:00
230131646f feat(ui): use imageDTOs instead of images in starring queries 2023-08-16 12:07:34 +10:00
8b1ec2685f chore: black 2023-08-16 12:07:34 +10:00
60c2c877d7 fix: add response model for star/unstar routes
- also implement pessimistic updates for starring, only changing the images that were successfully updated by backend
- some autoformat changes crept in
2023-08-16 12:07:34 +10:00
315a056686 feat(ui): show Star All if selection is a mix of starred and unstarred 2023-08-16 12:07:34 +10:00
80b0c5eab4 change from pin to star 2023-08-16 12:07:34 +10:00
08dc265e09 add listener to update selection list with change in star status 2023-08-16 12:07:34 +10:00
029a95550e rename pin to star, add multiselect and remove single image update api 2023-08-16 12:07:34 +10:00
ee6a26a97d update list images endpoint to sort by pinnedness and then created_at 2023-08-16 12:07:34 +10:00
a512fdc0f6 update IAIDndImage to use children for icons, add UI for shift+delete to delete images from gallery 2023-08-16 12:07:34 +10:00
767a612746 (ui) WIP trying to get all cache scenarios working smoothly, fix assets 2023-08-16 12:07:34 +10:00
0a71d6baa1 (ui) update cache to render pinned images alongside unpinned correctly, as well as changes in pinnedness 2023-08-16 12:07:34 +10:00
37be827e17 (ui) hook up toggle pin mutation with context menu for single image 2023-08-16 12:07:34 +10:00
04a9894e77 (api) add ability to pin and unpin images 2023-08-16 12:07:34 +10:00
f9958de6be added memory used to load models 2023-08-15 21:56:19 -04:00
ec10aca91e report RAM and RAM cache statistics 2023-08-15 21:00:30 -04:00
2b7dd3e236 feat: add missing primitive collections
- add missing primitive collections
- remove `Seed` and `LoRAField` (they don't exist)
2023-08-16 09:54:38 +10:00
fa884134d9 feat: rename ui_type_hint to ui_type
Just a bit more succinct while not losing any clarity.
2023-08-16 09:54:38 +10:00
18006cab9a chore: Regen frontend types 2023-08-16 09:54:38 +10:00
75ea716c13 feat(ui): hide node footer if there is nothing to display 2023-08-16 09:54:38 +10:00
d5f7027597 feat: Save Mask option for Canvas 2023-08-16 09:54:38 +10:00
b1ad777f5a fix: Outpainting being broken due to field name change 2023-08-16 09:54:38 +10:00
f65c8092cb fix(ui): fix issue with node editor state not restoring correctly on mount
If `reactflow` initializes before the node templates are parsed, edges may not be rendered and the viewport may get reset.

- Add `isReady` state to `NodesState`. This is false when we are loading or parsing node templates and true when that is finished.
- Conditionally render `reactflow` based on `isReady`.
- Add `viewport` to `NodesState` & handlers to keep it synced. This allows `reactflow` to mount and unmount freely and not lose viewport.
2023-08-16 09:54:38 +10:00
94bfef3543 feat(ui): add UI component for unknown node types 2023-08-16 09:54:38 +10:00
c48fd9c083 feat(nodes): refactor parameter/primitive nodes
Refine concept of "parameter" nodes to "primitives":
- integer
- float
- string
- boolean
- image
- latents
- conditioning
- color

Each primitive has:
- A field definition, if it is not already python primitive value. The field is how this primitive value is passed between nodes. Collections are lists of the field in node definitions. ex: `ImageField` & `list[ImageField]`
- A single output class. ex: `ImageOutput`
- A collection output class. ex: `ImageCollectionOutput`
- A node, which functions to load or pass on the primitive value. ex: `ImageInvocation` (in this case, `ImageInvocation` replaces `LoadImage`); see the sketch below

Plus a number of related changes:
- Reorganize these into `primitives.py`
- Update all nodes and logic to use primitives
- Consolidate "prompt" outputs into "string" & "mask" into "image" (there's no reason for these to be different; they function identically)
- Update default graphs & tests
- Regen frontend types & minor frontend tidy related to changes
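
A schematic sketch of that field/output/node trio for the image primitive
(simplified pydantic-style classes for illustration; the real definitions
carry more metadata):

```
from pydantic import BaseModel, Field

class ImageField(BaseModel):
    # how an image value travels between nodes; collections are list[ImageField]
    image_name: str = Field(description="The name of the image")

class ImageOutput(BaseModel):
    # the single output class
    image: ImageField

class ImageCollectionOutput(BaseModel):
    # the collection output class
    collection: list[ImageField]

class ImageInvocation(BaseModel):
    # the node that loads or passes on the primitive value (replaces LoadImage)
    image: ImageField

    def invoke(self) -> ImageOutput:
        return ImageOutput(image=self.image)
```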
2023-08-16 09:54:38 +10:00
f49fc7fb55 feat: node editor
squashed rebase on main after backend refactor
2023-08-16 09:54:38 +10:00
a4b029d03c write RAM usage and change after each generation 2023-08-15 18:21:31 -04:00
d6c9bf5b38 added sdxl controlnet detection 2023-08-15 12:51:15 -04:00
4f82273fc4 Update 'monkeypatched' controlnet class 2023-08-15 11:07:43 -04:00
e54355f0f3 Prevent merge from crashing with a WindowsPath serialization error (#4271)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [X] Yes

## Description

On Windows systems, model merging was crashing at the very last step
with an error related to not being able to serialize a WindowsPath
object. I have converted the path that is passed to `save_pretrained`
into a string, which I believe will solve the problem.
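
A minimal sketch of the conversion (the pipeline class here is a stand-in;
a follow-up commit below swaps `str()` for `as_posix()`):

```
from pathlib import Path

class FakePipeline:
    # stand-in for the merged diffusers pipeline
    def save_pretrained(self, save_directory: str):
        print(f"saving to {save_directory!r}")

dump_path = Path("C:/invokeai/models/merged")  # a WindowsPath on Windows

# passing the Path object itself triggered the serialization error;
# converting to a string first avoids it
FakePipeline().save_pretrained(str(dump_path))
```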

Note that I had to rebuild the web frontend and add it to the PR in
order to test on my Windows VM which does not have the full node stack
installed due to space limitations.

## Related Tickets & Documents


https://discord.com/channels/1020123559063990373/1042475531079262378/1140680788954861698
2023-08-15 15:11:01 +12:00
b2934be6ba use as_posix() instead of str() 2023-08-14 22:59:26 -04:00
eab67b6a01 fixed actual bug 2023-08-14 22:59:26 -04:00
02fa116690 rebuild frontend for windows testing 2023-08-14 22:59:26 -04:00
5190a4c282 further removal of Paths 2023-08-14 22:59:26 -04:00
141d438517 prevent windows from crashing with a WindowsPath serialization error on merge 2023-08-14 22:59:26 -04:00
549d2e0485 chore: remove old web server code and python deps 2023-08-15 10:54:57 +10:00
d3d8b71c67 feat: Change refinerStart default to 0.8
This is the recommended value according to the paper.
2023-08-15 10:13:02 +10:00
6eaaa75a5d Use double quotes in docker entrypoint to prevent word splitting (#4260)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it's smol

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
docker_entrypoint.sh does not quote its variable expansions to prevent word
splitting, causing paths with spaces to fail, as in #3913

## Related Tickets & Documents
#3913

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #3913
- Closes #3913

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-15 02:15:22 +12:00
ba57ec5907 Merge branch 'main' into fix/docker_entrypoint 2023-08-14 09:26:32 -04:00
cd0e4bc1d7 Refactor generation backend (#4201)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [x] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
- Remove SDXL raw prompt nodes
- SDXL and SD1/2 generation merged into the same nodes - t2l/l2l
- Fixed: if xformers is not installed, we were trying to enable attention
slicing, ignoring torch-sdp availability
- Fixed: an SDXL negative prompt now creates a zeroed tensor (according to
the official code; see the sketch below)
- Added mask field to l2l node
- Removed inpaint node and all legacy code related to this node
- Pass info about seed in latents, so we can use it to initialize
ancestral/sde schedulers
- t2l and l2l nodes moved from strength to denoising_start/end
- Removed code for noise threshold (@hipsterusername said there are no
plans to restore this feature)
- Fixed: first preview image is no longer gray
- Fixed: report correct total step count in progress, added scheduler
order to the progress event
- Added MaskEdge and ColorCorrect nodes (@hipsterusername)
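
A sketch of the zeroed negative-prompt behavior noted above (tensor shapes
and the encoder are stand-ins):

```
import torch

def encode(text: str) -> torch.Tensor:
    # stand-in for the text encoder
    return torch.randn(1, 77, 2048)

prompt_embeds = encode("a castle on a hill")
negative_prompt = ""

# per the official SDXL code, an empty negative prompt becomes a zero
# tensor of the same shape rather than the encoding of the empty string
if negative_prompt == "":
    negative_embeds = torch.zeros_like(prompt_embeds)
else:
    negative_embeds = encode(negative_prompt)
```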

## Added/updated tests?

- [ ] Yes
- [x] No
2023-08-13 23:08:11 -04:00
9d3cd85bdd chore: black 2023-08-14 13:02:33 +10:00
46a8eed33e Merge branch 'main' into feat/refactor_generation_backend 2023-08-14 13:01:28 +10:00
9fee3f7b66 Revert "Add magic to debug"
This reverts commit 511da59793.
2023-08-14 12:58:08 +10:00
9217a217d4 fix(ui): refiner uses steps directly, no math 2023-08-14 12:56:37 +10:00
b2700ffde4 Update post processing docs 2023-08-13 22:25:49 -04:00
511da59793 Add magic to debug 2023-08-14 05:14:24 +03:00
409e5d01ba Fix cpu_only schedulers(unipc) 2023-08-14 05:14:05 +03:00
58d5c61c79 fix: SDXL Inpaint & Outpaint using regular Img2Img strength 2023-08-14 12:55:18 +12:00
3d8da67be3 Remove callback-generator wrapper 2023-08-14 03:35:15 +03:00
957ee6d370 fix: SDXL Canvas Inpaint & Outpaint not respecting SDXL Refiner start value 2023-08-14 12:13:29 +12:00
fecad2c014 fix: SDXL Denoising Strength not plugged in correctly 2023-08-14 11:59:11 +12:00
550e6ef27a re: Set the image denoise str back to 0
Bug has been fixed. No longer needed.
2023-08-14 10:27:07 +12:00
cc85c98bf3 feat: Upgrade Diffusers to 0.19.3
Needed for some schedulers
2023-08-14 09:26:28 +12:00
75fb3f429f re: Readd Refiner Step Math but cap max steps to 1000 2023-08-14 09:26:01 +12:00
d63bb39475 Make dpmpp_sde(_k) use a non-random seed 2023-08-14 00:24:38 +03:00
096333ba3f Fix error on zero timesteps 2023-08-14 00:20:01 +03:00
0b2925709c Use double quotes in docker entrypoint to prevent word splitting 2023-08-13 14:36:55 -05:00
7a8f14d595 Clean-up code a bit 2023-08-13 19:50:48 +03:00
59ba9fc0f6 Flip bits in seed for sde/ancestral schedulers to have different noise from initial 2023-08-13 19:50:16 +03:00
6e0beb1ed4 Fixes for second order scheduler timesteps 2023-08-13 19:31:47 +03:00
94636ddb03 Fix empty prompt handling 2023-08-13 19:31:14 +03:00
746e099f0d fix: Do not do step math for refinerSteps
This is probably better done on the backend or in a different way. This can cause steps to go above 1000, which is more than the configured maximum for the model.
2023-08-14 04:04:15 +12:00
499e89d6f6 feat: Add SDXL Negative Aesthetic Score 2023-08-14 04:02:36 +12:00
250d530260 Fixed import issue in invokeai/frontend/install/model_install.py (#4259)
This fixes an import issue introduced in commit 1bfe983. The change made
'invokeai_configure' into a module, but this line still tries to call it
as if it were a function, which results in a `'module' object is not
callable` error.
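
The shape of the bug, as a self-contained sketch (the real module path in
the repo may differ):

```
import types

invokeai_configure = types.ModuleType("invokeai_configure")  # now a module

try:
    invokeai_configure()  # what the broken line effectively did
except TypeError as err:
    print(err)  # 'module' object is not callable
```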

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

imic from Discord asked that I submit a PR to fix this bug.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-14 02:40:08 +12:00
90fa3eebb3 feat: Make SDXL Style Prompt not take spaces 2023-08-14 02:25:39 +12:00
0aba105a8f Missed a spot in configure_invokeai.py 2023-08-13 05:32:35 -07:00
9e2e82a752 Fixed import issue in invokeai/frontend/install/model_install.py
This fixes an import issue introduced in commit 1bfe983.
The change made 'invokeai_configure' into a module, but this line still tries to call it as if it were a function. This results in a `'module' object is not callable` error.
2023-08-13 05:15:55 -07:00
561951ad98 chore: Black linting 2023-08-13 21:28:39 +12:00
3ff9961bda fix: Circular dependency in Mask Blur Method 2023-08-13 21:26:20 +12:00
33779b6339 chore: Remove shouldFitToWidthHeight from Inpaint Graphs
Was never used for inpainting but was fed to the node anyway.
2023-08-13 21:16:37 +12:00
b35cdc05a5 feat: Scaled Processing to Inpainting & Outpainting / 1.x & SDXL 2023-08-13 20:17:23 +12:00
9afb5d6ace Update version to 3.0.2post1 2023-08-12 19:49:33 -04:00
50177b8ed9 Update frontend JS files 2023-08-12 19:49:33 -04:00
c8864e475b fix: SDXL Lora's not working on Canvas Image To Image 2023-08-13 04:34:15 +12:00
fcf7f4ac77 feat: Add SDXL ControlNet To Linear UI 2023-08-13 04:27:38 +12:00
29f1c6dc82 fix: Image To Image FP32 Fix for Canvas SDXL 2023-08-13 04:23:52 +12:00
28208e6f49 fix: Fix VAE Precision not working for SDXL Canvas Modes 2023-08-13 04:09:51 +12:00
c33acf951e feat: Make Refiner work with Canvas 2023-08-13 03:53:40 +12:00
500cd552bc feat: Make SDXL work across the board + Custom VAE Support
Also a major cleanup pass to the SDXL graphs to ensure there's no ID overlap
2023-08-13 01:45:03 +12:00
55d27f71a3 feat: Give each graph its own unique id 2023-08-13 00:51:10 +12:00
746c7c59ff fix: remove extra node for canvas output catch 2023-08-12 22:39:30 +12:00
ad96c41156 feat: Add Canvas Output node to all Canvas Graphs 2023-08-12 22:04:43 +12:00
27bd127fb0 fix: Do not add anything but final output to staging area 2023-08-12 21:10:30 +12:00
f296e5c41e wip: Remove MaskBlur / Adjust color correction 2023-08-12 20:54:30 +12:00
a67d8376c7 fix missed spot for autoAddBoardId none 2023-08-12 18:07:01 +10:00
9f6221fe8c Merge branch 'main' into feat/refactor_generation_backend 2023-08-12 18:37:47 +12:00
7587b54787 chore: Cleanup, comment and organize Node Graphs
Before it gets too chaotic
2023-08-12 17:17:46 +12:00
7254ffc3e7 chore: Split Inpaint and Outpaint Graphs 2023-08-12 16:30:20 +12:00
6034fa12de feat: Add Mask Blur node 2023-08-12 16:20:58 +12:00
ce3675fc14 Apply denoising_start/end according to timestep value 2023-08-12 03:19:49 +03:00
8acd7eeca5 feat: Disable clip skip for SDXL Canvas 2023-08-12 08:18:30 +12:00
7293a6036a feat(wip): Add SDXL To Canvas 2023-08-12 08:16:05 +12:00
0b11f309ca instead of crashing when a corrupted model is detected, warn and move on 2023-08-11 15:05:14 -04:00
6a8eb392b2 Add support for loading SDXL LoRA weights in diffusers format. 2023-08-11 14:40:22 -04:00
f343ab0302 wip: Port Outpainting to new backend 2023-08-12 06:15:59 +12:00
824ca92760 fix maximum python version instructions 2023-08-11 13:49:39 -04:00
d7d6298ec0 feat: Add Infill Method support 2023-08-12 05:32:11 +12:00
58a48bf197 fix: LoRA list name sorting 2023-08-12 04:47:15 +12:00
5629d8fa37 fix: Key issue in Lora List 2023-08-12 04:43:40 +12:00
1affb7f647 feat: Add Paste / Mask Blur / Color Correction to Inpainting
Seam options are now removed. They are replaced by two options, Mask Blur and Mask Blur Method, which control the softness of the mask that is being painted.
2023-08-12 03:28:19 +12:00
69a9dc7b36 wip: Add initial Inpaint Graph 2023-08-12 02:42:13 +12:00
f3ae52ff97 Fix error at high denoising_start, fix unipc(cpu_only) 2023-08-11 15:46:16 +03:00
7479f9cc02 feat: Update LinearUI to use new backend (except Inpaint) 2023-08-11 22:22:01 +12:00
87ce4ab27c fix: Update default_graph to use new DenoiseLatents 2023-08-11 22:21:13 +12:00
7c0023ad9e feat: Remove TextToLatents / Rename Latents To Latents -> DenoiseLatents 2023-08-11 22:20:37 +12:00
231e665675 Merge branch 'main' into feat/refactor_generation_backend 2023-08-11 20:53:38 +12:00
80fd4c2176 undo lint changes 2023-08-11 14:26:09 +10:00
3b6e425e17 fix error detail in toast 2023-08-11 14:26:09 +10:00
50415450d8 invalidate board total when images deleted, only run date range logic if board has less than 20 images 2023-08-11 14:26:09 +10:00
06296896a9 Update invokeai version 2023-08-10 22:23:41 -04:00
a7399aca0c Add new JS files for 3.0.2 build 2023-08-10 22:23:41 -04:00
d1ea8b1e98 Two changes to command-line scripts (#4235)
During install testing I discovered two small problems in the
command-line scripts. These are fixed.

## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes
      
## Have you updated all relevant documentation?
- [X] Yes


## Description

- installer - use correct entry point for invokeai-configure
- model merge script - prevent error when `--root` not provided
2023-08-10 21:11:45 -04:00
f851ad7ba0 Two changes to command-line scripts
- installer - use correct entry point for invokeai-configure
- model merge script - prevent error when `--root` not provided
2023-08-10 20:59:22 -04:00
591838a84b Add support for LyCORIS IA3 format (#4234)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Add support for LyCORIS IA3 format

## Related Tickets & Documents
- Closes #4229 

## Added/updated tests?

- [ ] Yes
- [x] No
2023-08-11 03:30:35 +03:00
c0c2ab3dcf Format by black 2023-08-11 03:20:56 +03:00
56023bc725 Add support for LyCORIS IA3 format 2023-08-11 02:08:08 +03:00
2ef6a8995b Temporary force set vae to same precision as unet 2023-08-10 18:01:58 -04:00
d0fee93aac round slider values to nice numbers 2023-08-10 18:00:45 -04:00
1bfe9835cf clip cache settings to permissible values; remove redundant imports in install __init__ file 2023-08-10 18:00:45 -04:00
8e7eae6cc7 Probe LoRAs that do not have the text encoder (#4181)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] No - minor fix

      
## Have you updated all relevant documentation?
- [X] Yes

## Description

It turns out that some LoRAs do not have the text encoder model, and
this was causing the code that distinguishes the model base type during
model import to reject them as having an unknown base model. This PR
enables detection of these cases.
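
A hedged sketch of the situation (the key prefixes follow the common
kohya-style convention; the PR's actual probe logic is different and more
thorough):

```
def lora_parts(state_dict: dict) -> tuple[bool, bool]:
    # report which sub-models a kohya-style LoRA actually touches
    has_te = any(k.startswith("lora_te") for k in state_dict)
    has_unet = any(k.startswith("lora_unet") for k in state_dict)
    return has_te, has_unet

# a LoRA with UNet-only keys must still be probed for its base model
# (e.g. from the UNet key shapes) instead of being rejected outright
```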
2023-08-10 17:50:20 -04:00
f6522c8971 Merge branch 'main' into fix/detect-more-loras 2023-08-10 17:33:16 -04:00
a969707e45 prevent vae: '' from crashing model 2023-08-10 17:33:04 -04:00
6c8e898f09 Update scripts/verify_checkpoint_template.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-08-10 16:00:33 -04:00
7bad9bcf53 update dependencies and docs to cu118 2023-08-10 15:19:12 -04:00
d42b45116f fix(ui): fix lora sort (#4222)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      

## Description

was sorting with disabled LoRAs at the top of the list instead of the bottom

fixes #4217

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #4217

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/dd895b86-05de-4303-8674-9b181037abaa)
2023-08-10 21:04:28 +12:00
d4812bbc8d Merge branch 'main' into fix/ui/fix-lora-sort 2023-08-10 19:00:26 +10:00
3cd05cf6bf fix(ui): fix lora sort
was sorting with disabled LoRAs at the top of the list instead of the bottom

fixes #4217
2023-08-10 15:31:29 +10:00
2564301aeb fix(ui): fix canvas model switching (#4221)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

There was no check at all to see if the canvas had a valid model already
selected. The first model in the list was selected every time.

Now, we check if it's valid. If not, we go through the logic to try and
pick the first valid model.

If there are no valid models, or there was a problem listing models, the
model selection is cleared.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->


- Closes #4125

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

- Go to Canvas tab
- Select a model other than the first one in the list
- Go to a different tab
- Go back to Canvas tab
- The model should be the same as you selected
2023-08-10 17:29:41 +12:00
da0efeaa7f fix(ui): fix canvas model switching
There was no check at all to see if the canvas had a valid model already selected. The first model in the list was selected every time.

Now, we check if it's valid. If not, we go through the logic to try and pick the first valid model.

If there are no valid models, or there was a problem listing models, the model selection is cleared.
2023-08-10 15:20:37 +10:00
49cce1eec6 feat: add app_version to image metadata 2023-08-10 14:22:39 +10:00
e9ec5ab85c Apply requested changes
Co-Authored-By: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-08-10 06:19:22 +03:00
17fed1c870 Fix merge conflict errors 2023-08-10 05:03:33 +03:00
ade78b9591 Merge branch 'main' into feat/refactor_generation_backend 2023-08-10 04:32:16 +03:00
c8fbaf54b6 Add self.min, not self.max 2023-08-10 09:59:22 +10:00
f86d388786 refactor(diffusers_pipeline): remove unused pipeline methods 🚮 (#4175) 2023-08-09 15:19:27 -07:00
cd2c688562 Merge branch 'main' into refactor/remove_unused_pipeline_methods 2023-08-09 17:26:09 -04:00
2d29ac6f0d Add techjedi's image import script (#4171)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [X] Yes

## Description

This PR adds the `invokeai-import-images` script, which imports a
directory of 2.*.*-generated images into the current InvokeAI root
directory, preserving and converting their metadata. The script also
handles 3.* images.

Many thanks to @techjedi for writing this. This version differs from the
original in two minor respects:

1. It is installed as an `invokeai-import-images` command.
2. The prompts for image and database paths use file completion provided
by the `prompt_toolkit` library.
## To Test

1. Activate the virtual environment for the destination root to import
INTO
2. Run `invokeai-import-images`
3. Follow the prompts

## Related Tickets & Documents

This is a frequently-requested feature on Discord, but I couldn't find
an Issue.

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [X] No : but should in the future
2023-08-09 13:17:08 -04:00
2c2b731386 fix typo 2023-08-09 13:08:59 -04:00
2f68a1a76c use Stalker's simplified LoRA vector-length detection code 2023-08-09 09:21:29 -04:00
930e7bc754 Merge branch 'main' into feat/image-import-script 2023-08-09 08:54:56 -04:00
7d4ace962a Merge branch 'main' into fix/detect-more-loras 2023-08-09 08:48:27 -04:00
06842f8e0a Update to 3.0.2rc1 2023-08-09 00:29:43 -04:00
c82da330db Pin safetensors to 0.3.1
Safetensors 0.3.2 does not ship an ARM64 wheel, so installation on macOS fails
2023-08-09 00:29:43 -04:00
628df4ec98 Add updated frontend html file 2023-08-09 00:29:43 -04:00
16b956616f Update version to 3.0.2 2023-08-09 00:29:43 -04:00
604cc17a3a Yarn build JS files 2023-08-09 00:29:43 -04:00
37c9b85549 Add slider for VRAM cache in configure script (#4133)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No - will be in release notes

## Description

On CUDA systems, this PR adds a new slider to the install-time configure
script for adjusting the VRAM cache and suggests a good starting value
based on the user's max VRAM (this is subject to verification).

On non-CUDA systems this slider is suppressed.

Please test on both CUDA and non-CUDA systems using:
```
invokeai-configure --root ~/invokeai-main/ --skip-sd --skip-support
```

To see and test the default values, move `invokeai.yaml` out of the way
before running.

**Note added 8 August 2023**

This PR also fixes the configure and model install scripts so that if
the window is too small to fit the user interface, the user will be
prompted to interactively resize the window and/or change font size
(with the option to give up). This will prevent `npyscreen` from
generating its horrible tracebacks.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-09 12:27:54 +10:00
8b39b67ec7 Merge branch 'main' into feat/select-vram-in-config 2023-08-09 12:17:27 +10:00
a933977861 Pick correct config file for sdxl models (#4191)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

If `models.yaml` is cleared out for some reason, the model manager will
repopulate it by scanning the `models` directory. However, this would fail
with a pydantic validation error if any SDXL checkpoint models were present,
because of the lack of logic to pick the correct configuration file. That
logic has now been added.
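
Roughly the kind of check involved (the key prefixes are characteristic of
the checkpoint formats; the config filenames are illustrative):

```
def config_for_checkpoint(state_dict: dict) -> str:
    # SDXL single-file checkpoints carry a `conditioner.*` tree for their
    # dual text encoders; SD 1.x checkpoints use `cond_stage_model.*`
    if any(k.startswith("conditioner.") for k in state_dict):
        return "sd_xl_base.yaml"
    return "v1-inference.yaml"
```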
2023-08-09 11:16:48 +10:00
dfb41d8461 Merge branch 'main' into bugfix/autodetect-sdxl-ckpt-config 2023-08-09 03:57:44 +03:00
e98f7eda2e Fix total_steps in generation event, order field added 2023-08-09 03:34:25 +03:00
b4a74f6523 Add MaskEdge and ColorCorrect nodes
Co-Authored-By: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2023-08-08 23:57:02 +03:00
f7aec3b934 Move conditioning class to backend 2023-08-08 23:33:52 +03:00
4d5169e16d Merge branch 'main' into feat/select-vram-in-config 2023-08-08 13:50:02 -04:00
a7e44678fb Remove legacy/unused code 2023-08-08 20:49:01 +03:00
da0184a786 Invert mask, fix l2l when no mask is connected, remove zeroing latents on zero start 2023-08-08 20:01:49 +03:00
f56f19710d allow user to interactively resize screen before UI runs 2023-08-08 12:27:25 -04:00
96b7248051 Add mask to l2l 2023-08-08 18:50:36 +03:00
e77400ab62 remove deprecated options from config 2023-08-08 08:33:30 -07:00
13347f6aec blackified 2023-08-08 08:33:30 -07:00
a9bf387e5e turned on Pydantic validate_assignment 2023-08-08 08:33:30 -07:00
8258c87a9f refrain from writing deprecated legacy options to invokeai.yaml 2023-08-08 08:33:30 -07:00
1b1b399fd0 Fix crash when attempting to update a model (#4192)
## What type of PR is this? (check all applicable)

- [X] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [X] No, because: small fix

      
## Have you updated all relevant documentation?
- [X] Yes

## Description

A logic bug was introduced in PR #4109 that caused Web-based model
updates to fail with a pydantic validation error. This corrects the
problem.

## Related Tickets & Documents

PR #4109
2023-08-08 10:54:27 -04:00
a8d3e078c0 Merge branch 'main' into fix/detect-more-loras 2023-08-08 10:42:45 -04:00
6ed7ba57dd Merge branch 'main' into bugfix/fix-model-updates 2023-08-08 09:05:25 -04:00
2b3b77a276 api(images): allow HEAD request on image/full (#4193) 2023-08-08 00:08:48 -07:00
8b8ec68b30 Merge branch 'main' into feat/image_http_head 2023-08-08 00:02:48 -07:00
e20af5aef0 feat(ui): add LoRA support to SDXL linear UI
new graph modifier `addSDXLLoRasToGraph()` handles adding LoRA to the SDXL t2i and i2i graphs.
2023-08-08 15:02:00 +10:00
57e8ec9488 chore(ui): lint/format 2023-08-08 12:53:47 +10:00
734a9e4271 invalidate board total when images deleted, only run date range logic if board has less than 20 images 2023-08-08 12:53:47 +10:00
fe924daee3 add option to disable multiselect 2023-08-08 12:53:47 +10:00
750f09fbed blackify 2023-08-07 21:01:59 -04:00
4df581811e add template verification script 2023-08-07 21:01:48 -04:00
eb70bc2ae4 add scripts to create model templates and check whether they match 2023-08-07 21:00:47 -04:00
5f29526a8e Add seed to latents field 2023-08-08 04:00:33 +03:00
492bfe002a Remove sdxl t2l/l2l nodes 2023-08-08 03:38:42 +03:00
809705c30d api(images): allow HEAD request on image/full 2023-08-07 15:11:47 -07:00
f0918edf98 improve error reporting on unrecognized lora models 2023-08-07 16:38:58 -04:00
a846d82fa1 Add techjedi code to avoid rendering prompt/seed with null
- Added techjedi github and real names
2023-08-07 16:29:46 -04:00
22f7cf0638 add stalker's complicated but effective code for finding token vector length in LoRAs 2023-08-07 16:19:57 -04:00
25c669b1d6 Merge remote-tracking branch 'origin/main' into refactor/remove_unused_pipeline_methods 2023-08-07 13:03:10 -07:00
4367061b19 fix(ModelManager): fix overridden VAE with relative path (#4059) 2023-08-07 12:57:32 -07:00
0fd13d3604 Merge branch 'main' into feat/select-vram-in-config 2023-08-07 15:51:59 -04:00
72a3e776b2 fix logic error introduced in PR 4109 2023-08-07 15:38:22 -04:00
af044007d5 pick correct config file for sdxl models 2023-08-07 15:19:49 -04:00
1db2c93f75 Fix preview, inpaint 2023-08-07 21:27:32 +03:00
f272a44feb Merge branch 'main' into refactor/model_manager_instantiate 2023-08-07 10:59:28 -07:00
2539e26c18 Apply denoising_start/end, add torch-sdp to memory-efficient attention func 2023-08-07 19:57:11 +03:00
b0738b7f70 Fixes, zero tensor for empty negative prompt, remove raw prompt node 2023-08-07 18:37:06 +03:00
8469d3e95a chore: black 2023-08-07 10:05:52 +10:00
ae17d01e1d Fix hue adjustment (#4182)
* Fix hue adjustment

Hue adjustment wasn't working correctly because color channels got swapped. This has now been fixed and we're using PIL rather than cv2 to do the RGBA->HSV->RGBA conversion. The range of hue adjustment is also the more typical 0..360 degrees.
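
A minimal sketch of a PIL-based hue rotation over 0..360 degrees
(illustrative, assuming 8-bit HSV bands; not the node's exact code):

```
from PIL import Image

def adjust_hue(image: Image.Image, degrees: float) -> Image.Image:
    rgba = image.convert("RGBA")
    alpha = rgba.getchannel("A")
    # PIL stores hue as 0..255, so rescale the degree offset
    h, s, v = rgba.convert("RGB").convert("HSV").split()
    h = h.point(lambda px: (px + int(degrees / 360 * 256)) % 256)
    out = Image.merge("HSV", (h, s, v)).convert("RGB").convert("RGBA")
    out.putalpha(alpha)
    return out
```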
2023-08-06 23:23:51 +00:00
f3d3316558 probe LoRAs that do not have the text encoder 2023-08-06 16:00:53 -04:00
5a6cefb0ea add backslash to end of incomplete windows paths 2023-08-06 12:34:35 -04:00
1a6f5f0860 use backslash on Windows systems for autoadded delimiter 2023-08-06 12:29:31 -04:00
5bfd6cb66f Merge remote-tracking branch 'origin/main' into refactor/model_manager_instantiate
# Conflicts:
#	invokeai/backend/model_management/model_manager.py
2023-08-05 22:02:28 -07:00
59caff7ff0 refactor(diffusers_pipeline): remove unused img2img wrappers 🚮
invokeai.app no longer needs this as a single method, as it builds on latents2latents instead.
2023-08-05 21:50:52 -07:00
6487e7d906 refactor(diffusers_pipeline): remove unused ModelGroup 🚮
orphaned since #3550 removed the LazilyLoadedModelGroup code, probably unused since ModelCache took over responsibility for sequential offload somewhere around #3335.
2023-08-05 21:50:52 -07:00
77033eabd3 refactor(diffusers_pipeline): remove unused precision 🚮 2023-08-05 21:50:52 -07:00
b80abdd101 refactor(diffusers_pipeline): remove unused image_from_embeddings 🚮 2023-08-05 21:50:52 -07:00
006d782cc8 refactor(diffusers_pipeline): tidy imports 🚮 2023-08-05 21:50:52 -07:00
d09dfc3e9b fix(api): use db_location instead of db_path_string
This may just be the SQLite memory sentinel value.
2023-08-06 14:09:04 +10:00
66f524cae7 fix(mm): fix a lot of typing issues
Most fixes are just things being typed as `str` but having default values of `None`, but there are some minor logic changes.
2023-08-06 14:09:04 +10:00
9ba50130a1 fix(api): fix db location types
The services all want strings instead of `Path`s; create variable for the string representation of the path provided by the config services.
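
A sketch of the pattern (names hypothetical; note that SQLite's `:memory:`
sentinel must bypass `Path` handling entirely):

```
from pathlib import Path

def db_location(db_path: Path, use_memory_db: bool) -> str:
    # services consume a plain string, never a Path
    return ":memory:" if use_memory_db else str(db_path.resolve())
```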
2023-08-06 14:09:04 +10:00
d4cf2d2666 fix(api): fix ApiDependencies.invoker types
`ApiDependencies.invoker` provides typing for the API's services layer. Marking it `Optional` results in all the routes seeing it as optional, which is not good.

Instead of marking it optional to satisfy the initial assignment to `None`, we can just skip the initial assignment. This preserves the IDE hinting in the API layer and remains legal for the type checker.
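
A sketch of the pattern (the `Invoker` stand-in is illustrative):

```
class Invoker:
    # stand-in for the real services invoker
    pass

class ApiDependencies:
    # annotated but not assigned: type checkers see `Invoker`, never
    # `Optional[Invoker]`, and startup wires it up before first use
    invoker: Invoker

ApiDependencies.invoker = Invoker()  # done during app startup
```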
2023-08-06 14:09:04 +10:00
9aaf67c5b4 wip 2023-08-06 05:05:25 +03:00
b8b589c150 fix(nodes): fix hsl nodes rebase conflict 2023-08-06 09:57:49 +10:00
d93900a8de Added HSL Nodes 2023-08-06 09:57:49 +10:00
7f4c387080 test(model_management): factor out name strings 2023-08-05 15:46:46 -07:00
80876bbbd1 Merge remote-tracking branch 'origin/refactor/model_manager_instantiate' into refactor/model_manager_instantiate 2023-08-05 15:25:05 -07:00
7a4ff4c089 Merge branch 'main' into refactor/model_manager_instantiate 2023-08-05 15:23:38 -07:00
44bf308192 test(model_management): add a couple tests for _get_model_path 2023-08-05 15:22:23 -07:00
12e51c84ae blackified 2023-08-05 14:26:16 -07:00
b2eb83deff add docs 2023-08-05 14:26:16 -07:00
0ccc3b509e add techjedi's import script, with some filecompletion tweaks 2023-08-05 14:26:16 -07:00
4043a4c21c blackified 2023-08-05 12:44:58 -04:00
c8ceb96091 add docs 2023-08-05 12:26:52 -04:00
83f75750a9 add techjedi's import script, with some filecompletion tweaks 2023-08-05 12:19:24 -04:00
dc96a3e79d Fix random number generator
Passing in seed=0 is not equivalent to seed=None. The latter will get a new seed from entropy in the OS, and that's what we should be using.
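
The distinction, sketched with numpy (the same contract holds for most RNG APIs):

```
import numpy as np

fixed = np.random.default_rng(0)     # 0 is a real seed: same stream every run
fresh = np.random.default_rng(None)  # None pulls fresh entropy from the OS
print(fixed.integers(1 << 32))       # reproducible
print(fresh.integers(1 << 32))       # differs on each invocation
```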
2023-08-06 00:29:08 +10:00
c076f1397e rebuild frontend 2023-08-05 14:40:42 +10:00
2568aafc0b bump version number so that pip updates work 2023-08-05 14:40:42 +10:00
65ed224bfc Merge branch 'main' into refactor/model_manager_instantiate 2023-08-04 21:34:38 -07:00
b6e369c745 chore: black 2023-08-05 12:28:35 +10:00
ecabfc252b devices.py - Update MPS FP16 check to account for upcoming MacOS Sonoma
float16 doesn't seem to work on MacOS Sonoma due to further changes with Metal. This'll default back to float32 for Sonoma users.
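
A hedged sketch of such a check (the version parsing is illustrative; the
real helper lives in `devices.py`):

```
import platform
import torch

def mps_dtype() -> torch.dtype:
    # fall back to float32 on macOS 14 (Sonoma), where float16 on MPS
    # is reported broken; older releases keep float16
    major = platform.mac_ver()[0].split(".")[0]
    if major and int(major) >= 14:
        return torch.float32
    return torch.float16
```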
2023-08-05 12:28:35 +10:00
da96a41103 Merge branch 'main' into feat/select-vram-in-config 2023-08-05 12:11:50 +10:00
d162b78767 fix broken civitai example link 2023-08-05 12:10:52 +10:00
eb6c317f04 chore: black 2023-08-05 12:05:24 +10:00
6d7223238f fix: fix typo in message 2023-08-05 12:05:24 +10:00
8607d124c5 improve message about the consequences of the --ignore_missing_core_models flag 2023-08-05 12:05:24 +10:00
23497bf759 add --ignore_missing_core_models CLI flag to bypass checking for missing core models 2023-08-05 12:05:24 +10:00
b10cf20eb1 Merge branch 'main' into refactor/model_manager_instantiate
# Conflicts:
#	invokeai/backend/model_management/model_manager.py
2023-08-04 18:28:18 -07:00
3d93851dba Installer should download fp16 models if user has specified 'auto' in config (#4129)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

At install time, when the user's config specified "auto" precision, the
installer was downloading the fp32 models even when an fp16 model would
be appropriate for the OS.
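
Roughly, with diffusers (the model id and the auto rule are illustrative):

```
import torch
from diffusers import StableDiffusionPipeline

# when precision is "auto", prefer the fp16 variant on CUDA systems
use_fp16 = torch.cuda.is_available()
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if use_fp16 else torch.float32,
    variant="fp16" if use_fp16 else None,
)
```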


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes #4127
2023-08-05 01:56:25 +03:00
9bacd77a79 Merge branch 'main' into bugfix/fp16-models 2023-08-05 01:42:43 +03:00
1b158f62c4 resolve vae overrides correctly 2023-08-04 18:24:47 -04:00
6ad565d84c folded in changes from 4099 2023-08-04 18:24:47 -04:00
04229082d6 Provide ti name from model manager, not from ti itself 2023-08-04 18:24:47 -04:00
03c27412f7 [WIP] Add sdxl lora support (#4097)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Add lora loading for sdxl.
NOT TESTED - I ran only 2 LoRAs; please check more (including LyCORIS, if
they already exist).

## QA Instructions, Screenshots, Recordings
https://civitai.com/models/118536/voxel-xl

![image](https://github.com/invoke-ai/InvokeAI/assets/7768370/76a6abff-cb0a-43b4-b779-a0b0e5b46e56)


## Added/updated tests?

- [ ] Yes
- [x] No
2023-08-04 16:12:22 -04:00
f0613bb0ef Fix merge conflict resolution - restore full/diff layer support 2023-08-04 19:53:27 +03:00
0e9f92b868 Merge branch 'main' into feat/sdxl_lora 2023-08-04 19:22:13 +03:00
7d0cc6ec3f chore: black 2023-08-05 02:04:22 +10:00
2f8b928486 Add support for diff/full lora layers 2023-08-05 02:04:22 +10:00
0d3c27f46c Fix typo
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-08-04 11:44:56 -04:00
cff91f06d3 Add lora apply in sdxl l2l node 2023-08-04 11:44:56 -04:00
1d5d187ba1 model probe detects sdxl lora models 2023-08-04 11:44:56 -04:00
1ac14a1e43 add sdxl lora support 2023-08-04 11:44:56 -04:00
cfc3a20565 autoAddBoardId should always be defined 2023-08-04 22:19:11 +10:00
05ae4e283c Stop checking for unet/model.onnx when a model_index.json is detected (#4132)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-03 22:10:37 -04:00
f06fee4581 Merge branch 'main' into remove-onnx-model-check-from-pipeline-download 2023-08-03 22:02:05 -04:00
9091e19de8 Add execution stat reporting after each invocation (#4125)
## What type of PR is this? (check all applicable)

- [X] Feature


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No

## Description

This PR adds execution time and VRAM usage reporting to each graph
invocation. The log output will look like this:

```
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> Graph stats: c7764585-9c68-4d9d-a199-55e8186790f3                                                                                              
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> Node                 Calls  Seconds  VRAM Used                                                                                                 
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> main_model_loader        1   0.005s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> clip_skip                1   0.004s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> compel                   2   0.512s     0.26G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> rand_int                 1   0.001s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> range_of_size            1   0.001s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> iterate                  1   0.001s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> metadata_accumulator     1   0.002s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> noise                    1   0.002s     0.01G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> t2l                      1   3.541s     1.93G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> l2i                      1   0.679s     0.58G                                                                                                  
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> TOTAL GRAPH EXECUTION TIME:  4.749s                                                                                                            
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> Current VRAM utilization 0.01G                                                                                                                 
```
On systems without CUDA, the VRAM stats are not printed.

The current implementation keeps track of graph ids separately, so it will
not be confused when several graphs are executing in parallel. It handles
exceptions, and it is integrated into the app framework by defining an
abstract base class and storing an implementation instance in
`InvocationServices`.
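
A minimal sketch of the per-node collection (illustrative; the PR's
`InvocationStatsServiceBase` is more involved):

```
import time
from collections import defaultdict
from contextlib import contextmanager

import torch

stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0, "vram_gb": 0.0})

@contextmanager
def track(node_type: str):
    # time one node invocation and record peak VRAM where CUDA exists
    start = time.time()
    try:
        yield
    finally:
        entry = stats[node_type]
        entry["calls"] += 1
        entry["seconds"] += time.time() - start
        if torch.cuda.is_available():
            entry["vram_gb"] = max(entry["vram_gb"],
                                   torch.cuda.max_memory_allocated() / 2**30)
```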
2023-08-03 20:05:21 -04:00
0a0b7141af Merge branch 'main' into feat/execution-stats 2023-08-03 19:49:00 -04:00
1deca89fde Merge branch 'main' into feat/select-vram-in-config 2023-08-03 19:27:58 -04:00
446fb4a438 blackify 2023-08-03 19:24:23 -04:00
ab5d938a1d use variant instead of revision 2023-08-03 19:23:52 -04:00
9942af756a Merge branch 'main' into remove-onnx-model-check-from-pipeline-download 2023-08-03 10:10:51 -04:00
06742faca7 Merge branch 'feat/execution-stats' of github.com:invoke-ai/InvokeAI into feat/execution-stats 2023-08-03 08:48:05 -04:00
d2bddf7f91 tweak formatting to accommodate longer runtimes 2023-08-03 08:47:56 -04:00
91ebf9f76e Merge branch 'main' into refactor/model_manager_instantiate 2023-08-02 19:01:21 -07:00
bf94412d14 feat: add multi-select to gallery
multi-select actions include:
- drag to board to move all to that board
- right click to add all to board or delete all

backend changes:
- add routes for changing board for list of image names, deleting list of images
- change image-specific routes to `images/i/{image_name}` to not clobber other routes (like `images/upload`, `images/delete`)
- subclass pydantic `BaseModel` as `BaseModelExcludeNull`, which excludes null values when calling `dict()` on the model. this fixes inconsistent types related to JSON parsing null values into `null` instead of `undefined`
- remove `board_id` from `remove_image_from_board`

frontend changes:
- multi-selection stuff uses `ImageDTO[]` as payloads, for dnd and other mutations. this gives us access to image `board_id`s when hitting routes, and enables efficient cache updates.
- consolidate change board and delete image modals to handle single and multiples
- board totals are now re-fetched on mutation and not kept in sync manually - was way too tedious to do this
- fixed warning about nested `<p>` elements
- closes #4088, need to handle case when `autoAddBoardId` is `"none"`
- add option to show gallery image delete button on every gallery image

frontend refactors/organisation:
- make typegen script js instead of ts
- enable `noUncheckedIndexedAccess` to help avoid bugs when indexing into arrays, many small changes needed to satisfy TS after this
- move all image-related endpoints into `endpoints/images.ts`; it's a big file now, but this fixes a number of circular dependency issues that otherwise felt impossible to resolve
2023-08-03 11:46:59 +10:00
e080fd1e08 blackify 2023-08-03 11:25:20 +10:00
eeef1e08f8 restore ability to convert merged inpaint .safetensors files 2023-08-03 11:25:20 +10:00
b3b94b5a8d use correct prop 2023-08-03 11:01:21 +10:00
5c9787c145 add project-id header to requests 2023-08-03 11:01:21 +10:00
cf72eba15c Merge branch 'main' into feat/execution-stats 2023-08-03 10:53:25 +10:00
a6f9396a30 fix(db): retrieve metadata even when no session_id
this was unnecessarily skipped if there was no `session_id`.
2023-08-03 10:43:44 +10:00
118d5b387b deploy: refactor github workflows
Currently we use some workflow trigger conditionals to run either a real test workflow (installing the app and running it) or a fake workflow, disguised as the real one, that just auto-passes.

This change refactors the workflow to use a single workflow that can be skipped, using another github action to determine which things to run depending on the paths changed.
2023-08-03 10:32:50 +10:00
02d2cc758d Merge branch 'main' into refactor/model_manager_instantiate 2023-08-02 17:11:23 -07:00
db545f8801 chore: move PR template to .github/ dir (#4060)
## What type of PR is this? (check all applicable)

- [x] Refactor

## Have you discussed this change with the InvokeAI team?
- [x] No, because it's pretty minor

      
## Have you updated all relevant documentation?
- [x] No


## Description

This PR just moves the PR template to within the `.github/` directory
leading to a overall minimal project structure.

## Added/updated tests?

- [x] No : because this change doesn't affect or need a separate test
2023-08-03 10:08:17 +10:00
b0d72b15b3 Merge branch 'main' into patch-1 2023-08-03 10:04:47 +10:00
4e0949fa55 fix .swap() by reverting improperly merged @classmethod change 2023-08-03 10:00:43 +10:00
f028342f5b Merge branch 'main' into patch-1 2023-08-03 10:00:10 +10:00
7021467048 (ci) do not install all dependencies when running static checks (#4036)
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-08-02 23:46:02 +00:00
26ef5249b1 guard board switching in board context menu 2023-08-03 09:18:46 +10:00
87424be95d block auto add board change during generation. Switch condition to isProcessing 2023-08-03 09:18:46 +10:00
366952f810 fix localization 2023-08-03 09:18:46 +10:00
450e95de59 auto change board waiting for isReady 2023-08-03 09:18:46 +10:00
0ba8a0ea6c Board assignment changing on click 2023-08-03 09:18:46 +10:00
f4981f26d5 Merge branch 'main' into bugfix/fp16-models 2023-08-02 19:17:55 -04:00
6bc21984c6 Merge branch 'main' into feat/select-vram-in-config 2023-08-02 19:12:43 -04:00
43d6312587 Merge branch 'main' into feat/execution-stats 2023-08-02 19:12:08 -04:00
0d125bf3e4 chore: delete nonfunctional shell.nix
This was for v2.3 and is very broken. See `flake.nix`, thanks to @zopieux
2023-08-03 09:09:40 +10:00
921ccad04d added stats service to the cli_app startup 2023-08-02 18:41:43 -04:00
05c9207e7b Merge branch 'feat/execution-stats' of github.com:invoke-ai/InvokeAI into feat/execution-stats 2023-08-02 18:31:33 -04:00
3fc789a7ee fix unit tests 2023-08-02 18:31:10 -04:00
008362918e Merge branch 'main' into feat/execution-stats 2023-08-02 18:15:51 -04:00
8fc75a71ee integrate correctly into app API and add features
- Create abstract base class InvocationStatsServiceBase
- Store InvocationStatsService in the InvocationServices object
- Collect and report stats on simultaneous graph execution
  independently for each graph id
- Track VRAM usage for each node
- Handle cancellations and other exceptions gracefully
2023-08-02 18:10:52 -04:00
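
The abstract base class described in the commit above, as a skeletal sketch; only `InvocationStatsServiceBase` is named in the source, so the method names here are assumptions:

```
from abc import ABC, abstractmethod


class InvocationStatsServiceBase(ABC):
    """Collects per-node stats (calls, seconds, VRAM) for each graph id."""

    @abstractmethod
    def reset_stats(self, graph_id: str) -> None:
        """Discard accumulated stats, e.g. after a cancellation or exception."""

    @abstractmethod
    def update_invocation_stats(
        self, graph_id: str, node_type: str, seconds: float, vram_gb: float
    ) -> None:
        """Record one node invocation's runtime and VRAM usage."""

    @abstractmethod
    def log_stats(self, graph_id: str) -> None:
        """Emit the per-node stats table after the graph finishes."""
```
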
82d259f43b Merge branch 'main' into remove-onnx-model-check-from-pipeline-download 2023-08-02 16:35:46 -04:00
ec48779080 blackify 2023-08-02 14:28:19 -04:00
bc20fe4cb5 Merge branch 'main' into feat/select-vram-in-config 2023-08-02 14:27:17 -04:00
5de42be4a6 reduce VRAM cache default; take max RAM from system 2023-08-02 14:27:13 -04:00
818c55cd53 Refactor/cleanup root detection (#4102)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: invisible change

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

There was a problem in 3.0.1 with root resolution. If INVOKEAI_ROOT were
set to "." (or any relative path), then the location of root would
change if the code did an os.chdir() after config initialization. I
fixed this in a quick and dirty way for 3.0.1.post3.

This PR cleans up the code with a little refactoring.
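
The failure mode is easy to reproduce; a minimal illustration, not the project's actual config code:

```
import os
from pathlib import Path

os.environ["INVOKEAI_ROOT"] = "."

# Fragile: a relative root is reinterpreted every time it is used,
# so an os.chdir() after config initialization silently moves it.
lazy_root = Path(os.environ["INVOKEAI_ROOT"])

# Robust: resolve to an absolute path once, at config-initialization time.
pinned_root = Path(os.environ["INVOKEAI_ROOT"]).expanduser().resolve()

os.chdir("/tmp")
print(lazy_root.resolve())  # now under /tmp, moved out from under us
print(pinned_root)          # still the original working directory
```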

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-02 10:36:12 -04:00
0db1e97119 Merge branch 'main' into refactor/cleanup-root-detection 2023-08-02 09:46:46 -04:00
29ac252501 blackify 2023-08-02 09:44:06 -04:00
880727436c fix default vram cache size calculation 2023-08-02 09:43:52 -04:00
77c5c18542 add slider for VRAM cache 2023-08-02 09:11:24 -04:00
ed76250dba Stop checking for unet/model.onnx when a model_index.json is detected 2023-08-02 07:21:21 -04:00
4d22cafdad Installer should download fp16 models if user has specified 'auto' in config
- Closes #4127
2023-08-01 22:06:27 -04:00
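
A plausible sketch of the 'auto' decision in the commit above; the function name and the exact device policy are assumptions:

```
import torch


def choose_precision(device: torch.device, configured: str = "auto") -> str:
    """Hypothetical sketch: honor an explicit setting, otherwise prefer
    fp16 model downloads on CUDA hardware and fall back to fp32."""
    if configured != "auto":
        return configured
    return "float16" if device.type == "cuda" else "float32"


# e.g. choose_precision(torch.device("cuda")) -> "float16"
```
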
1f9e984b0d Merge branch 'main' into refactor/model_manager_instantiate 2023-08-01 16:49:39 -07:00
8a4e5f73aa reset stats on exception 2023-08-01 19:39:42 -04:00
4599575e65 fix(ui): use const for wsProtocol, lint 2023-08-02 09:26:20 +10:00
242d860a47 fix https/wss behind reverse proxy 2023-08-02 09:26:20 +10:00
0c1a7e72d4 Fix manual installation documentation (#4107)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: obvious problem

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

The manual installation documentation in both README.md and
020_MANUAL_INSTALL give an incomplete `invokeai-configure` command which
leaves out the path to the root directory to create. As a result, the
invokeai root directory gets created in the user’s home directory, even
if they intended it to be placed somewhere else.

This is a fairly important issue.
2023-08-01 18:55:53 -04:00
11a44b944d fix installation documentation 2023-08-01 18:52:17 -04:00
fd7b842419 add execution stat reporting after each invocation 2023-08-01 17:44:09 -04:00
5998509888 Merge branch 'main' into refactor/model_manager_instantiate 2023-08-01 11:09:43 -07:00
403a6e88f2 fix: flake: add opencv with CUDA, new patchmatch dependency. 2023-08-01 23:56:41 +10:00
c9d452b9d4 fix: Model Manager Tab Issues (#4087)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [x] Bug Fix
- [?] Optimization


## Have you discussed this change with the InvokeAI team?
- [x] No

     
## Description

- Fixed filter type select using `images` instead of `all` -- Probably
some merge conflict.
- Added a loading state for model lists. Handy when the model list takes
longer than a second to fetch for any reason. Better to show this than
an empty screen.
- Refactored the render model list function so we modify the display
component in one area rather than have repeated code.

### Other Issues

- The list can get a bit laggy on initial load when you have hundreds
of models / loras. This needs to be fixed. Will make another PR for
this.
2023-08-02 01:06:53 +12:00
dcc274a2b9 feat: Make ModelListWrapper instead of rendering conditionally 2023-08-01 22:50:10 +10:00
f404669831 fix: Rename loading vars for consistency 2023-08-01 22:42:05 +10:00
ce687b28ef fix: Model Manager Tab Issues 2023-08-01 22:41:32 +10:00
7292d89108 Merge branch 'main' into refactor/cleanup-root-detection 2023-08-01 22:14:56 +10:00
41d6a38690 Update lint-frontend.yml
The action should run on every PR. We can make this more efficient in the future.
2023-08-01 22:10:56 +10:00
fb8f218901 fix(ui): post-onnx fixes 2023-08-01 07:59:01 -04:00
437f45a97f do not depend on existence of /tmp directory 2023-08-01 00:41:35 -04:00
13ef33ed64 Merge branch 'refactor/cleanup-root-detection' of github.com:invoke-ai/InvokeAI into refactor/cleanup-root-detection 2023-08-01 00:19:55 -04:00
86d8b46fca Merge branch 'main' into refactor/cleanup-root-detection 2023-08-01 00:14:26 -04:00
e86925d424 Add onnxruntime to the main dependencies 2023-08-01 00:03:10 -04:00
df53b62048 get rid of dangling debug statements 2023-07-31 22:39:11 -04:00
55d3f04476 additional refactoring 2023-07-31 22:36:11 -04:00
72ebe2ce68 refactor root directory detection to be cleaner 2023-07-31 22:30:06 -04:00
7cd8b2f207 Refactor root detection code 2023-07-31 21:15:44 -04:00
52437205bb chore(ui): lint 2023-08-01 08:54:03 +10:00
ceebb501a4 try named export 2023-08-01 08:54:03 +10:00
cbe874b964 add chakra as peer dep 2023-08-01 08:54:03 +10:00
e2e5918ee2 export theme and move chakra to peer dep 2023-08-01 08:54:03 +10:00
1b131e328a add optional projectId - unused so far 2023-08-01 08:54:03 +10:00
81654daed7 ONNX Support (#3562)
Note: this branch based on #3548, not on main

While finding out what needed to be done to implement ONNX, I found that I
could do a draft of it pretty quickly, so... here it is)
Supports LoRA and TI.
As example - cat with sadcatmeme lora:

![image](https://github.com/invoke-ai/InvokeAI/assets/7768370/dbd1a5df-0629-4741-94b3-8e09f4b4d5a3)

![image](https://github.com/invoke-ai/InvokeAI/assets/7768370/d918836c-fdc7-43c0-aa81-dde9182f2e0f)
2023-07-31 17:34:27 -04:00
746afcd235 Merge branch 'main' into feat/onnx 2023-07-31 16:56:34 -04:00
ae0f4efcca Add missing Optional on a few nullable fields (#4076)
## What type of PR is this? (check all applicable)

- [x] Refactor

## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: trivial

## Description

Adds a few obviously missing `Optional` on fields that default to
`None`.
2023-07-31 16:56:10 -04:00
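
The pattern in question, sketched with a made-up model:

```
from typing import Optional

from pydantic import BaseModel


class ExampleConfig(BaseModel):
    # Before: an annotation like `session_id: str = None` claims str while
    # defaulting to None, which type checkers flag as inconsistent.
    session_id: Optional[str] = None  # After: annotation matches the default
```
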
23647336ce Merge branch 'main' into fix-optional 2023-07-31 16:55:57 -04:00
4ca54dd5fa Added a getting started guide & updated the user landing page flow (#4028)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: Just a documentation update

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Updated documentation with a getting started guide & a glossary of terms
needed to get started
Updated the landing page flow for users 

<img width="1430" alt="Screenshot 2023-07-27 at 9 53 25 PM"
src="https://github.com/invoke-ai/InvokeAI/assets/7254508/d0006ba7-2ed4-4044-a1bc-ca9a99df9397">

## Related Tickets & Documents

<!--
For pull requests that relate or
 close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-31 16:55:25 -04:00
d3a3067164 Merge branch 'main' into main 2023-07-31 16:54:48 -04:00
aeac557c41 Run python black, point out that onnx is an alpha feature in the installer 2023-07-31 16:47:48 -04:00
af4fd328a6 Merge branch 'main' into feat/onnx 2023-07-31 16:45:12 -04:00
c40c7424b6 Merge branch 'main' into fix-optional 2023-07-31 15:59:12 -04:00
a6b907150b Add python black check to pre-commit (#4094)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-31 15:58:20 -04:00
bacdf985f1 doc(model_manager): docstrings 2023-07-31 09:16:32 -07:00
e3519052ae Merge remote-tracking branch 'origin/main' into refactor/model_manager_instantiate 2023-07-31 08:46:09 -07:00
b0e84c6497 Add python black check to pre-commit 2023-07-31 11:42:08 -04:00
f784e8412c Some cleanup after the merge 2023-07-31 11:23:43 -04:00
1bafbafdd3 Regen schema and rebuild frontend after merging main 2023-07-31 11:02:15 -04:00
f5ac73b091 Merge branch 'main' into feat/onnx 2023-07-31 10:58:40 -04:00
eb642653cb Add Nix Flake for development, which uses Python virtualenv. 2023-07-31 19:14:30 +10:00
2c07f54b6e Merge branch 'main' into fix-optional 2023-07-31 16:31:01 +10:00
0691e0a12a Few modifications to getting started doc 2023-07-31 15:35:20 +10:00
79afcbd07e Merge branch 'main' of https://github.com/invoke-ai/InvokeAI 2023-07-31 14:19:37 +10:00
f4ead5e07f fix keyerror bug that was causing merge script to crash 2023-07-30 19:25:44 -04:00
6d24ca7f52 3.0.1post3 (#4082)
This is a relatively stable release that corrects the urgent windows
install and model manager problems in 3.0.1. It still has two known
bugs:

1. Many inpainting models are not loading correctly.
2. The merge script is failing to start.
2023-07-30 18:03:35 -04:00
2164da8592 blackify 2023-07-30 16:25:06 -04:00
adfd1e52f4 refactor(model_manager): avoid copy/paste logic 2023-07-30 11:53:12 -07:00
0e48c98330 Merge remote-tracking branch 'origin/main' into refactor/model_manager_instantiate
# Conflicts:
#	invokeai/backend/model_management/model_manager.py
2023-07-30 11:33:13 -07:00
4121c261a0 fix missing models when INVOKEAI_ROOT="." 2023-07-30 13:37:18 -04:00
99823d5039 more fixes to update and install 2023-07-30 11:57:06 -04:00
0abceb0e7b Merge branch 'main' of github.com:invoke-ai/InvokeAI 2023-07-30 11:08:27 -04:00
83d3f2347e fix "unrecognized arguments: --yes" bug on unattended upgrade 2023-07-30 11:07:06 -04:00
73e25d8dbe Update communityNodes.md
- Remove FaceMask and add a link to the FaceTools repository, which includes FaceMask, FaceOff, and FacePlace
- Move Ideal Size up from under the template entry
2023-07-30 10:59:56 -04:00
50e00feceb Add missing Optional on a few nullable fields. 2023-07-30 16:25:12 +02:00
03594c949a blackified 2023-07-30 10:18:39 -04:00
adb85036e6 dependency tweaks to avoid installing/uninstalling pkgs 2023-07-30 10:17:04 -04:00
7d7a9273ed Merge branch 'main' of github.com:invoke-ai/InvokeAI 2023-07-30 09:19:14 -04:00
f17ad227cf fix relative model paths to be against config.models_path, not root (#4061)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes - bug discovered by jpphoto
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] Not needed

## Description

The user can customize the location of the models directory by setting
configuration variable `models_dir`. However, the model manager and the
TUI installer were all treating model relative paths as relative to the
invokeai root rather than the designated models directory. This has been
fixed by changing path resolution calls from using `config.root_path` to
`config.models_path`

Unfortunately there were many instances that needed replacement, so this
needs a bit of functional testing - try adding models, removing models,
renaming them, converting checkpoints, etc.
2023-07-30 08:51:35 -04:00
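
The essence of the change above, as a sketch; the helper name is an assumption:

```
from pathlib import Path


def resolve_model_path(config, model_path: str) -> Path:
    """Hypothetical sketch: relative model paths resolve against the
    configured models directory, not the invokeai root."""
    p = Path(model_path)
    if p.is_absolute():
        return p
    return config.models_path / p  # before the fix: config.root_path / p
```
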
f91d01eb38 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-30 08:25:37 -04:00
adfcb610b6 Installer tweaks (#4070)
## What type of PR is this? (check all applicable)


- [x] Optimization

## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

This PR does two things:

1. if the environment variable INVOKEAI_ROOT is defined at install time,
the zipfile installer will default to its value when asking the user
where to install the software
2. If the user has more than 72 models of any type installed, then the
list will be truncated in the TUI and the user given a warning. Anything
larger than this number of models causes the vertical space to overflow.
The only effect of truncation is that the user will not be able to see
and delete the models that were truncated. The message advises the user
to go to the Web Model Manager interface in this event.
2023-07-30 08:25:11 -04:00
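
A sketch of the truncation guard described above; the constant's name is an assumption, the 72 comes from the PR text:

```
MAX_TUI_MODELS = 72  # more than this overflows the TUI's vertical space


def models_for_tui(models: list[str]) -> tuple[list[str], bool]:
    """Hypothetical sketch: cap the checkbox list and signal truncation
    so the installer can point the user at the Web Model Manager."""
    if len(models) > MAX_TUI_MODELS:
        return models[:MAX_TUI_MODELS], True
    return models, False
```
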
cafcd16657 Merge branch 'main' into install/tui-tweaks 2023-07-30 08:19:45 -04:00
2537ff0280 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-30 08:17:36 -04:00
0f5f08e494 Merge branch 'bugfix/model-manager-rel-paths' of github.com:invoke-ai/InvokeAI into bugfix/model-manager-rel-paths 2023-07-30 08:17:21 -04:00
e20c4dc1e8 blackified 2023-07-30 08:17:10 -04:00
6dc4ddef1b Fix various bugs in ckpt to diffusers conversion script (#4065)
## What type of PR is this? (check all applicable)


- [x] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

This PR fixes several issues with the 3.0.0 conversion script:

- Handles checkpoint variants that don't put dots between fields in the
long state dict key names
- Handles ema, non-ema, pruned and non-pruned ckpts
- Does not add safety checker to converted checkpoints
- Respects precision of original checkpoint file
2023-07-30 08:16:37 -04:00
26af5ec341 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-30 08:08:17 -04:00
10b182f316 Merge branch 'main' into bugfix/convert-script 2023-07-30 08:07:51 -04:00
ac84a9f915 reenable display of autoloaded models 2023-07-30 08:05:05 -04:00
844578ab88 fix lora loading crash 2023-07-30 07:57:10 -04:00
ff1c40747e lint: formatting 2023-07-29 20:02:31 -07:00
dbfd1bcb5e Merge branch 'main' into refactor/model_manager_instantiate 2023-07-29 19:53:21 -07:00
444390617f rebuild front end 2023-07-29 22:00:16 -04:00
6cb40d9d7b bump version for hotfix 3.0.1post1 2023-07-29 21:58:57 -04:00
ca895a9cd0 Unpin pydantic and numpy in pyproject.toml (#4062)
## What type of PR is this? (check all applicable)

- [x] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] Not needed

## Description

Windows users have been getting a lot of OSErrors while installing 3.0.1
during the pip dependency installation phase. Generally the errors have
involved just two packages, pydantic and numpy. Looking at the install
logs, I see that both of these packages are first installed under one
version number by a dependency, and then uninstalled and replaced by a
slightly different version specified in invoke's `pyproject.toml`. I
think this is the problem - maybe the earlier package is not completely
closed before it is uninstalled and reinstalled.

This PR relaxes pinning of numpy and pydantic in `pyproject.toml`.
Everything seems to install and run properly. Hopefully it will address
the windows install bug as well.
2023-07-29 21:57:21 -04:00
7d27c7b1a4 Merge branch 'main' into lstein/no-pydantic-in-pyproject 2023-07-29 21:47:16 -04:00
6c82229910 fix recovery recipe 2023-07-29 20:00:06 -04:00
43b1eb8e84 wording changes 2023-07-29 19:49:58 -04:00
b10b07220e blackify code 2023-07-29 19:20:20 -04:00
c2eb50d1cd make installer use initial INVOKEAI_ROOT as default install location 2023-07-29 19:19:42 -04:00
73f3b7f84b remove dangling comment 2023-07-29 17:32:33 -04:00
bb18251fad Merge branch 'bugfix/convert-script' of github.com:invoke-ai/InvokeAI into bugfix/convert-script 2023-07-29 17:31:02 -04:00
348bee8981 blackified 2023-07-29 17:30:54 -04:00
078b33bda2 Merge branch 'main' into bugfix/convert-script 2023-07-29 17:30:40 -04:00
e82eb0b9fc add correct optional annotation to precision arg 2023-07-29 17:30:21 -04:00
ad976e5198 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-29 17:27:16 -04:00
0e28961e69 Merge branch 'main' into lstein/no-pydantic-in-pyproject 2023-07-29 17:27:02 -04:00
6ce059f063 blackified again 2023-07-29 17:26:40 -04:00
1de783b1ce fix mistake in indexing flat_ema_key 2023-07-29 17:20:26 -04:00
3f9105be50 make convert script respect setting of use_ema in config file 2023-07-29 17:17:45 -04:00
781322a647 installer respects INVOKEAI_ROOT for default root location 2023-07-29 16:16:44 -04:00
9a1cfadd8b fix: SDXL Metadata not being retrieved (#4057)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Description

- SDXL Metadata was not being retrieved. This PR fixes it.
2023-07-29 15:37:02 -04:00
2a2d988928 convert script handles more ckpt variants 2023-07-29 15:28:39 -04:00
ccceb32a85 lint: formatting 2023-07-29 11:50:04 -07:00
72c519c6ad fix incorrect key construction 2023-07-29 13:51:47 -04:00
af12f67948 Merge branch 'lstein/no-pydantic-in-pyproject' of github.com:invoke-ai/InvokeAI into lstein/no-pydantic-in-pyproject 2023-07-29 13:28:38 -04:00
60f5606c2d downgrade torchmetrics to fix model import problem 2023-07-29 13:28:29 -04:00
24b19166dd further refactoring 2023-07-29 13:13:22 -04:00
0396bce4f9 Merge branch 'main' into lstein/no-pydantic-in-pyproject 2023-07-29 13:06:30 -04:00
71768f5988 restore unpinned versions of pydantic and numpy 2023-07-29 13:04:34 -04:00
0fb7328022 blackify code 2023-07-29 13:00:43 -04:00
99daa97978 more refactoring; fixed place where rel conversion missed 2023-07-29 13:00:07 -04:00
21617e60e1 Merge remote-tracking branch 'origin/main' into refactor/model_manager_instantiate 2023-07-29 08:21:26 -07:00
982a568349 blackify pr 2023-07-29 10:47:55 -04:00
d79d5a4ff7 modest refactoring 2023-07-29 10:45:26 -04:00
9968ff2893 fix relative model paths to be against config.models_path, not root 2023-07-29 10:30:27 -04:00
35dd58e273 chore: move PR template to .github/ dir 2023-07-29 12:59:56 +05:30
6d82a1019a fix: Black linting 2023-07-29 17:34:43 +12:00
6ed1bf7084 Merge branch 'main' into metadata-fix 2023-07-29 17:33:30 +12:00
974175be45 fix: Prompt Node using incorrect output type (#4058)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-29 17:32:10 +12:00
86b8b69e88 internal(ModelManager): add instantiate method 2023-07-28 22:30:25 -07:00
bc9a5038fd refactor(ModelManager): factor out get_model_path 2023-07-28 22:29:36 -07:00
bee678fdd1 fix: Prompt Node using incorrect output type 2023-07-29 17:12:25 +12:00
c5caf1e8fe fix: SDXL Metadata not being retrieved 2023-07-29 17:03:19 +12:00
b163ae6a4d refactor(ModelManager): factor out get_model_config 2023-07-28 21:30:20 -07:00
dca685ac25 refactor(ModelManager): refactor rescan-on-miss to exists() method 2023-07-28 21:11:00 -07:00
72708eb53c Feat/Nodes: Change Input to Textbox (#3853)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:
not yet, making pr to show
      
## Have you updated relevant documentation?
- [ ] Yes
- [ ] No


## Description
Temporarily change the Node String input to a textbox, to allow easier input of
prompts and larger strings. It works for me, but please tell me if I did
it wrong and if the size is ok

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-29 16:10:32 +12:00
aae1670080 fix: Incorrect Prompt Node output type 2023-07-29 16:04:19 +12:00
e70bedba7d refactor(ModelManager): factor out _get_implementation method 2023-07-28 21:03:27 -07:00
1e776d2523 chore: Regen types 2023-07-29 15:59:52 +12:00
8e06e6abbc feat: Update 'style' string input to also display text area 2023-07-29 15:52:59 +12:00
8a0e1b6cfc feat: Create Prompt Input Node 2023-07-29 15:52:37 +12:00
2d9bc79ca4 Merge branch 'main' into nodepromptsize 2023-07-29 12:43:29 +10:00
6886eb094d Make more Simple 2023-07-29 12:40:17 +10:00
6ca0c38ee3 Merge branch 'main' into feat/onnx 2023-07-28 22:06:28 -04:00
d633eb1612 remove pydantic and numpy from pyproject.toml 2023-07-28 21:56:22 -04:00
1bbf2f269d Update installer 2023-07-28 21:02:48 -04:00
ac22652686 rebuild front end 2023-07-28 18:21:05 -04:00
77cfec5cc8 Release 3.0.1 release candidate 3 (#4025)
Branch used to rebuild front end and make minor doc changes during
release process. Merge before next release.
2023-07-28 18:03:44 -04:00
3e4420c1ae bugfix: Float64 error for mps devices on set_timesteps (#4040)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: minor fix, let me know your thoughts

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No

## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue # https://github.com/invoke-ai/InvokeAI/issues/4017
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : Requires mps device

## [optional] Are there any post deployment tasks we need to perform?

Please test on an MPS (M1/M2) device. 

Relevant code causing the error in #4017 


01b6ec21fa/src/diffusers/schedulers/scheduling_euler_discrete.py (L263C3-L268C75)

```
        self.sigmas = torch.from_numpy(sigmas).to(device=device)
        if str(device).startswith("mps"):
            # mps does not support float64
            self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32)
        else:
            self.timesteps = torch.from_numpy(timesteps).to(device=device)
```
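
Given that upstream fallback, the caller-side workaround is to pass the device through; roughly (a sketch, with `scheduler`, `unet` and `num_inference_steps` assumed in scope):

```
# Let the scheduler see the real device so its float32 fallback for MPS
# (quoted above) actually triggers, instead of hitting float64 later.
scheduler.set_timesteps(num_inference_steps, device=unet.device)
```
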
2023-07-28 18:02:39 -04:00
f8181ab1b3 fix: Concat Link Styling (#4048)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Description

- Fix SDXL Concat Link animation not considering the fact that prompt
boxes can be resized.
- Also fixed a minor issue where the overlaying animation box would
block the prompt input resize slightly. Should be good now.
2023-07-28 18:02:22 -04:00
3dfeead9b8 Update troubleshooting guide with ~ydantic and SDXL unet issue advice (#4054)
## What type of PR is this? (check all applicable)


- [x] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes

      
## Have you updated all relevant documentation?
- [x] Yes

## Description

Added solutions for installation issues related to large SDXL files and
Windows dependency glitches.
2023-07-28 18:02:04 -04:00
d3f6c7f983 Remove onnxruntime 2023-07-28 16:58:06 -04:00
390ce9f249 Fix onnx installer 2023-07-28 16:54:03 -04:00
3da0be7eb9 update troubleshooting guide with ~ydantic and SDXL unet issue workarounds 2023-07-28 16:42:57 -04:00
8935ae0ea3 Fix issues caused by merge 2023-07-28 14:00:32 -04:00
31e5f4bb0e Merge branch 'main' into set-timestep-mps-fix 2023-07-28 08:58:12 -07:00
2164674b01 Black format 2023-07-28 07:49:29 -07:00
8f2a646286 fix: Lint errors 2023-07-29 02:37:59 +12:00
5ff4dd26bb fix: Concat Link Styling 2023-07-29 02:30:10 +12:00
e342ca872f fix to work on non-MPS systems 2023-07-28 10:27:49 -04:00
a2aa66f43a Run Python black 2023-07-28 10:00:09 -04:00
da751da3dd Merge branch 'main' into feat/onnx 2023-07-28 09:59:35 -04:00
2b7b3dd4ba Run python black 2023-07-28 09:46:44 -04:00
dc1148106d Just install onnxruntime by default 2023-07-28 09:32:43 -04:00
062a369044 feat: Unify Prompt Area Styling (#4033)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

Making the prompt area styling match across all tabs / models and moving
all prompt-related components into a parent component for quick add.

Cherry picked stuff from the Styles PR coz we ain't gonna merge that.


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-28 22:10:08 +12:00
e4a2f56ad1 feat(ui): tweak link colors
- make the `SDXLConcatLink` icon match existing colors in light mode
- make the link toggle button the accent color when active (it's not super obvious but at least there is *some* visual difference for the button)
2023-07-28 19:57:05 +10:00
1df30f7260 feat: Pulse Animate SDXL Concat Link 2023-07-28 20:45:39 +12:00
514722d67a Update definitions to be more accurate 2023-07-28 18:35:05 +10:00
5dbde2116f Merge branch 'invoke-ai:main' into main 2023-07-28 18:34:33 +10:00
14c4650801 fix: Lint Errors (returning possible null component) 2023-07-28 19:03:29 +12:00
f155b03eee feat: New animation for Concat Link 2023-07-28 18:55:59 +12:00
ddaf753f7b Merge branch 'set-timestep-mps-fix' of ssh://github.com/ZachNagengast/InvokeAI into set-timestep-mps-fix 2023-07-27 23:40:44 -07:00
e6d14c708c Fix variable name 2023-07-27 23:40:23 -07:00
7f81a95b20 Merge branch 'main' into set-timestep-mps-fix 2023-07-28 16:12:07 +10:00
6a49eec606 feat: Add Concat Link Animation
Might remove the lines. Just pushing to save changes for now.
2023-07-28 15:01:40 +12:00
fd67b18c9a Merge branch 'main' into unify-prompt 2023-07-28 14:48:13 +12:00
9affdbbaad chore: black 2023-07-28 11:38:52 +10:00
8d300bddd0 feat(ui): alias existing type for UpdateLoRAModelResponse 2023-07-28 11:38:52 +10:00
aa2c94be9e make LoRAs editable 2023-07-28 11:38:52 +10:00
4c79350300 persist LoRA model info in models.yaml 2023-07-28 11:38:52 +10:00
10e1d623c3 Add LoRAs to the model manager. 2023-07-28 11:38:52 +10:00
aa1f827271 Fix unet_info location, can have no device prop 2023-07-27 14:47:09 -07:00
fb113b9077 Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 16:24:29 -04:00
bb9460d278 Unify uvicorn and backend logging (#4034)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes

      
## Have you updated all relevant documentation?
- [x] Yes - this makes invokeai behave the way it is described in
LOGGING.md

## Description

Prior to this PR, the uvicorn embedded web server did all its logging
independently of the InvokeAI logging infrastructure, and none of the
command-line or `invokeai.yaml` configuration directives, such as
`log_level` had any effect on its output. This PR replaces the uvicorn
logger with InvokeAI's, simultaneously creating a more uniform output
experience, as well as a unified way to control the amount and
destination of the logs.

Here's what we used to see at startup:
```
[2023-07-27 07:29:48,027]::[InvokeAI]::INFO --> InvokeAI version 3.0.1rc2                                                                                                                               
[2023-07-27 07:29:48,027]::[InvokeAI]::INFO --> Root directory = /home/lstein/invokeai-main                                                                                                             
[2023-07-27 07:29:48,028]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 4070                                                                                                               
[2023-07-27 07:29:48,040]::[InvokeAI]::INFO --> Scanning /home/lstein/invokeai-main/models for new models                                                                                               
[2023-07-27 07:29:49,263]::[InvokeAI]::INFO --> Scanned 22 files and directories, imported 10 models                                                                                                    
[2023-07-27 07:29:49,271]::[InvokeAI]::INFO --> Model manager service initialized                                                                                                                       
INFO:     Application startup complete.                                                                                                                                                                 
INFO:     Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)                                                                                                                               
INFO:     127.0.0.1:44404 - "GET /socket.io/?EIO=4&transport=polling&t=OcN7Pvd HTTP/1.1" 200 OK                                                                                                         
INFO:     127.0.0.1:44404 - "POST /socket.io/?EIO=4&transport=polling&t=OcN7Pw6&sid=SB-NsBKLSrW7YtM0AAAA HTTP/1.1" 200 OK                                                                               
INFO:     ('127.0.0.1', 44418) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=SB-NsBKLSrW7YtM0AAAA" [accepted]                                                                                  
INFO:     connection open                                                                                                                                                                               
INFO:     127.0.0.1:44430 - "GET /socket.io/?EIO=4&transport=polling&t=OcN7Pw9&sid=SB-NsBKLSrW7YtM0AAAA HTTP/1.1" 200 OK                                                                                
INFO:     127.0.0.1:44404 - "GET /socket.io/?EIO=4&transport=polling&t=OcN7PwU&sid=SB-NsBKLSrW7YtM0AAAA HTTP/1.1" 200 OK                                                                                
INFO:     127.0.0.1:44404 - "GET /api/v1/images/?is_intermediate=true HTTP/1.1" 200 OK                                                                                                                  
INFO:     127.0.0.1:43448 - "GET / HTTP/1.1" 200 OK                                                                                                                                                     
INFO:     connection closed                                                                                                                                                                             
INFO:     127.0.0.1:43448 - "GET /assets/index-5a784cdd.js HTTP/1.1" 200 OK                                                                                                                             
INFO:     127.0.0.1:43458 - "GET /assets/favicon-0d253ced.ico HTTP/1.1" 304 Not Modified                                                                                                                
INFO:     127.0.0.1:43448 - "GET /locales/en.json HTTP/1.1" 200 OK                                                                                                                                      
```

And here's what we see with the new implementation:
```
[2023-07-27 12:05:28,810]::[uvicorn.error]::INFO --> Started server process [875161]
[2023-07-27 12:05:28,810]::[uvicorn.error]::INFO --> Waiting for application startup.
[2023-07-27 12:05:28,810]::[InvokeAI]::INFO --> InvokeAI version 3.0.1rc2
[2023-07-27 12:05:28,810]::[InvokeAI]::INFO --> Root directory = /home/lstein/invokeai-main
[2023-07-27 12:05:28,811]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 4070
[2023-07-27 12:05:28,823]::[InvokeAI]::INFO --> Scanning /home/lstein/invokeai-main/models for new models
[2023-07-27 12:05:29,970]::[InvokeAI]::INFO --> Scanned 22 files and directories, imported 10 models
[2023-07-27 12:05:29,977]::[InvokeAI]::INFO --> Model manager service initialized
[2023-07-27 12:05:29,980]::[uvicorn.error]::INFO --> Application startup complete.
[2023-07-27 12:05:29,981]::[uvicorn.error]::INFO --> Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2023-07-27 12:05:32,140]::[uvicorn.access]::INFO --> 127.0.0.1:45236 - "GET /socket.io/?EIO=4&transport=polling&t=OcO6ILb HTTP/1.1" 200
[2023-07-27 12:05:32,142]::[uvicorn.access]::INFO --> 127.0.0.1:45248 - "GET /socket.io/?EIO=4&transport=polling&t=OcO6ILb HTTP/1.1" 200
[2023-07-27 12:05:32,154]::[uvicorn.access]::INFO --> 127.0.0.1:45236 - "POST /socket.io/?EIO=4&transport=polling&t=OcO6ILr&sid=13O4-5uLx5eP-NuqAAAA HTTP/1.1" 200
[2023-07-27 12:05:32,168]::[uvicorn.access]::INFO --> 127.0.0.1:45252 - "POST /socket.io/?EIO=4&transport=polling&t=OcO6IM0&sid=0KRqxmh7JLc8t7wZAAAB HTTP/1.1" 200
[2023-07-27 12:05:32,171]::[uvicorn.error]::INFO --> ('127.0.0.1', 45264) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=0KRqxmh7JLc8t7wZAAAB" [accepted]
[2023-07-27 12:05:32,172]::[uvicorn.error]::INFO --> connection open
[2023-07-27 12:05:32,174]::[uvicorn.access]::INFO --> 127.0.0.1:45276 - "GET /socket.io/?EIO=4&transport=polling&t=OcO6IM3&sid=0KRqxmh7JLc8t7wZAAAB HTTP/1.1" 200

```

You can also divert everything to a file with a `invokeai.yaml` entry
like this:
```
  Logging:
    log_handlers:
    - file=/home/lstein/invokeai/logs/access_log
    log_format: plain
    log_level: info
```

This works with syslog and other log handlers as well.
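
The unification can be approximated with nothing but the stdlib; a rough sketch, not the PR's actual code:

```
import logging

invokeai_logger = logging.getLogger("InvokeAI")

# Route uvicorn's loggers through the same handlers so log_level,
# log_format and log_handlers from invokeai.yaml govern both.
for name in ("uvicorn.error", "uvicorn.access"):
    uv = logging.getLogger(name)
    uv.handlers = invokeai_logger.handlers
    uv.setLevel(invokeai_logger.level)
    uv.propagate = False
```
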
2023-07-27 15:56:42 -04:00
6edeb4e072 Pass device to set_timestep to avoid float64 error 2023-07-27 12:52:18 -07:00
2bb4e6d5aa Merge branch 'main' into feat/unify-logging 2023-07-27 15:48:06 -04:00
53028feb83 feat: Upgrade Diffusers to 0.19.0 (#4011)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

https://github.com/huggingface/diffusers/releases/tag/v0.19.0


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-27 15:39:02 -04:00
d983dd371c Merge branch 'diffusers-upgrade' of github.com:blessedcoolant/InvokeAI into diffusers-upgrade 2023-07-27 15:30:01 -04:00
17ee17a789 merge with main;resolve conflicts 2023-07-27 15:29:34 -04:00
6b3ec29480 Merge branch 'main' into diffusers-upgrade 2023-07-27 15:27:55 -04:00
4a30773d09 Merge branch 'main' into feat/unify-logging 2023-07-27 15:25:56 -04:00
006075483d Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 15:21:08 -04:00
1ea9ba84f5 Release session if applying ti or lora 2023-07-27 15:20:38 -04:00
64bd11541a Merge branch 'main' into feat/unify-logging 2023-07-27 15:20:07 -04:00
52bd29d484 Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 15:19:05 -04:00
41b13e83a5 Support Python 3.11 (#3966)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No

## Description

This updates InvokeAI's pyproject.toml to the minimum library versions
needed to support Python 3.11. It updates the installer to find and
allow for 3.11, and the documentation.

Between 3.10 and 3.11 there was a change to the handling of `enum`
interpolation into strings that caused the model manager to break. I
think I have fixed the places where this was a problem, but there may be
other instances in which this will cause problems. Please be alert for
errors involving `ModelType` or `BaseModelType`.
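
The 3.10 to 3.11 change in question, distilled (the class here is illustrative):

```
from enum import Enum


class ModelType(str, Enum):
    Main = "main"


# Python <= 3.10: f"{ModelType.Main}" -> "main" (str.__format__ is used)
# Python 3.11:    f"{ModelType.Main}" -> "ModelType.Main" (Enum.__format__ changed)
# Interpolating .value explicitly behaves the same on every version:
print(f"model type: {ModelType.Main.value}")  # -> "model type: main"
```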

I also took the opportunity to add a `SilenceWarnings()` context to the
t2i and i2i invocations. This quenches nags from diffusers about the
HuggingFace NSFW library.

I have tested basic functionality (t2i, i2i, inpaint, lora, controlnet,
nodes) on 3.10 and 3.11 and all seems good. Please test more
extensively!

## Added/updated tests?

- [x] Yes - existing tests run to completion
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?

Should be a drop-in replacement.
2023-07-27 15:18:21 -04:00
0d8f9cbe55 resolved conflicts with main 2023-07-27 15:11:25 -04:00
fd75a1dd10 reformat with black 2023-07-27 15:01:00 -04:00
bfdc8c80f3 Testing caching onnx sessions 2023-07-27 14:13:29 -04:00
3bb81bedbd Merge branch 'main' into unify-prompt 2023-07-28 05:36:04 +12:00
e191f6d4b2 prevent resize error (#4031)
* add upper bound for minWidth to prevent crash with cypress

* add fallback so UI doesn't crash when backend isn't running

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-27 17:30:31 +00:00
00988e4972 (installer) check that the found Python executable is actually operational
when multiple python versions are installed with `pyenv`, the executable
(shim) exists, but returns an error when trying to run it
unless activated with `pyenv`. This commit ensures the python
executable is usable.
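
A minimal sketch of such a check; the function name is an assumption:

```
import subprocess


def python_is_usable(executable: str) -> bool:
    """Hypothetical sketch: a pyenv shim can exist on PATH yet error out
    when run, so actually execute it instead of merely locating it."""
    try:
        subprocess.run(
            [executable, "--version"],
            check=True,
            capture_output=True,
            timeout=10,
        )
        return True
    except (OSError, subprocess.SubprocessError):
        return False
```
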
2023-07-27 13:28:00 -04:00
7d458eb1ac Dev/black (#3840)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature (dev feature and reformatting)
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Description
Introducing black to the code base as a first step towards this:
https://github.com/invoke-ai/InvokeAI/discussions/3721

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : Not applicable

## [optional] Are there any post deployment tasks we need to perform?
All active branches will be affected by this and will need to be
updated.

This PR adds a new github workflow for black as well as config for
pre-commit hooks to those who wish to use it
2023-07-27 12:59:47 -04:00
b8b46aec09 Revert "fix: Lint Errors"
This reverts commit f057d5c85b.
2023-07-28 04:34:41 +12:00
4d2b87ea01 fix(ui): fix types for controlnet models
`ControlNetModelConfig` was split into `ControlNetModelCheckpointConfig` and `ControlNetModelDiffusersConfig`, need to update the UI types
2023-07-28 04:34:29 +12:00
8023a23cec beat uvicorn access log into submission 2023-07-27 12:05:17 -04:00
e4c0102b3c unified uvicorn access log entries too 2023-07-27 11:59:29 -04:00
16d044336f (meta) hide the 'black' formatting commit from git blame
also remove lib/ from gitignore as it is hiding the installer code
from 'black'
2023-07-27 11:29:22 -04:00
c4a2808a4b use same logging infrastructure for uvicorn and backend 2023-07-27 11:24:07 -04:00
59716938bf Remove TensorRT support at the current time until we validate it works, remove time step recorder 2023-07-27 11:18:50 -04:00
611f31c057 fix: Adjust embedding button on PositivePrompt for new changes 2023-07-28 03:07:50 +12:00
b60adc31d0 feat: Unify Prompt Area Design Between SDXL and Regular Models 2023-07-28 03:07:50 +12:00
a98ed3a5ba fix: TextArea Resizer styling when disabled 2023-07-28 03:06:31 +12:00
f057d5c85b fix: Lint Errors 2023-07-28 03:06:31 +12:00
918a0dedc0 Always install onnx 2023-07-27 11:00:40 -04:00
218b6d0546 Apply black 2023-07-27 10:54:01 -04:00
2183dba5c5 Remove whitespace and commented out pre-commit hooks 2023-07-27 10:53:27 -04:00
a491e326c5 This is no longer needed 2023-07-27 10:52:36 -04:00
f7bb4c3f05 Remove more files no longer needed in main 2023-07-27 10:49:43 -04:00
57271ad125 Move onnx to optional dependencies 2023-07-27 10:28:26 -04:00
33245b37ad Removed things no longer needed in main 2023-07-27 10:23:55 -04:00
81d8fb8762 Removed things no longer needed in main 2023-07-27 10:14:55 -04:00
fc9dacd082 Black/flake8 line length 100->120 2023-07-27 10:12:25 -04:00
8b4af69d87 Black config, pre-commit and GHA 2023-07-27 10:09:04 -04:00
989d3d7f3c Remove onnx changes from canvas img2img, inpaint, and linear image2image 2023-07-27 10:08:45 -04:00
d2a46b4308 Fix dist and schema after merge 2023-07-27 09:55:28 -04:00
eb1ba8d74b Merge branch 'main' into feat/onnx 2023-07-27 09:54:30 -04:00
4ebde013ea Allow deleting onnx models in model manager ui 2023-07-27 09:50:20 -04:00
024f92f9a9 Add onnx models to the model manager UI 2023-07-27 09:37:37 -04:00
562c937a14 Updated new user flow 2023-07-27 21:46:39 +10:00
5300e353d8 updated community nodes doc 2023-07-27 18:58:44 +10:00
d78c97f8a8 Updated getting started guide and links 2023-07-27 18:51:48 +10:00
52f61698e9 added getting started with Invoke guide 2023-07-27 18:29:12 +10:00
6f54fe9003 fix(ui): fix types for controlnet models
`ControlNetModelConfig` was split into `ControlNetModelCheckpointConfig` and `ControlNetModelDiffusersConfig`, need to update the UI types
2023-07-27 15:46:50 +10:00
895917c3ab Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 01:02:38 -04:00
be00a837cc hotfix to remove duplicate key in INITIAL_MODELS 2023-07-27 00:38:18 -04:00
dcb85b0097 rebuild frontend; bump version 2023-07-27 00:37:23 -04:00
5956c601f7 Restore ability to convert SDXL checkpoints to diffusers (#4021)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] Not needed


## Description

This bugfix enables InvokeAI to convert sd-1, sd-2 and sdxl base model
checkpoints (.safetensors) to diffusers.
2023-07-27 00:29:13 -04:00
b67041dd29 Merge branch 'main' into bugfix/convert-sdxl-models 2023-07-27 00:24:37 -04:00
5b62d97a47 install SDXL "fixed" VAE (#4020)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

This PR causes the installer to install, by default, the fine-tuned
SDXL-1.0 VAE located at
https://huggingface.co/madebyollin/sdxl-vae-fp16-fix.

Although this VAE is supposed to run at fp16 precision, currently it
only works in InvokeAI at fp32. However, because it is a fine tune, it
may have fewer of the watermark-related artifacts that we see with the
SDXL-1.0 VAE.
2023-07-27 00:14:58 -04:00
c02b9db064 Merge branch 'main' into bugfix/convert-sdxl-models 2023-07-27 00:08:15 -04:00
2e19b23eed Merge branch 'main' into feat/install-finetune-sdxl-vae 2023-07-27 00:06:00 -04:00
f7f20fdfe4 Configure script should not overwrite models.yaml if it is well formed (#4019)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] Not necessary


## Description

When adding new core models to a 3.0.0 root directory needed to support
SDXL, the configure script was (under some conditions) overwriting
models.yaml. This PR corrects the problem.
2023-07-27 00:03:51 -04:00
61aff8540c fix refiner conversion 2023-07-27 00:02:10 -04:00
2b7807e6a0 Merge branch 'main' into fix/yaml-file-delete 2023-07-26 23:45:43 -04:00
fc19624bd8 Rework configure/install TUI to require less space (#3989)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

I have reworked the console TUIs for the configure and model install
scripts to require much less vertical space. In the event that the
"NEXT" button is still missing and "page 1/2" is displayed, scrolling
beyond the last checkbox will now automatically move to page 2 where the
buttons are displayed. This is not ideal, but will no longer block user
completely.

If users continue to have problems after this, I'll get rid of the TUI
altogether and replace with a web form.

## Added/updated tests?

- [ ] Yes
- [x] No : not needed

## [optional] Are there any post deployment tasks we need to perform?
2023-07-26 23:44:50 -04:00
77946bfea5 restore ability to convert SDXL checkpoints to diffusers 2023-07-26 23:28:58 -04:00
d4d4d749f2 Merge branch 'release/invokeai-3-0-1' 2023-07-26 23:15:26 -04:00
43fe8b1dda Merge branch 'main' into fix/reduce-configure-vertical 2023-07-26 23:12:25 -04:00
3e441f773f Documentation updates for SDXL license terms, invisible watermark (#4012)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because they trust me

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

* Added the RAIL++ license for SDXL
* Updated configure script with URLs for both the original RAIL-M and
RAIL++ licenses
* Added invisible watermark documentation and renamed doc file
* Updated documentation for installation
* Updated documentation on settings in invokeai.yaml
2023-07-26 23:11:58 -04:00
9c4acb9d3f install SDXL "fixed" VAE 2023-07-26 23:06:27 -04:00
451b8c96e5 do not overwrite models.yaml if it is well formed 2023-07-26 22:29:39 -04:00
b8376a4932 Merge branch 'main' into fix/reduce-configure-vertical 2023-07-26 22:16:38 -04:00
0d344872f1 fix: Metadata Not Being Saved (#4009)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

Metadata was not getting saved coz the accumulator was not plugged in if
watermark or nsfw nodes were turned off.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-26 22:15:32 -04:00
4bfbdb0d97 chore(ui): lint 2023-07-27 11:58:59 +10:00
049e666412 fix(ui): revise metadata edges in linear graphs
- always add metadata to l2i nodes
- no metadata handling for inpaint, removed
2023-07-27 09:43:45 +10:00
83a981b585 merge with main; fix SDXL repo_ids 2023-07-26 17:38:06 -04:00
049645d66e updated LICENSE files and added information about watermarking 2023-07-26 17:27:33 -04:00
4d732e06de Remove onnx models from img2img and unified canvas 2023-07-26 16:30:02 -04:00
c90c4a32ee Merge branch 'main' into metadata-fix 2023-07-27 08:08:11 +12:00
3ff8c87c09 feat: Upgrade Diffusers to 0.19.0 2023-07-27 08:00:12 +12:00
f26a423e95 Fix merge issue 2023-07-26 15:32:28 -04:00
0100ac8f2d Merge branch 'main' into release/invokeai-3-0-1 2023-07-26 15:27:06 -04:00
6a3a776f4e Bugfix/checkpoint conversion (#4010)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because there was no time!

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

Hotfix for issue of SD1 and SD2 legacy safetensors models not converting
in 3.0.1rc1.
2023-07-26 15:21:16 -04:00
020031f376 add all legacy model .yaml files to configs directory unconditionally 2023-07-26 15:17:00 -04:00
7053347559 fix: Metadata Not Being Saved 2023-07-27 07:09:51 +12:00
bf1f6619df fix conversion for sd1 and sd2 models 2023-07-26 15:02:32 -04:00
6bdcc32414 rebuild frontend for rc1 release (again) 2023-07-26 13:36:42 -04:00
4f39c81dec Merge branch 'main' into release/invokeai-3-0-1 2023-07-26 13:33:15 -04:00
3376968cbb fix: Prompt Drawer Unpinned not having SDXL UI 2023-07-26 13:30:43 -04:00
0420d75d2b fix: Improve Styling of SDXL Prompt Area 2023-07-26 13:30:43 -04:00
3bd9c27a79 feat: Add SDXL Style Prompt Concat Toggle 2023-07-26 13:30:43 -04:00
b6522cf2cf fix: SDXL - Concat Prompt and Style for Style Prompt 2023-07-26 13:30:43 -04:00
861c0fe76b Correct issues caused by merging main 2023-07-26 12:25:46 -04:00
13ac5c6899 enable hide localization toggle (#4004)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-27 03:01:52 +12:00
05070304ff Merge branch 'release/invokeai-3-0-1' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-1
- fix log message
2023-07-26 11:00:57 -04:00
af8fc6ff82 final polish before release candidate
- Fix issue that prevented web ui from starting if
  ROOT/databases/invokeai.db not found.

- Rebuild front end
2023-07-26 10:59:23 -04:00
f86d0d1b69 hide localization toggle 2023-07-26 10:55:38 -04:00
e6741cee75 rebuild front end 2023-07-26 10:47:37 -04:00
c16da75ac7 Merge branch 'main' into feat/onnx 2023-07-26 10:42:31 -04:00
575ebaeb75 Merge PR #3944 2023-07-26 10:25:59 -04:00
385483ff8e Download all model types. (#3944) 2023-07-26 10:24:37 -04:00
c7f883d22a Merge branch 'main' into patch 2023-07-26 10:19:02 -04:00
58ff5d3f5b Merge branch 'main' into release/invokeai-3-0-1
- this includes the final set of PRs going into 3.0.1
2023-07-26 10:17:32 -04:00
f060e321eb NSFW checker and watermark nodes (#3923)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No

## Description

This PR adds NSFW checker and invisible watermark fields. The NSFW
checker takes an image input and produces an image output. If NSFW
content is detected, the output image will be blurred and a "caution"
icon pasted into its upper left corner. A boolean `active` field
controls whether the checker is active. If turned off it simply returns
a copy of the image.
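
For illustration, the blur-and-badge behaviour might look something like this in PIL terms (a sketch only, not the node's actual code; the blur radius and the caution-icon handling are assumptions):

```py
from PIL import Image, ImageFilter

def censor(image: Image.Image, caution: Image.Image) -> Image.Image:
    # Blur the flagged image, then paste the caution icon into the upper
    # left, using the icon's alpha channel as the paste mask (icon is RGBA).
    blurred = image.filter(ImageFilter.GaussianBlur(radius=32))
    blurred.paste(caution, (0, 0), caution)
    return blurred
```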

The invisible watermark node adds an invisible text to the image,
defaulting to "InvokeAI". To decode the watermark use the
`invisible-watermark` command, which is part of the
`invisible-watermark` library:

```
$ invisible-watermark -v -a decode -t bytes -m dwtDct -l 64 ./bluebird-watermark.png 
decode time ms: 14.129877090454102
InvokeAI
```

Note that the `-l` (length) argument is mandatory. It is set to 64 here
because the watermark `InvokeAI` is 8 bytes/64 bits long. The length
must match in order for the watermark to be decoded correctly.

Both nodes are now incorporated into the linear Text2Image and
Image2Image UIs, including the canvas. They are not implemented for
inpaint currently.

The nodes can be disabled with configuration options:
```
invisible_watermark: false
nsfw_checker: false
```
or at launch time with `--no-invisible_watermark` and
`--no-nsfw_checker`.
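
For reference, the same embed/decode round trip can be sketched in Python with the `imwatermark` package that backs the `invisible-watermark` CLI (file names here are illustrative):

```py
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

bgr = cv2.imread('bluebird.png')

# Embed the 8-byte (64-bit) watermark with the dwtDct method
encoder = WatermarkEncoder()
encoder.set_watermark('bytes', 'InvokeAI'.encode('utf-8'))
cv2.imwrite('bluebird-watermark.png', encoder.encode(bgr, 'dwtDct'))

# Decode: the declared length (64 bits) must match what was embedded
decoder = WatermarkDecoder('bytes', 64)
print(decoder.decode(cv2.imread('bluebird-watermark.png'), 'dwtDct').decode('utf-8'))
```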
2023-07-26 10:14:10 -04:00
dc8c3d8073 feat(ui): tweak menu style, increase icon size
feat(ui) use `as` for menuitem links

I had requested this be done with the chakra `Link` component, but actually using `as` is correct according to the docs. For other components, you are supposed to use `Link`, but it looks like `MenuItem` has this built in.

Fixed in all places where we use it.

Also:
- fix github icon
- give menu hamburger button padding
- add menu motion props so it animates the same as other menus

feat(ui): restore ColorModeButton

@maryhipp

chore(ui): lint

feat(ui): remove colormodebutton again

sry
2023-07-27 00:12:23 +10:00
819136c345 chore(ui): bump chakra versions
exposes more menu theming config
2023-07-27 00:12:23 +10:00
989b68c772 fix: Remove menu tooltip and fix incorrect issues page link 2023-07-27 00:12:23 +10:00
a6347a1d3c revert: Translation strings
These need to be done through Weblate. Only en.json needs to be updated via the repo
2023-07-27 00:12:23 +10:00
a00d1e87e4 fix: Update Links to Links from Menu Items 2023-07-27 00:12:23 +10:00
c7d24081e2 fix: Scheduler list in Settings not displaying labels 2023-07-27 00:12:23 +10:00
17900e5140 fix: Fix Settings dropdown menu icons being too small 2023-07-27 00:12:23 +10:00
6fa42cb10c feat: consolidated app nav to settings & dropdown 2023-07-27 00:12:23 +10:00
4bea846199 Merge branch 'main' into feat/safety-checker-node 2023-07-26 10:04:23 -04:00
3dccc4d61e Add support for controlnet & sdxl checkpoint conversion (#3905)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No - not yet WIP


## Description

This PR adds support for loading and converting checkpoint-format
ControlNet and SDXL models. The SDXL and SDXL-refiner model conversions
are working; however, saving the UNet in safetensors format leads to
corrupted model files, so it currently saves in .bin format (after
scanning the input model).
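
In diffusers terms, the workaround amounts to something like the following sketch (the paths and the exact loading call are assumptions, not this PR's actual code):

```py
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint, then write the converted weights as .bin (pickle)
# instead of safetensors; safe_serialization=False selects the .bin format.
pipe = StableDiffusionXLPipeline.from_single_file("sd_xl_base_1.0.safetensors")
pipe.save_pretrained("converted/sdxl-base", safe_serialization=False)
```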

ControlNet conversion seems to be working but needs further testing.

To use this PR, you will need to copy the files
`invokeai/configs/stable-diffusion/sd_xl_base.yaml` and
`invokeai/configs/stable-diffusion/sd_xl_refiner.yaml` into
`INVOKEAI/configs/stable-diffusion`. You will also need to run
`invokeai-configure --yes --skip-sd` in order to install additional core
model files needed by the converter.
2023-07-27 01:50:38 +12:00
bf0587da5f set defaults for watermark and NSFW checker to FALSE 2023-07-26 09:09:46 -04:00
58c0bee325 improved error message for running configure 2023-07-26 08:30:01 -04:00
b8f43f444a implemented startup sanity checks on core models 2023-07-26 08:26:29 -04:00
da76f6fee4 compress height needed by configure script 2023-07-26 08:00:19 -04:00
c4f064bbf3 Merge branch 'main' into feat/controlnet-and-sdxl-convert 2023-07-26 07:30:22 -04:00
0ce8472562 adjust unit test to account for nsfw always being true now 2023-07-26 07:29:33 -04:00
3e206d4d6a removed nsfw/watermark from invokeai.yaml 2023-07-26 06:53:35 -04:00
ce7fa96dbc Merge branch 'main' into feat/safety-checker-node 2023-07-26 06:39:46 -04:00
a705461c04 merge with recent main changes 2023-07-26 06:39:21 -04:00
fda7e0a71a 3.0.1 - Pre-Release UI Fixes (#4001)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes

      
## Description

- Update the Aspect Ratio tags to show the aspect ratio values rather
than Wide / Square etc.
- Updated the LoRA input to take values between -50 and 50, because I found
some LoRAs that are actually trained to work down to -25 and up to +15.
These input caps should mostly suffice; if there's ever a LoRA that goes
beyond that, we can change it.
- Fixed LoRAs being sorted the wrong way in Lora Select.
- Fixed Embeddings being sorted the wrong way in Embedding Select.


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-26 21:22:33 +12:00
36455f6cac Merge branch 'main' into nodepromptsize 2023-07-26 18:54:54 +10:00
513b223ef6 fix(test): fix test_graph_subgraph_t2i
needed to be updated after adding the nsfw checker node to the graph
2023-07-26 18:49:29 +10:00
db05445103 fix(tests): fix test_path
- assets path has changed
2023-07-26 18:48:43 +10:00
30c3b7a6fc fix(ui): fix invoke button being disabled 2023-07-26 18:40:17 +10:00
2d0f932737 Lint Code 2023-07-26 18:35:04 +10:00
9e9dce44b4 fix: Embeddings not being sorted alphabetically 2023-07-26 20:34:14 +12:00
6fd8543e69 fix: LoRA's not being sorted alphabetically 2023-07-26 20:33:59 +12:00
db48f3230b feat(ui): add nsfw & watermark to linear ui
- add `addNSFWCheckerToGraph` and `addWatermarkerToGraph` functions
- use them in all linear graph creation
- add state & toggles to settings modal to enable these
- trigger queries for app config on socket connect
- disable the nsfw/watermark booleans if we get the app config and they are not available
2023-07-26 18:20:20 +10:00
397604a094 feat: Allow LoRA weights to be more than sliders via input
Found some LoRAs that need it.
2023-07-26 19:20:42 +12:00
f5139b174a fix(ui): Rename Aspect Ratio labels to their aspect ratios 2023-07-26 18:56:52 +12:00
050e5091db feat: Enable the Conversion button for SDXL Models 2023-07-26 17:32:50 +12:00
2c5b539d3a esrgan and its models are now nested in app config route 2023-07-26 15:27:04 +10:00
85ad5ef204 refactored code; added watermark and nsfw facilities to app config route 2023-07-26 15:27:04 +10:00
5beb11f4e2 tweaks in response to psychedelicious review of PR 2023-07-26 15:27:04 +10:00
844d37c642 rebuild schema 2023-07-26 15:27:04 +10:00
b3723d1ccf update documentation 2023-07-26 15:27:04 +10:00
bd43751323 update linear graphs to perform safety checking and watermarking 2023-07-26 15:27:04 +10:00
e32cd794f7 add safetychecker and watermark nodes 2023-07-26 15:26:45 +10:00
761fc4beb8 Temp fix for is intermediate switch for l2i 2023-07-26 15:17:59 +10:00
531bc40d3f feat: Add SDXL To Linear UI (#3973)
## What type of PR is this? (check all applicable)

- [x] Feature


## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Description

This PR adds support for SDXL Models in the Linear UI

### DONE

- SDXL Base Text To Image Support
- SDXL Base Image To Image Support
- SDXL Refiner Support
- SDXL Relevant UI


## [optional] Are there any post deployment tasks we need to perform?

Double check to ensure nothing major changed with 1.0 -- in any case,
those changes would mostly be backend related. If the Refiner is scrapped
for 1.0 models, then we simply disable the Refiner Graph.
2023-07-26 17:05:39 +12:00
676051edb9 fix(ui): fix missing args for model queries 2023-07-26 14:56:51 +10:00
de65b82569 chore: Fix lint errors 2023-07-26 16:51:58 +12:00
934f9afd7e feat(ui): Do not show SDXL Models in Canvas 2023-07-26 14:46:38 +10:00
1c01a31ee8 feat(ui): setActiveTab only works with tab names 2023-07-26 14:46:38 +10:00
c5389b3298 fix(ui): fix refiner steps math again 2023-07-26 14:46:38 +10:00
fdbab5ffa9 feat(ui): hide sync models button if feature is disabled 2023-07-26 14:46:38 +10:00
a6e544ebd5 fix(ui): fix refiner steps calculation for edge case of start = 1 2023-07-26 14:46:38 +10:00
75b0507434 feat(nodes): change denoising start/end min/max to 0/1 2023-07-26 14:46:38 +10:00
59c2556e6b feat: Move SDXL Image Denoising to own component 2023-07-26 14:46:38 +10:00
4fe889bbf8 fix: Possible fix to image to image / refiner setting sync
The main goal is to avoid noisy output no matter what the slider values are.
2023-07-26 14:46:38 +10:00
cbcd416b70 fix(ui): fix refiner missing from model manager
Rolled back the earlier split of the refiner model query.

Now, when you use `useGetMainModelsQuery()`, you must provide it an array of base model types.

They are provided as constants for simplicity:
- ALL_BASE_MODELS
- NON_REFINER_BASE_MODELS
- REFINER_BASE_MODELS

Opted to just use args for the hook instead of wrapping the hook in another hook, we can tidy this up later if desired.
2023-07-26 14:46:38 +10:00
6fa244a343 feat(ui): add vae precision select 2023-07-26 14:46:38 +10:00
e5a660930c feat(ui): add zod schemas for precision parameters 2023-07-26 14:46:38 +10:00
61291ea105 feat: sdxl metadata
- update `CoreMetadata` class & `MetadataAccumulator` with fields for SDXL-specific metadata
- update the linear UI graphs to populate this metadata
2023-07-26 14:46:38 +10:00
840205496a feat(nodes): fix model load events on sdxl nodes
they need the `context` to be provided to emit socket events
2023-07-26 14:46:38 +10:00
016797c890 feat(ui): add vaePrecision setting
no UI element for it yet
2023-07-26 14:46:38 +10:00
00e69d5d12 feat(ui): adjust seed param styling 2023-07-26 14:46:38 +10:00
8e90f9024d feat(ui): remove isRefinerAvailable state, update refiner node
We can derive `isRefinerAvailable` from the query result (e.g. are there any refiner models installed?). This is a piece of server state, so by using the list models response directly, we can avoid needing to manually keep the client in sync with the server.

Created a `useIsRefinerAvailable()` hook to return this boolean wherever it is needed.

Also updated the main models & refiner models endpoints to only return the appropriate models. Now we don't need to filter the data on these endpoints.
2023-07-26 14:46:38 +10:00
751c4407e4 feat(ui): add node type to invocation started 2023-07-26 14:46:38 +10:00
6c46304eb8 fix: Replug Image To Latents VAE back in the Refiner graph for img2img 2023-07-26 14:46:38 +10:00
0eb31c5710 fix: Cyclic push in the graph 2023-07-26 14:46:38 +10:00
6295e56d96 feat: Add SDXL Refiner to Linear UI 2023-07-26 14:46:38 +10:00
5202610160 feat: Move SDXL Refiner to own route & set appropriate disabled statuses 2023-07-26 14:46:38 +10:00
8d1b8179af feat: Create UI for SDXL Refiner Options 2023-07-26 14:46:38 +10:00
3bdb059eb7 wip: SDXL Refiner UI Data 2023-07-26 14:46:38 +10:00
b0ebd148fa feat: Add Style Prompts to Linear UI 2023-07-26 14:46:38 +10:00
9f94d0e52a feat: Create SDXL Slice 2023-07-26 14:46:38 +10:00
9c180da58a feat: Add SDXL Image To Image to Linear UI 2023-07-26 14:46:38 +10:00
57d833035d feat: Add SDXL Base To Linear Text To Image 2023-07-26 14:46:38 +10:00
c145681488 bump version number; add SDXL-1.0 to installer 2023-07-26 00:17:00 -04:00
3eaf8c3b2f Update stale issues action (#3960)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Updated script to close stale issues with the newest version of the
actions/stale

## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Not sure how this script gets kicked off
2023-07-26 14:08:22 +10:00
d9527bf445 Merge branch 'main' into main 2023-07-26 14:08:00 +10:00
032e9c8165 Merge branch 'main' into patch 2023-07-25 22:24:36 -04:00
dbc3d42afc install all recommended models with --yes; don't alter starter model screen 2023-07-25 22:24:03 -04:00
d5998ad3ef update images to link from docs/assets/nodes/ 2023-07-25 21:48:48 -04:00
a4c8d86faa add NODES.md image assets to docs/assets/nodes/ 2023-07-25 21:48:48 -04:00
f4da66aa0f Update NODES.md 2023-07-25 21:48:48 -04:00
7f5a89f567 add option to disable model syncing in UI 2023-07-26 11:18:38 +10:00
2db9b3b2ae Merge branch 'main' into patch 2023-07-25 16:27:10 -04:00
77107dfcbc Merge branch 'main' into main 2023-07-25 16:26:37 -04:00
e43e198102 rework configure/install TUI to require less space 2023-07-25 11:25:26 -04:00
2aefa921fe fix "unknown model type" error when rebasing a model with API
- Add command-line model probing script for dev use
- Minor documentation tweak
2023-07-25 08:36:57 -04:00
11e6ecc1bf Merge branch 'main' into feat/controlnet-and-sdxl-convert 2023-07-25 08:05:17 -04:00
7d337dccc2 docs generation: fix typo and remove trailing white space (#3972)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: This is a minor fix that I happened upon while
reading

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Within the `mkdocs.yml` file, there's a typo where `Model Merging` is
spelled as `Model Mergeing`. I also found some unnecessary white space
that I removed.


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [x] No : Not big enough of a change to require tests (unless it is)

## [optional] Are there any post deployment tasks we need to perform?
Might need to re-run the yml file for docs to regenerate, but I'm hardly
familiar with the codebase so 🤷
2023-07-24 23:11:37 -04:00
91e903c8ab esrgan and its models are now nested in app config route 2023-07-24 22:17:22 -04:00
efa615a8fd refactored code; added watermark and nsfw facilities to app config route 2023-07-24 22:02:57 -04:00
cf10852ee3 uses v8 actions/stale@v8 2023-07-25 11:23:00 +10:00
437532f2f9 fix: ✏️ fix docs generation typo and remove trailing white space 2023-07-24 17:42:01 -06:00
8c449c4756 update documentation and installer to accept 3.11 2023-07-24 17:21:56 -04:00
fc4e104c61 tested on 3.11 and 3.10 2023-07-24 17:13:32 -04:00
4194a0ed99 tweaks in response to psychedelicious review of PR 2023-07-24 09:23:51 -04:00
7ce5b6504f rebuild schema 2023-07-24 08:25:39 -04:00
aea8ad5670 Update close-inactive-issues.yml with latest stale version 2023-07-24 20:52:34 +10:00
97f4475fdf Update close-inactive-issues.yml 2023-07-24 20:50:33 +10:00
4f9c728db0 feat(ui): display canvas generation mode in status text (#3915)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: n/a

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No n/a


## Description

Add a generation mode indicator to canvas.

- use the existing logic to determine if generation is txt2img, img2img,
inpaint or outpaint
- technically `outpaint` and `inpaint` are the same, just display
"Inpaint" if it's either
- debounce this by 1s to prevent jank

I was going to disable controlnet conditionally when the mode is inpaint
but that involves a lot of fiddly changes to the controlnet UI
components. Instead, I'm hoping we can get inpaint moved over to latents
by next release, at which point controlnet will work.

## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

https://github.com/invoke-ai/InvokeAI/assets/4822129/87464ae9-4136-4367-b992-e243ff0d05b4

## Added/updated tests?

- [ ] Yes
- [x] No : n/a

## [optional] Are there any post deployment tasks we need to perform?

n/a
2023-07-24 20:37:45 +12:00
7ea477abef Merge branch 'main' into feat/canvas-generation-mode 2023-07-24 20:34:25 +12:00
d42c394ab7 feat(nodes,ui): fix soft locks on session/invocation retrieval (#3910)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No, n/a


## Description

When a queue item is popped for processing, we need to retrieve its
session from the DB. Pydantic deserializes the graph at this stage.

It's possible for a graph to have been made invalid during the graph
preparation stage (e.g. an ancestor node executes, and its output is not
valid for its successor node's input field).

When this occurs, the session in the DB will fail validation, but we
don't have a chance to find out until it is retrieved and parsed by
pydantic.

This logic was previously not wrapped in any exception handling.

Just after retrieving a session, we retrieve the specific invocation to
execute from the session. It's possible that this could also have some
sort of error, though it should be impossible for it to be a pydantic
validation error (that would have been caught during session
validation). There was also no exception handling here.

When either of these processes fail, the processor gets soft-locked
because the processor's cleanup logic is never run. (I didn't dig deeper
into exactly what cleanup is not happening, because the fix is to just
handle the exceptions.)

This PR adds exception handling to both the session retrieval and node
retrieval and events for each: `session_retrieval_error` and
`invocation_retrieval_error`.

These events are caught and displayed in the UI as toasts, along with
the type of the python exception (e.g. `Validation Error`). The events
are also logged to the browser console.
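
Schematically, the fix looks something like this (a paraphrase of the description above, not the actual processor code; apart from `graph_execution_manager.get` and the two event names, the identifiers are assumptions):

```py
def process(queue, services):
    for item in queue:
        # Wrap both retrieval steps so a failure emits an error event and
        # moves on, instead of soft-locking the processor.
        try:
            session = services.graph_execution_manager.get(item.session_id)
        except Exception as e:
            services.events.emit("session_retrieval_error", error_type=type(e).__name__)
            continue
        try:
            invocation = session.next_invocation()
        except Exception as e:
            services.events.emit("invocation_retrieval_error", error_type=type(e).__name__)
            continue
        invocation.invoke(session)
```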


## Related Tickets & Documents

Closes #3860, #3412

## QA Instructions, Screenshots, Recordings

Create a valid graph that will become invalid during execution. Here's
an example:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/50aa824c-fb0c-4bd9-82f4-38a4c89436f9)

This is valid before execution, but the `width` field of the `Noise`
node will end up with an invalid value (`0`). Previously, this would
soft-lock the app and you'd have to restart it.

Now, with this graph, you will get an error toast, and the app will not
get locked up.

## Added/updated tests?

- [x] Yes (ish)
- [ ] No

@Kyle0654  @brandonrising 
It seems because the processor runs in its own thread, `pytest` cannot
catch exceptions raised in the processor.

I added a test that does work, insofar as it does recreate the issue.
But, because the exception occurs in a separate thread, the test doesn't
see it. The result is that the test passes even without the fix.

So when running the test, we see the exception:
```py
Exception in thread invoker_processor:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/processor.py", line 50, in __process
    self.__invoker.services.graph_execution_manager.get(
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/sqlite.py", line 79, in get
    return self._parse_item(result[0])
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/sqlite.py", line 52, in _parse_item
    return parse_raw_as(item_type, item)
  File "pydantic/tools.py", line 82, in pydantic.tools.parse_raw_as
  File "pydantic/tools.py", line 38, in pydantic.tools.parse_obj_as
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
```

But `pytest` doesn't actually see it as an exception. Not sure how to
fix this, it's a bit beyond me.

## [optional] Are there any post deployment tasks we need to perform?

nope don't think so
2023-07-24 20:17:39 +12:00
61fa960a18 feat(ui): make generation mode calculation more granular 2023-07-24 18:16:15 +10:00
1969afd038 Merge branch 'main' into feat/fix-soft-locks 2023-07-24 20:12:10 +12:00
2b65e40896 Fix incorrect use of a singleton list (#3914)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
      
## Description

`search_for_models` is explicitly typed as taking a singular `Path` but
was given a list because some later function in the stack expects a
list. Fixed that to be compatible with the paths. This is the only use
of that function.

The `list()` call is unrelated but removes a type warning since it's
supposed to return a list, not a set. I can revert it if requested.

This was found through pylance type errors. Go types!
2023-07-24 20:08:21 +12:00
d6bf6513ef Merge branch 'main' into fix-types-2 2023-07-24 20:01:48 +12:00
14659277e7 Add missing import (#3917)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

This import is missing and used later in the file.
2023-07-24 20:01:12 +12:00
cbb90cbdbb Download all model types. 2023-07-24 10:59:59 +03:00
9c59083406 Merge branch 'main' into fix-types-1 2023-07-24 19:52:46 +12:00
86b62cfccc fix: Generate random seed using the generator instead of RandomState (#3940)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-24 19:52:04 +12:00
e766ddbcf4 fix: Generate random seed using the generator instead of RandomState 2023-07-24 19:38:21 +12:00
374b4a1b12 Merge branch 'main' into pr/3917 2023-07-24 18:58:34 +12:00
0cf7a10c5c fix: Other lora missing type 2023-07-24 18:58:24 +12:00
1c44a0feba feat: increase seed from int32 to uint32 (#3933)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No: n/a


## Description

At some point I typo'd this and set the max seed to signed int32 max. It
should be *un*signed int32 max.

This restored the seed range to what it was in v2.3.

Also fixed a bug in the Noise node which resulted in the max valid seed
being one less than intended.

## Related Tickets & Documents

- Related Issues
#2843 is against v2.3 and increases the range of valid seeds
substantially. Maybe we can explore this in the future, but as of v3.0
we use numpy as an RNG in a few places, and it maxes out at the max
`uint32`. I will close that PR, as this one supersedes it.
- Closes #3866

## QA Instructions, Screenshots, Recordings

You should be able to use seeds up to and including `4294967295`.

## Added/updated tests?

- [ ] Yes
- [x] No : don't think we have any relevant tests

## [optional] Are there any post deployment tasks we need to perform?

nope!
2023-07-24 18:55:35 +12:00
66cdeba8a1 fix(nodes): fix seed modulus operation
This was incorrect and resulted in the max seed being one less than intended.
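
The off-by-one is the classic modulus mistake; a minimal sketch (illustrative, not the node's actual code):

```py
import numpy as np

SEED_MAX = np.iinfo(np.uint32).max  # 4294967295

def clamp_seed(value: int) -> int:
    # `value % SEED_MAX` can never yield SEED_MAX itself, silently dropping
    # the top seed; modulo by SEED_MAX + 1 keeps the full inclusive range.
    return value % (SEED_MAX + 1)
```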
2023-07-24 16:44:32 +10:00
d5a75eb833 feat: increase seed from int32 to uint32
At some point I typo'd this and set the max seed to signed int32 max. It should be *un*signed int32 max.

This restored the seed range to what it was in v2.3.
2023-07-24 16:34:50 +10:00
8eab96c441 update documentation 2023-07-23 23:41:44 -04:00
4754a94102 update linear graphs to perform safety checking and watermarking 2023-07-23 23:32:08 -04:00
5c6f417471 add safetychecker and watermark nodes 2023-07-23 16:24:34 -04:00
0beec08d38 Add missing import. 2023-07-23 16:40:05 +02:00
02618a701d fix: Fix app crashing when you upload an incorrect JSON to node editor (#3911)
## What type of PR is this? (check all applicable)

- [x] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [x] Yes, we feel very passionate about this.     

## Description

Uploading an incorrect JSON file to the Node Editor would crash the app.

While this is a much larger problem that we will tackle while refining
the Node Editor, this is a fix that should address 99% of the cases out
there.

When saving an InvokeAI node graph, there are three primary keys.

1. `nodes` - which has all the node related data.
2. `edges` - which has all the edges related data
3. `viewport` - which has all the viewport related data.

So when we load back the JSON, we now check that all three of these keys
exist in the retrieved JSON object. While `viewport` itself is not a
mandatory key for repopulating the graph, checking for it serves as an
additional signal that the graph was saved from InvokeAI.
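
The idea, sketched in Python for brevity (the real check lives in the TypeScript frontend, so the names here are illustrative):

```py
import json

REQUIRED_KEYS = {"nodes", "edges", "viewport"}

def looks_like_invokeai_graph(raw: str) -> bool:
    try:
        graph = json.loads(raw)
    except json.JSONDecodeError:
        return False  # invalid JSON: warn the user
    # All three ReactFlow keys must be present before we try to rebuild.
    return isinstance(graph, dict) and REQUIRED_KEYS <= graph.keys()
```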

As a result ...

- If you upload an invalid JSON file, the app now warns you that the
JSON is invalid.
- If you upload a JSON of a graph editor that is not InvokeAI, it simply
warns you that you are uploading a non-InvokeAI graph.

So effectively, you should not be able to load any graph that is not
generated by ReactFlow.

Here are the edge cases:

- What happens if a user maintains the above key structure but tampers
with the data inside it? Well, I tested it. It turns out that because we
validate and build the graph based on the JSON data, tampering with any
data needed to rebuild a node simply causes that node to be skipped, and
the rest of the graph loads with the valid data.
- What happens if a user uploads a graph that was made by some other
random ReactFlow app? Same as above. Because we cannot parse it in our
setup, it will simply be skipped, and only what we are set up to handle
is displayed.

I think that just about covers 99% of the cases where this could go
wrong. If there are any other edge cases, we can add checks as needed,
but I can't think of any at the moment.

## Related Tickets & Documents

### Closes
- #3893 
- #3881

## [optional] Are there any post deployment tasks we need to perform?

Yes. Making @psychedelicious a little bit happier. :P
2023-07-24 02:15:46 +12:00
f2a6f0cf21 SDXL & SDXL-refiner models convert correctly 2023-07-23 09:31:14 -04:00
07a90c0198 Fix incorrect use of a singleton list.
This was found through pylance type errors. Go types!
2023-07-23 15:28:05 +02:00
28031ead70 feat(ui): display canvas generation mode in status text
- use the existing logic to determine if generation is txt2img, img2img, inpaint or outpaint
- technically `outpaint` and `inpaint` are the same, just display
"Inpaint" if it's either
- debounce this by 1s to prevent jank
2023-07-23 23:22:59 +10:00
4b334be7d0 feat(nodes,ui): fix soft locks on session/invocation retrieval
When a queue item is popped for processing, we need to retrieve its session from the DB. Pydantic deserializes the graph at this stage.

It's possible for a graph to have been made invalid during the graph preparation stage (e.g. an ancestor node executes, and its output is not valid for its successor node's input field).

When this occurs, the session in the DB will fail validation, but we don't have a chance to find out until it is retrieved and parsed by pydantic.

This logic was previously not wrapped in any exception handling.

Just after retrieving a session, we retrieve the specific invocation to execute from the session. It's possible that this could also have some sort of error, though it should be impossible for it to be a pydantic validation error (that would have been caught during session validation). There was also no exception handling here.

When either of these processes fail, the processor gets soft-locked because the processor's cleanup logic is never run. (I didn't dig deeper into exactly what cleanup is not happening, because the fix is to just handle the exceptions.)

This PR adds exception handling to both the session retrieval and node retrieval and events for each: `session_retrieval_error` and `invocation_retrieval_error`.

These events are caught and displayed in the UI as toasts, along with the type of the python exception (e.g. `Validation Error`). The events are also logged to the browser console.
2023-07-23 21:41:01 +10:00
de73e4f5b9 Merge branch 'main' into nodepromptsize 2023-07-23 18:28:25 +10:00
af4579b4d4 feat: Add more sanity checks for graph loading 2023-07-23 18:12:25 +12:00
35acb5de76 Merge branch 'main' into json-crash-fix 2023-07-23 16:50:36 +12:00
225f608556 fix: Add more sanity checks & rename buttons to Graphs 2023-07-23 16:49:52 +12:00
00d3cd4aed Fix 'Del' hotkey to delete current image. 2023-07-23 14:16:32 +10:00
5e59edfaf1 SDXL checkpoint models now convert and load; needs refactor 2023-07-23 00:00:31 -04:00
fdc444ed61 fix: Fix app crashing when you upload an incorrect JSON to node editor 2023-07-23 15:24:04 +12:00
075f9b3a7a ui: pay back tech debt (#3896)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: n/a

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No n/a


## Description

Big cleanup:
- improve & simplify the app logging
- resolve all TS issues
- resolve all circular dependencies
- fix all lint/format issues

## QA Instructions, Screenshots, Recordings

`yarn lint` passes:


![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/7b763922-f00c-4b17-be23-2432da50f816)

## Added/updated tests?

- [ ] Yes
- [x] No : n/a

## [optional] Are there any post deployment tasks we need to perform?

bask in the glory of what *should* be a fully-passing frontend lint on
this PR
2023-07-23 13:57:43 +12:00
b1d7c9b306 save text_encoder_2 config, not whole model 2023-07-22 21:33:40 -04:00
5607794dbb add support for controlnet & sdxl conversion - not fully working 2023-07-22 20:12:16 -04:00
c5147d0f57 fix(ui): fix all eslint & prettier issues 2023-07-22 23:45:24 +10:00
6452d0fc28 fix(ui): fix all circular dependencies 2023-07-22 22:48:39 +10:00
5468d9a9fc fix(ui): resolve all typescript issues 2023-07-22 21:38:50 +10:00
75863e7181 feat(ui): logging cleanup
- simplify access to app logger
- spruce up and make consistent log format
- improve messaging
2023-07-22 21:12:51 +10:00
0689e36390 Merge branch 'main' into nodepromptsize 2023-07-22 07:20:28 +10:00
907ff165be Update communityNodes.md (#3873)
Added the Ideal Size node

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: It's a community node addition

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

Added a reference to my community node that calculates the ideal size
for initial latent generation that avoids duplication. This is the logic
that was present in 2.3.5's first pass of high-res optimization.
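
As a sketch of the idea (assuming a native training area of 512x512 and rounding down to multiples of 8; this is not the node's actual code):

```py
import math

def ideal_size(width: int, height: int, native: int = 512) -> tuple[int, int]:
    # Pick a first-pass resolution whose area matches the model's native
    # training area at the requested aspect ratio.
    aspect = width / height
    h = math.sqrt(native * native / aspect)
    w = h * aspect
    return int(w) // 8 * 8, int(h) // 8 * 8

print(ideal_size(1536, 1024))  # (624, 416): same aspect ratio, ~512x512 area
```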

## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [X] No : This is a documentation change that references my community
node.

## [optional] Are there any post deployment tasks we need to perform?
2023-07-21 15:17:28 -04:00
53c8c3b4f5 Merge branch 'main' into JPPhoto-add-ideal-size 2023-07-21 15:17:06 -04:00
8262c31866 Update communityNodes.md (#3876)
Add Face Mask to communityNodes.md

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [x] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Add Face Mask to the communityNodes.md list.
2023-07-21 15:16:41 -04:00
b940ae8dbb Merge branch 'main' into facemask/communitynodes 2023-07-21 15:16:14 -04:00
845d1524ad warn, do not crash, when duplicate models encountered 2023-07-21 15:00:55 -04:00
6c82b694a7 Update communityNodes.md
Add Face Mask to communityNodes.md
2023-07-21 19:05:37 +02:00
f1fcc3fb74 fix pypi helper for correct pypi updating 2023-07-21 12:36:09 -04:00
2dd59d31d0 fix mkdocs push 2023-07-21 12:27:53 -04:00
78750042f5 Pass in dim overrides 2023-07-21 12:16:24 -04:00
3f79812dc6 fix: mps attention fix for sd2 2023-07-21 09:22:37 -04:00
055b2207cb Update CONTRIBUTORS.md 2023-07-21 08:24:17 -04:00
19cdd5a99b rebuild frontend for release 2023-07-21 07:48:30 -04:00
5db66e00b6 Update communityNodes.md
Added the Ideal Size node
2023-07-21 06:38:42 -05:00
76337e13f5 Last 3.0.0 tweaks (#3872)
Updated contributors
2023-07-21 07:38:28 -04:00
eb4ca4042e Merge branch 'main' into release/3-0-0 2023-07-21 07:38:02 -04:00
594bf6fef1 fix(api,ui): fix canvas saved images have extra transparent regions
- add `crop_visible` param to upload image & set to true only for canvas saves
2023-07-21 07:26:12 -04:00
6f2e8d5217 chore(ui): regen types 2023-07-21 07:26:12 -04:00
52ae15c167 fix(ui): fix console error related to css 2023-07-21 07:26:12 -04:00
2c4128d44e fix(ui): deleting board does not reset selected board/image 2023-07-21 07:26:12 -04:00
01b106d939 fix(ui): fix no image selected on first load 2023-07-21 07:26:12 -04:00
68f1f87c6f feat(ui): board styles 2023-07-21 07:26:12 -04:00
c2c99b8650 feat(ui): fix more caching bugs 2023-07-21 07:26:12 -04:00
896b77cf56 feat(api,db): allow creating an image with a board_id 2023-07-21 07:26:12 -04:00
6f7d221f57 Couple doc tweaks (#3870)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: just updated docs to try to help lead new users to
an easier install

      
## Have you updated relevant documentation?
- [x] Yes
- [ ] No


## Description
Some minor docs tweaks

## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-21 06:43:03 -04:00
fba4085939 ui: boards 2: electric boogaloo (#3869)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Description

Revised boards logic and UI

## Related Tickets & Documents

- Related Issue # discord convos
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [x] No : n/a

## [optional] Are there any post deployment tasks we need to perform?
2023-07-21 06:42:16 -04:00
13e7614508 add text so string node uses textarea 2023-07-21 19:36:27 +10:00
48ad005732 Couple doc tweaks 2023-07-21 16:35:41 +10:00
9ce4bd1182 fix: Simplify gallery board name layout 2023-07-21 18:15:55 +12:00
39b7ace273 fix: Differentiate no boards from the user boards 2023-07-21 18:15:12 +12:00
319c56f844 fix: Make auto add icon be a tad bit smaller 2023-07-21 18:14:57 +12:00
389a0d2810 feat(ui): use badge for autoadd 2023-07-21 16:01:40 +10:00
fe33acedad fix(ui): fix crash when removing last image 2023-07-21 15:57:09 +10:00
eab18c7385 fix(ui): fix incorrect gallery tab 2023-07-21 15:56:50 +10:00
8e98085530 fix(ui): fix missing 'none' on no-board cache updates 2023-07-21 15:53:41 +10:00
5396e998b3 feat(ui): simplify auto-add context menu 2023-07-21 15:47:12 +10:00
fc98089960 fix(ui): debounce metadata query on context menu 2023-07-21 15:37:33 +10:00
dd0b4dc744 fix(ui): fix next prev buttons 2023-07-21 15:37:20 +10:00
ddeba190bc fix(ui): really fixed autoadd context menu 2023-07-21 15:18:48 +10:00
3a610e1a65 fix(ui): more fixing of auto-add 2023-07-21 15:00:07 +10:00
e10e22440d fix(ui): restore auto-add to board functionality 2023-07-21 14:29:42 +10:00
f4e8a91bcf fix(ui): update boardIdSelected 2023-07-21 14:22:18 +10:00
ce7fbdb01d bump version; update contributors list 2023-07-21 00:17:21 -04:00
4da6623700 fix(ui): fix deleteboard cache changes 2023-07-21 14:16:19 +10:00
4e1786d9ae Remove Resize: none 2023-07-21 13:55:40 +10:00
0e3ca59e49 feat(ui): refactor boards hierarchy 2023-07-21 13:48:15 +10:00
e06f2229ac Replace SlicedAttnProcessor with patched to chunk memory on mps (#3868)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description
On MPS, generating images at resolutions above ~1536x1536 results in
"fried" output. The main problem is that such resolutions produce tensors
larger than 4GB. It looks like some MPS internals can't handle this
properly, so to mitigate it I break the attention calculation into
chunks.
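
The chunking idea, as a minimal sketch (illustrative only; the actual change patches diffusers' SlicedAttnProcessor):

```py
import torch

def sliced_attention(q, k, v, slice_size: int) -> torch.Tensor:
    # Split the query along its sequence dimension so no intermediate
    # attention matrix exceeds the size budget that trips up MPS.
    scale = q.shape[-1] ** -0.5
    out = []
    for i in range(0, q.shape[1], slice_size):
        attn = torch.softmax(q[:, i:i + slice_size] @ k.transpose(-1, -2) * scale, dim=-1)
        out.append(attn @ v)
    return torch.cat(out, dim=1)
```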

## QA Instructions, Screenshots, Recordings
Example of bad output:

![image](https://github.com/invoke-ai/InvokeAI/assets/7768370/cd373458-c0a5-4a2f-8ea5-402020de5b4b)
2023-07-20 23:32:29 -04:00
5962d96f27 Merge branch 'main' into fix/long_tensors_mps 2023-07-20 23:24:47 -04:00
d4854c4fac Release 3.0.0 RC Series (#3844)
## What type of PR is this? (check all applicable)

- [X] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

## Description

This is a WIP to collect documentation enhancements and other polish
prior to final 3.0.0 release. Minor bug fixes may go in here if
non-controversial. It should be merged into main prior to the final
release.
2023-07-20 23:22:40 -04:00
585520d8d2 Only apply Textaera to Prompt 2023-07-21 13:17:27 +10:00
46801c076f Merge branch 'main' into release/invokeai-3-0-rc 2023-07-20 23:16:05 -04:00
9370572169 prettify startup messages 2023-07-20 22:45:35 -04:00
ace65325ff Update FoundModelsList.tsx (#3867)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-21 13:14:32 +12:00
e6d890888c Replace SlicedAttnProcessor with a patched version that chunks memory consumption to less than 4GB in each attention calculation pass 2023-07-21 04:08:49 +03:00
8e7f581065 Update FoundModelsList.tsx 2023-07-20 20:51:54 -04:00
98b2734240 Merge branch 'main' into nodepromptsize 2023-07-21 08:07:55 +10:00
7b428b5240 Make height smaller and allow width to change with node 2023-07-21 08:03:01 +10:00
85ef3f51e7 extra check for empty hftoken 2023-07-20 15:16:06 -04:00
ce08aa350c Allow controlnet passthrough for now 2023-07-20 14:14:04 -04:00
ba1a934297 Fix Lora typings 2023-07-20 14:02:23 -04:00
4e90376d11 Allow passing in of precision, use available providers if none provided 2023-07-20 13:15:45 -04:00
8fdc8a8da5 fix: No board name being displayed if it is empty (#3863)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Description

Fixes a bug where the board name is not displayed in the header if there
are no images in it.
2023-07-21 05:10:11 +12:00
52d56e96a5 fix: No board name being displayed if it is empty 2023-07-21 05:07:50 +12:00
c013fe5b5d Merge branch 'main' into release/invokeai-3-0-rc 2023-07-20 12:22:27 -04:00
ddf7ddc2c1 Add sdxl generation preview (#3862)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Description
Add progress preview for sdxl generation nodes
2023-07-20 12:21:57 -04:00
4a0774b260 Use scale from vae 2023-07-20 18:54:51 +03:00
17e401cb8c rebuild frontend 2023-07-20 11:47:04 -04:00
29a590cced Add sdxl generation preview 2023-07-20 18:45:54 +03:00
7deafa838b merge with main 2023-07-20 11:45:54 -04:00
20757d1c02 Add get_log_level and set_log_level operations to the app route (#3858)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated relevant documentation?
- [X] Yes (swagger)
- [ ] No


## Description

This adds new routes for getting and setting the command-line console
logging level.
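
A hypothetical FastAPI sketch of such a route pair (the path, logger name, and payload shape are illustrative, not InvokeAI's actual API):

```py
import logging
from fastapi import APIRouter

router = APIRouter(prefix="/api/v1/app/logging")

@router.get("/")
async def get_log_level() -> int:
    # Illustrative only: report the effective level of the app logger
    return logging.getLogger("InvokeAI").getEffectiveLevel()

@router.post("/")
async def set_log_level(level: int) -> int:
    # Illustrative only: `level` arrives as a query parameter here
    logging.getLogger("InvokeAI").setLevel(level)
    return level
```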
2023-07-20 11:36:47 -04:00
5134de7cfa Merge branch 'main' into lstein/logger-route 2023-07-20 11:29:48 -04:00
b1a6ba552b reinitialize models.yaml if corrupt or missing 2023-07-20 11:26:20 -04:00
cd21d2f2b6 fix(ui): fix no_board cache not updating
two areas marked TODO were not TODONE!
2023-07-20 23:50:14 +10:00
9dc28373d8 use brackets 2023-07-20 23:45:49 +10:00
ffe7d5785b if updating intermediate, don't add to gallery list cache 2023-07-20 23:45:49 +10:00
a2e2f0858d bump version number 2023-07-20 09:42:02 -04:00
f73c70ca96 feat: ControlNet Resize Mode (#3854)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes  Discussed with @hipsterusername yesterday
- [ ] No, because:

      
## Have you updated relevant documentation?
- [ ] Yes 
- [X] No, not yet (but the change to default ControlNet resizing doesn't
require any user documentation)


## Description
This PR adds resize modes (just_resize, crop_resize, fill_resize) to
InvokeAI's ControlNet node. The implementation is largely based on
lllyasviel's, which includes a high quality resizer specifically
intended to handle common ControlNet preprocessor outputs, such as
binary (black/white) images, grayscale images, and binary or grayscale
thin lines. Previously the InvokeAI ControlNet implementation only did a
simple resize with independent x/y scaling to match the noise latent.

### "just_resize" mode (the default setting)
With the new implementation, using the default "just_resize" mode,
ControlNet images are still resized with independent x/y scaling to
match the noise latent resolution, but with the high quality resizer. As
a result, images generated in InvokeAI now look much closer to
counterparts generated via sd-webui-controlnet. See example below. All
inference runs use prompt="old man", the same ControlNet canny edge
detection preprocessor, model, and control image, and identical other
parameters except for control_mode. The top row is the previous simple
resize implementation; the bottom row uses the new high quality resizer
and "just_resize" mode. Control_mode is: left="balanced", middle="more
prompt", right="more control". The high quality resize images are
identical (at least by eye) to output from sd-webui-controlnet with the
same settings.


![just_resize_simple_vs_just_resize_lvmin](https://github.com/invoke-ai/InvokeAI/assets/303100/5fe02121-616a-4531-b2a4-b423cc054b99)

## "crop_resize" and "fill_resize" modes
The other two resize modes are "crop_resize" and "fill_resize". Whereas
"just_resize" ignores any aspect ratio mismatch between the ControlNet
image and the noise latent, these other modes preserve the aspect ratio
of the ControlNet image. The "crop_resize" mode does this by cropping
the image, and the "fill_resize" option does this by expanding the image
(adding fill pixels). See example below. In this case all inference runs
are using prompt="old man", the ControlNet Midas depth detection
preprocessor and depth model, same control image of size 512x512,
control_mode="balanced", and identical other parameters except for
resize_mode and noise latent dimensions. For top row noise latent size
is 768x512, and for bottom row noise latent size is 512x768. Resize_mode
is: left="just_resize", middle="crop_resize", right="fill_resize"

![Screenshot from 2023-07-20
02-09-22](https://github.com/invoke-ai/InvokeAI/assets/303100/7b4df456-2a5e-4ec4-bce1-fafdba52f025)

## Are there any post deployment tasks we need to perform?
To use "just_resize" mode in linear UI, no post deployment work is
needed. The default is switched from old resizer to new high quality
resizer.

To use "just_resize", "crop_resize", and "fill_resize" modes in node UI,
no post deployment work is needed. There is also an additional option
"just_resize_simple" that uses old resizer, mainly left in for testing
and for anyone curious to see the difference.

To use "crop_resize" and "fill_resize" in linear UI, there will need to
be some work to incorporate choice of three modes in ControlNet UI
(probably best to not expose "just_resize_simple" in linear UI, it just
confuses things).
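
As a rough PIL analogy for the three modes (illustrative only; the PR's resizer is a higher-quality implementation based on lllyasviel's code):

```py
from PIL import Image, ImageOps

control = Image.open("control.png")          # e.g. 512x512
target = (768, 512)                          # noise latent resolution

just_resize = control.resize(target)         # independent x/y scaling
crop_resize = ImageOps.fit(control, target)  # keep aspect ratio, crop excess
fill_resize = ImageOps.pad(control, target)  # keep aspect ratio, add fill
```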
2023-07-21 01:31:52 +12:00
e2240feae4 fix: Chevron icon styling 2023-07-21 01:21:04 +12:00
e06348bfab fix: Expand chevron icon being too small 2023-07-21 01:14:19 +12:00
8fb970d436 fix: Use layout gap to control layout instead of margin 2023-07-21 01:07:00 +12:00
15256ed3a4 fix: Layout shift on the ControlNet Panel 2023-07-21 01:04:16 +12:00
89a15f78dd collapse all autoimport directories into a single folder 2023-07-20 09:01:49 -04:00
8fc20c837b Merge branch 'main' into feat/controlnet-resize-mode 2023-07-21 00:58:28 +12:00
8dfe196c4f feat: Add Image Count to Board Name 2023-07-20 22:56:52 +10:00
9e27fd9b90 feat(ui): color tweak on board 2023-07-20 22:56:52 +10:00
2771328853 feat(ui): reduce saturation by 8% for 1337 contrast 2023-07-20 22:56:52 +10:00
a481607d3f feat(ui): boards are only punch-you-in-the-face-purple if selected 2023-07-20 22:56:52 +10:00
1e3cebbf42 feat(ui): add useBoardTotal hook to get total items in board
actually not using it now, but it's there
2023-07-20 22:56:52 +10:00
d523556558 fix: Truncate board name if longer than 20 chars 2023-07-20 22:56:52 +10:00
da523fa32f fix: Editable text aligning left instead of inplace. 2023-07-20 22:56:52 +10:00
ab9b5f3b95 fix: Possible fix to the name plate getting displaced 2023-07-20 22:56:52 +10:00
f32bd5dd10 fix: Minor color tweaks to the name plate on boards 2023-07-20 22:56:52 +10:00
190ba5af59 feat(ui): boards styling 2023-07-20 22:56:52 +10:00
cb29ac63a8 prevent crashes on quick install when hftoken not defined 2023-07-20 08:38:37 -04:00
603989dc0d added get_log_level and set_log_level operations to the app route 2023-07-20 08:33:01 -04:00
2872ae2aab fix: Adjust layout of Resize Mode dropdown
Moved it next to ControlMode to make it more compact
2023-07-20 22:53:45 +12:00
b7cdda0781 feat: Add ControlNet Resize Mode to Linear UI 2023-07-20 22:48:35 +12:00
267940a77e Merge branch 'main' into feat/controlnet-resize-mode 2023-07-20 22:24:11 +12:00
f73b45bcb5 Feat: Change Input to Textbox 2023-07-20 19:11:18 +10:00
8d77c5ca96 feat: Add Sync Models (#3850)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Description

This changes the "sync" route from a GET to a POST method, in keeping
with the Representational State Transfer (REST) protocol.
2023-07-20 20:26:10 +12:00
0795d8764f Merge branch 'main' into fix/post-model-sync 2023-07-20 20:16:14 +12:00
2db56306e4 Merge branch 'feat/controlnet-resize-mode' of github.com:invoke-ai/InvokeAI into feat/controlnet-resize-mode 2023-07-20 00:45:29 -07:00
70fec9ddab Added pixel_perfect_resolution() method to controlnet_utils.py, but not using yet. To be usable this will likely require modification of ControlNet preprocessors 2023-07-20 00:41:49 -07:00
909f538fb5 Switching over to controlnet_utils prepare_control_image(), with added resize_mode. 2023-07-20 00:41:49 -07:00
bab8b6d240 Removed diffusers_pipeline prepare_control_image() -- replaced with controlnet_utils.prepare_control_image()
Added resize_mode to ControlNetData class.
2023-07-20 00:41:49 -07:00
f2f49bd8d0 Added resize_mode param to ControlNet node 2023-07-20 00:41:49 -07:00
b8e0810ed1 Added revised prepare_control_image() that leverages lvmin high quality resizing 2023-07-20 00:41:49 -07:00
6cb9167a1b Added controlnet_utils.py with code from lvmin for high quality resize, crop+resize, fill+resize 2023-07-20 00:41:49 -07:00
09dfcc4277 Added pixel_perfect_resolution() method to controlnet_utils.py, but not using yet. To be usable this will likely require modification of ControlNet preprocessors 2023-07-20 00:38:20 -07:00
82eb1f1075 feat: Add Sync Models to UI 2023-07-20 18:50:43 +12:00
187cf906fa ui: enhance intermediates clear, enhance board auto-add (#3851)
* feat(ui): enhance clear intermediates feature

- retrieve the # of intermediates using a new query (just uses list images endpoint w/ limit of 0)
- display the count in the UI
- add types for clearIntermediates mutation
- minor styling and verbiage changes

* feat(ui): remove unused settings option for guides

* feat(ui): use solid badge variant

consistent with the rest of the usage of badges

* feat(ui): update board ctx menu, add board auto-add

- add context menu to system boards - the only option is select board. did this so that you don't think it's broken when you click it
- add auto-add board. you can right click a user board to enable it for auto-add, or use the gallery settings popover to select it. the invoke button has a tooltip on a short delay to remind you that you have auto-add enabled
- made useBoardName hook, provide it a board id and it gets you the board name
- removed `boardIdToAdTo` state & logic, updated workflows to auto-switch and auto-add on image generation

* fix(ui): clear controlnet when clearing intermediates

* feat: Make Add Board icon a button

* feat(db, api): clear intermediates now clears all of them

* feat(ui): make reset webui text subtext style

* feat(ui): board name change submits on blur

---------

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
2023-07-20 17:44:22 +12:00
82554b25fe Updated documentation (#3832)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: documentation update that needs review from the team
before going live

      
## Description

I updated the contribution guidelines, adding more structure and a
getting started guide. Also re-organized the tabs to be in the order of
most commonly used.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
run `mkdocs serve` to check it out


## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-20 14:27:50 +10:00
039091c5d4 Updated frontend docs to be more accurate 2023-07-20 13:16:55 +10:00
d76bf4444c Update invokeai/app/api/routers/models.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-07-19 22:46:49 -04:00
82496fee14 Merge branch 'main' into main 2023-07-19 22:43:18 -04:00
c2b99e7545 Switching over to controlnet_utils prepare_control_image(), with added resize_mode. 2023-07-19 19:26:49 -07:00
e918168f7a Removed diffusers_pipeline prepare_control_image() -- replaced with controlnet_utils.prepare_control_image()
Added resize_mode to ControlNetData class.
2023-07-19 19:21:17 -07:00
6e36c275c9 feat: Add Setting Switch Component (#3847) 2023-07-20 14:17:51 +12:00
6affe42310 Added resize_mode param to ControlNet node 2023-07-19 19:17:24 -07:00
170bbd7da3 change GET to POST method for model synchronization route 2023-07-19 22:16:56 -04:00
f6d5e93020 fix: Model List not scrolling through checkpoints (#3849) 2023-07-20 14:16:32 +12:00
f2515d9480 fix v1-finetune.yaml is not in the subpath of "" (#3848)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2023-07-20 14:13:56 +12:00
4d8f17c69d fix v1-finetune.yaml is not in the subpath of "" 2023-07-19 22:06:55 -04:00
3a987b2e72 Added revised prepare_control_image() that leverages lvmin high quality resizing 2023-07-19 19:01:14 -07:00
4e3f58552c Added controlnet_utils.py with code from lvmin for high quality resize, crop+resize, fill+resize 2023-07-19 18:52:30 -07:00
77d9657980 don't write root into invokeai.yaml 2023-07-19 21:12:52 -04:00
12cae33dcd fix inpaint model detection (#3843)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2023-07-20 12:57:14 +12:00
1e5310793c Updated PR template 2023-07-20 09:46:05 +10:00
a0b5930340 Updated Code of Conduct URL 2023-07-20 09:35:09 +10:00
53ed252168 Fixed typos in docs 2023-07-20 09:34:16 +10:00
a683379dda Updated docs to be more accurate based on Lincoln's feedback 2023-07-20 09:28:21 +10:00
899aa1d251 Merge branch 'invoke-ai:main' into main 2023-07-20 09:22:26 +10:00
23f4a4ea1a Fix dist 2023-07-19 18:27:51 -04:00
6aab8f16ce Fix issue from merge 2023-07-19 18:27:15 -04:00
5f940bf3b3 default precision to "auto" 2023-07-19 18:23:00 -04:00
8f61413865 Setup dist folder 2023-07-19 17:49:27 -04:00
43b6a077fb io binding seems to be massively resource intensive compared to session.run 2023-07-19 17:42:28 -04:00
1cd814cba0 fix readme in preparation for RC 2023-07-19 14:57:26 -04:00
a1251c8e04 fix inpaint model detection 2023-07-19 13:30:00 -04:00
509514f11d feat(api): display warning when port is in use 2023-07-19 13:29:31 -04:00
c557402dbb feat(api): use next available port
Resolves #3515

@ebr @brandonrising can't imagine this would cause issues but just FYI
2023-07-19 13:29:31 -04:00
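A minimal sketch of what "use the next available port" typically means: try to bind the configured port and walk upward until one is free. The names here are illustrative, not the actual InvokeAI implementation.

```python
# Illustrative port-probing helper, assuming a preferred default port.
import socket

def find_port(host: str = "127.0.0.1", preferred: int = 9090, tries: int = 100) -> int:
    for port in range(preferred, preferred + tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind((host, port))  # succeeds only if the port is free
            except OSError:
                continue  # port in use; warn and try the next one
            return port
    raise RuntimeError(f"no free port found in range {preferred}-{preferred + tries - 1}")
```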
495df9fd1b bump version to 3.0.0rc1 2023-07-19 12:36:39 -04:00
3db9a07eea Beta branch containing documentation enhancements, minor bug fix (#3831)
The HF access token was not being saved by the configure script. This
fixes that.
2023-07-19 12:22:21 -04:00
9fd7eb2e0e Merge branch 'main' into release/invokeai-3-0-beta 2023-07-19 12:18:56 -04:00
9263f1090e Changing ImageToLatentsInvocation node to default to detected precision (#3838)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Description
ImageToLatentsInvocation defaulted to float16 rather than detecting the
requested precision from the configs.
This caused an exception to be raised on systems that don't support
float16 (e.g. CPU).


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-19 12:17:59 -04:00
135ab0a3e8 Merge branch 'release/invokeai-3-0-beta' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-beta 2023-07-19 12:16:56 -04:00
b9b89ad210 additional tweaks to controlnet documentation 2023-07-19 12:16:16 -04:00
72c19987d5 discuss issues with adding controlnet models 2023-07-19 12:16:03 -04:00
8439e30798 Merge branch 'main' into release/invokeai-3-0-beta 2023-07-19 12:09:32 -04:00
84d6578855 Merge branch 'main' into bugfix/ImageToLatentsInvocation_fp32_precision 2023-07-19 12:08:58 -04:00
0073fc8619 add toggle for isNodesEnabled in settings (#3839)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-19 16:08:28 +00:00
2fbc6dc315 Merge branch 'main' into bugfix/ImageToLatentsInvocation_fp32_precision 2023-07-19 12:08:04 -04:00
be95fd753e add missing screenshot 2023-07-19 12:07:07 -04:00
0724eb9e0a feat(ui): another go at gallery (#3791)
* feat(ui): migrate listImages to RTK query using createEntityAdapter

- see comments in `endpoints/images.ts` for explanation of the caching
- so far, only manually updating `all` images when new image is generated. no other manual cache updates are implemented, but will be needed.
- fixed some weirdness with loading state components (like the spinners in gallery)
- added `useThumbnailFallback` for `IAIDndImage`, this displays the tiny webp thumbnail while the full-size images load
- comment out some old thunk related stuff in gallerySlice, which is no longer needed

* feat(ui): add manual cache updates for board changes (wip)

- update RTK Query caches when adding/removing single image to/from board
- work more on migrating all image-related operations to RTK Query

* update AddImagesToBoardContext so that it works when user uses context menu + modal

* handle case where no image is selected

* get assets working for main list and boards - dnd only

* feat(ui): migrate image uploads to RTK Query

- minor refactor of `ImageUploader` and `useImageUploadButton` hooks, simplify some logic
- style filesystem upload overlay to match existing UI
- replace all old `imageUploaded` thunks with `uploadImage` RTK Query calls, update associated logic including canvas related uploads
- simplify `PostUploadAction`s that only need to display user input

* feat(ui): remove `receivedPageOfImages` thunks

* feat(ui): remove `receivedImageUrls` thunk

* feat(ui): finish removing all images thunks

stuff now broken:
- image usage
- delete board images
- on first load, no image selected

* feat(ui): simplify `updateImage` cache manipulation

- we don't actually ever change categories, so we can remove a lot of logic

* feat(ui): simplify canvas autosave

- instead of using a network request to set the canvas generation as not intermediate, we can just do that in the graph

* feat(ui): simplify & handle edge cases in cache updates

* feat(db, api): support `board_id='none'` for `get_many` images queries

This allows us to get all images that are not on a board.

* chore(ui): regen types

* feat(ui): add `All Assets`, `No Board` boards

Restructure boards:
- `all images` is all images
- `all assets` is all assets
- `no board` is all images/assets without a board set
- user boards may have images and assets

Update caching logic
- much simpler without every board having sub-views of images and assets
- update drag and drop operations for all possible interactions

* chore(ui): regen types

* feat(ui): move download to top of context menu

* feat(ui): improve drop overlay styles

* fix(ui): fix image not selected on first load

- listen for first load of all images board, then select the first image

* feat(ui): refactor board deletion

api changes:
- add route to list all image names for a board. this is required to handle board + image deletion. we need to know every image in the board to determine the image usage across the app. this is fetched only when the delete board and images modal is opened so it's as efficient as it can be.
- update the delete board route to respond with a list of deleted `board_images` and `images`, as image names. this is needed to perform accurate clientside state & cache updates after deleting.

db changes:
- remove unused `board_images` service method to get paginated images dtos for a board. this is now done thru the list images endpoint & images service. needs a small logic change on `images.delete_images_on_board`

ui changes:
- simplify the delete board modal - no context, just minor prop drilling. this is feasible for boards only because the components that need to trigger and manipulate the modal are very close together in the tree
- add cache updates for `deleteBoard` & `deleteBoardAndImages` mutations
- the only thing we cannot do directly is on `deleteBoardAndImages`, update the `No Board` board. we'd need to insert image dtos that we may not have loaded. instead, i am just invalidating the tags for that `listImages` cache. so when you `deleteBoardAndImages`, the `No Board` will re-fetch the initial image limit. i think this is more efficient than e.g. fetching all image dtos to insert then inserting them.
- handle image usage for `deleteBoardAndImages`
- update all (i think/hope) the little bits and pieces in the UI to accommodate these changes

* fix(ui): fix board selection logic

* feat(ui): add delete board modal loading state

* fix(ui): use thumbnails for board cover images

* fix(ui): fix race condition with board selection

when selecting a board that doesn't have any images loaded, we need to wait until the images have loaded before selecting the first image.

this logic is debounced to ~1000ms.

* feat(ui): name 'No Board' correctly, change icon

* fix(ui): do not cache listAllImageNames query

if we cache it, we can end up with stale image usage during deletion.

we could of course manually update the cache as we are doing elsewhere. but because this is a relatively infrequent network request, i'd like to trade increased cache mgmt complexity here for increased resource usage.

* feat(ui): reduce drag preview opacity, remove border

* fix(ui): fix incorrect queryArg used in `deleteImage` and `updateImage` cache updates

* fix(ui): fix doubled open in new tab

* fix(ui): fix new generations not getting added to 'No Board'

* fix(ui): fix board id not changing on new image when autosave enabled

* fix(ui): context menu when selection is 0

need to revise how context menu is triggered later, when we approach multi select

* fix(ui): fix deleting does not update counts for all images and all assets

* fix(ui): fix all assets board name in boards list collapse button

* fix(ui): ensure we never go under 0 for total board count

* fix(ui): fix text overflow on board names

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-19 12:06:38 -04:00
6a4440e52b Merge branch 'main' into bugfix/ImageToLatentsInvocation_fp32_precision 2023-07-19 11:56:07 -04:00
07c48b2fd1 Moving detected precision to DEFAULT_PRECISION constant 2023-07-19 11:55:37 -04:00
055f5b2d4b clear canvas alongside intermediates 2023-07-19 11:39:24 -04:00
fface339ae Same fix for ImageToLatentsInvocation 2023-07-19 11:38:13 -04:00
2ec9dab595 Changing ImageToLatentsInvocation node to default to detected precision instead of fp16 2023-07-19 11:16:00 -04:00
9f00e055ac Maryhipp/clear intermediates (#3820)
* new route to clear intermediates

* UI to clear intermediates from settings modal

* cleanup

* PR feedback

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-19 10:55:29 -04:00
aca5c6de9a [WIP] Load text_model.embeddings.position_ids outside state_dict (#3829)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
      
## Description
In transformers 4.31.0, `text_model.embeddings.position_ids` is no longer
part of the state_dict.
The fix is untested, as I can't run it right now, but it should be correct.
We also need to check how transformers 4.30.2 works with this fix.

## Related Tickets & Documents


8e5d1619b3 (diff-7f53db5caa73a4cbeb0dca3b396e3d52f30f025b8c48d4daf51eb7abb6e2b949R191)

https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.register_buffer

## QA Instructions, Screenshots, Recordings

```
  File "C:\Users\artis\Documents\invokeai\.venv\lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 844, in convert_ldm_clip_checkpoint
    text_model.load_state_dict(text_model_dict)
  File "C:\Users\artis\Documents\invokeai\.venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
        Unexpected key(s) in state_dict: "text_model.embeddings.position_ids".
```
2023-07-19 09:58:02 -04:00
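A sketch of the workaround: in transformers >= 4.31.0, `position_ids` became a non-persistent buffer and is absent from the model's own state_dict, so older checkpoints that still carry the key trigger the "Unexpected key(s)" error shown above. Dropping the key only when the installed transformers no longer expects it keeps both 4.30.2 and 4.31.0 working. The helper name is illustrative.

```python
from transformers import CLIPTextModel

def load_clip_state_dict(text_model: CLIPTextModel, text_model_dict: dict) -> None:
    key = "text_model.embeddings.position_ids"
    # transformers >= 4.31 no longer keeps position_ids in the state_dict;
    # remove it from older checkpoints so load_state_dict doesn't reject it
    if key not in text_model.state_dict():
        text_model_dict.pop(key, None)
    text_model.load_state_dict(text_model_dict)
```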
c291b82b94 Added contribution disclaimer 2023-07-19 23:56:38 +10:00
f9320475fd allow upgrade to transformers~=4.31.0 2023-07-19 09:46:21 -04:00
9c3a556813 Merge branch 'main' into fix/transformers_4_31_0 2023-07-19 09:35:52 -04:00
0b6ef7eb7d make the convert VAE available to model manager for use in UI 2023-07-19 09:05:24 -04:00
6ba48af0a9 Added community node documentation 2023-07-19 22:04:17 +10:00
40fffec0b6 Merge branch 'invoke-ai:main' into main 2023-07-19 21:31:24 +10:00
23f0c7035c Tweaks to Image Progress Node (#3833)
* Update nodesSlice.ts

* Update ProgressImageNode.tsx

* remove unused code

* Remove Fixed Ratio

It was causing issues

* fix: Progress Image Node Size

---------

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
2023-07-19 20:54:50 +12:00
94787b7251 Missing def choose_torch_device (#3834)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:

      
## Description
Fix for
 ```
File "/home/invokeuser/InvokeAI/invokeai/app/services/processor.py",
line 70, in __process
    outputs = invocation.invoke(
File "/home/invokeuser/InvokeAI/invokeai/app/invocations/latent.py",
line 660, in invoke
    device=choose_torch_device()
NameError: name 'choose_torch_device' is not defined
```

when using scale latents node

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-19 18:53:23 +12:00
d8db618de0 import choose_torch_device from ...backend.util.devices 2023-07-19 16:43:02 +10:00
5ae2fb0d2b more doc improvements 2023-07-19 01:49:28 -04:00
5b1d7a2367 reorganized intro to web walkthru 2023-07-19 01:47:23 -04:00
f3ae9c513e updated web walkthrough 2023-07-19 01:42:52 -04:00
ff74370eda • Updated best practices
• Updated index with new contribution guide link
2023-07-19 15:39:29 +10:00
19d67b29e7 Remove not needed text 2023-07-19 15:20:40 +10:00
52e7e0b31b Missing def choose_torch_device 2023-07-19 15:15:55 +10:00
446d87516a * Updated contributiion guide
* Updated nav to be in new order prioritizing more commonuly used tabs
* Added set nav in mkdocs.yaml
2023-07-19 14:34:03 +10:00
e8299d0abb Comment out erroneously removed del statement, comment out opt tests 2023-07-18 23:23:34 -04:00
a28ab654ef Setup dist folder 2023-07-18 23:18:46 -04:00
8699fd7050 Fix invoke UI graphs for onnx 2023-07-18 23:16:51 -04:00
2e7fc055c4 Support both pre and post 4.31.0 transformers 2023-07-19 06:15:17 +03:00
9e65470ada Setup dist 2023-07-18 23:07:31 -04:00
f4e52fafac Fix as part of merging main in 2023-07-18 23:05:33 -04:00
0f7e329e76 restore access token-saving code 2023-07-18 22:58:56 -04:00
ee7b36cea5 Merge branch 'main' into onnx-testing 2023-07-18 22:56:41 -04:00
487455ef2e Add model_type to the model state object 2023-07-18 22:40:27 -04:00
a690cca5b5 make convert work with both 4.30.2 and 4.31.0 2023-07-18 22:18:13 -04:00
f29bafd6ec fix Object of type PosixPath is not JSON serializable error 2023-07-18 22:10:12 -04:00
632346b2e2 Release/invokeai 3 0 beta - mkdocs fix (#3821)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Description
This PR points mkdocs to the `main` branch again, so that the 3.0.0
documentation appears in gh-pages.

It also makes a minor tweak to the tooltip for model imports, so that
users know that URLs are accepted.

Also rebuilds frontend for use in beta testing.
2023-07-18 21:55:38 -04:00
e201ad2f51 Switch to io_binding for run, testing different session options 2023-07-18 21:54:54 -04:00
0f18898865 Merge branch 'release/invokeai-3-0-beta' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-beta 2023-07-18 21:44:09 -04:00
700131fab2 Pin to transformers 4.30.2
bump version
2023-07-18 21:43:40 -04:00
634d6bb8a6 bump version 2023-07-18 21:42:24 -04:00
0aa7193d3b Load text_model.embeddings.position_ids outside state_dict 2023-07-19 04:18:43 +03:00
2fbf245c3d Merge branch 'main' into release/invokeai-3-0-beta
-- this adds the upscaling support
2023-07-18 21:17:15 -04:00
39c14eb2ac fix pretrained model download to work with xl 2023-07-18 21:10:33 -04:00
c89fa4b635 install, nodes, ui: restore ad-hoc upscaling (#3800)
I've opted to leave out any additional upscaling parameters like scale
and denoising strength, which, from my review of the ESRGAN code, don't
do much:
- scale just resizes the image using CV2 after the AI upscaling, so
that's not particularly useful
- denoising strength is only valid for one class of model, which we are
no longer supporting

If there is demand, we can implement output size/scale UI and handle it
by passing the upscaled image to a resize/scale node.

I also understand we previously had some functionality to blend the
upscaled image with the original. If that is desired, we would need to
implement that as a node that we can pass the upscaled image to.

Demo:


https://github.com/invoke-ai/InvokeAI/assets/4822129/32eee615-62a1-40ce-a183-87e7d935fbf1

---

[feat(nodes): add RealESRGAN_x2plus.pth, update upscale
nodes](dbc256c5b4)

- add `RealESRGAN_x2plus.pth` model to installer @lstein 
- add `RealESRGAN_x2plus.pth` to `realesrgan` node
- rename `RealESRGAN` to `ESRGAN` in nodes
- make `scale_factor` optional in `img_scale` node

[feat(ui): restore ad-hoc
upscaling](b3fd29e5ad)

- remove face restoration entirely
- add dropdown for ESRGAN model select
- add ad-hoc upscaling graph and workflow
2023-07-18 21:00:17 -04:00
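For context, a sketch of why the dropped scale parameter adds little: RealESRGAN produces a fixed native 2x/4x output, and any other output size is just a conventional resize applied afterward, which a separate resize node handles equally well. Illustrative only; `upsampler` stands in for a RealESRGANer-style object with an `enhance()` method.

```python
import cv2

def upscale_with_scale(image, upsampler, target_scale: float):
    # the model's own upscaling: fixed 2x or 4x depending on the weights
    output, _ = upsampler.enhance(image)
    native_scale = output.shape[0] / image.shape[0]
    if target_scale != native_scale:
        # anything beyond the native factor is plain CV2 resizing,
        # i.e. not "AI upscaling" at all
        h = round(image.shape[0] * target_scale)
        w = round(image.shape[1] * target_scale)
        output = cv2.resize(output, (w, h), interpolation=cv2.INTER_LANCZOS4)
    return output
```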
e943913f58 Merge branch 'main' into release/invokeai-3-0-beta 2023-07-18 20:42:10 -04:00
4ada094c5c tests(nodes): fix tests due to referencing renamed node 2023-07-19 09:45:26 +10:00
893e199677 Merge branch 'main' into feat/ui/upscale 2023-07-18 19:18:55 -04:00
3c5a0c95b3 add option to hide version on logo (#3825)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-19 11:03:09 +12:00
71a07ee5a7 Merge branch 'main' into maryhipp/optional-version 2023-07-19 11:02:24 +12:00
2a0a765ec4 Cleanup vram after models offloading (#3826)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Description
There is no VRAM cleanup on model offload, which leads to VRAM filling up
and slow generation speed.
2023-07-19 10:17:17 +12:00
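A sketch of the general pattern for this kind of fix: after moving a model off the GPU, drop lingering references and release torch's cached blocks. Illustrative; the actual change lives in the model cache's offload path.

```python
import gc
import torch

def offload_model(model: torch.nn.Module) -> None:
    model.to("cpu")
    gc.collect()  # drop Python-side references to GPU tensors first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached VRAM to the driver
```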
ec08151009 add correct requirements for installing SDXL models 2023-07-18 18:15:37 -04:00
186e98da5e Merge branch 'main' into fix/mem_cleanup 2023-07-19 10:10:32 +12:00
dea9a5da7a Avoid crash if unable to modify the model config file (#3824)
* fix whitespace; remove invisible characters
* log error and proceed if unable to modify the model config
2023-07-18 16:33:19 -04:00
bda0000acd Cleanup vram after models offloading, tweak to cleanup local variable references on ram offload 2023-07-18 23:21:18 +03:00
4b678f2416 add toggle to not show version on logo 2023-07-18 16:16:35 -04:00
43fbbfb848 revert python version requirement 2023-07-18 16:15:47 -04:00
b71bcab691 change README screenshot 2023-07-18 15:31:41 -04:00
869f418b03 Setup onnx on linear text2image 2023-07-18 14:27:54 -04:00
c364c85915 Merge branch 'main' into release/invokeai-3-0-beta 2023-07-18 13:27:15 -04:00
3773bfbc74 add yarn.lock back in 2023-07-18 13:05:53 -04:00
949437b4f0 Merge branch 'release/invokeai-3-0-beta' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-beta 2023-07-18 12:45:57 -04:00
efcb3a9d08 documentation fixes 2023-07-18 12:45:47 -04:00
35d5ef9118 Emit step completions 2023-07-18 12:35:07 -04:00
b4eeaaa63c Rename clip1 to clip (#3822)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
2023-07-19 04:09:38 +12:00
54bd7c7f04 Merge branch 'main' into release/invokeai-3-0-beta 2023-07-19 03:59:10 +12:00
3240f98f4e Rename clip1 to clip 2023-07-18 18:58:17 +03:00
3d4cef0099 feat: String Param Node + titles and tags for all Nodes (#3819)
## What type of PR is this? (check all applicable)

- [x] Feature
- [x] Optimization


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-19 03:55:34 +12:00
3fa7170566 tell user that they can import a model URL in the Import Models UI 2023-07-18 11:31:31 -04:00
187d7c1cab update mkdocs-material to pull from main 2023-07-18 11:11:37 -04:00
7fde1f93ea fix: Missing context on string param node 2023-07-19 02:49:09 +12:00
9685760fac Merge branch 'main' into release/invokeai-3-0-beta 2023-07-18 10:41:57 -04:00
3f1d5000c0 Merge branch 'main' into nodes-stuff 2023-07-19 02:37:50 +12:00
ae5cb63f3c VRAM Optimizations, sdxl on 8gb (#3818)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [x] Optimization
- [ ] Documentation Update

      
## Description

Various fixes to consume less memory and make SDXL run on 8 GB of VRAM.
Most changes are due to moving all output tensors to the CPU, so that
cached tensors do not consume VRAM.
2023-07-19 02:36:58 +12:00
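The core pattern here is small: node outputs that get cached between graph steps are detached and copied off the GPU, so the cache holds CPU tensors instead of pinning VRAM. A minimal sketch, with an illustrative function name:

```python
import torch

def to_cached_output(latents: torch.Tensor) -> torch.Tensor:
    # detach so no autograd graph (e.g. from the text encoder) stays alive,
    # then move to CPU so the cached copy costs RAM rather than VRAM
    return latents.detach().to("cpu")
```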
0c18c5d603 feat: Add titles and tags to all Nodes 2023-07-19 02:26:45 +12:00
7d49c727a0 feat: Add String Param & types to other params 2023-07-19 02:26:33 +12:00
889b77d3d6 Merge branch 'main' into save_vram 2023-07-18 16:55:48 +03:00
fbbc4b3f69 Fixes 2023-07-18 16:51:16 +03:00
bc11296a5e Disable lazy offloading on disabled vram cache, move resulted tensors to cpu(to not stack vram tensors in cache), fix - text encoder not freed(detach) 2023-07-18 16:20:25 +03:00
4e13d3408f fix(nodes): fix inpaint cond logic for new compel version (#3816)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Description

Fixes a bug in the `inpaint` node introduced by the new version of
`compel`. The other nodes were updated, but this one was missed. Fixed
by @StAlKeR7779 ty

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue # discord reports
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : n/a, bugfix
2023-07-19 00:56:29 +12:00
c19d48abd0 fix(nodes): fix inpaint cond logic for new compel version
thanks @StAlKeR7779
2023-07-18 22:39:34 +10:00
b0fb4950ed rebuild front end 2023-07-18 08:12:41 -04:00
42c440c73f Merge branch 'main' into feat/ui/upscale 2023-07-18 22:08:02 +10:00
8bc1fe38b3 Release - invokeai 3 0 beta (#3814)
This contains minor fixes to the beta as well as the version bump to
3.0.0.

Fixes include:
- Warning the user when the installer window size is inadequate for the TUI.
- Selection of the most frequently downloaded controlnet models for
default installation.
 - Adding the LowRA LoRA for dark image enhancement
 - Documentation
2023-07-18 08:07:09 -04:00
65df821233 Merge branch 'main' into release/invokeai-3-0-beta 2023-07-18 08:04:59 -04:00
68f2fb2601 3.0 Pre-Release Polish -- Bug Fixes / Style Fixes / Misc (#3812)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Description

Making some final style fixes before we push the next 3.0 version
tomorrow.

- Fixed light mode colors in Settings Modal.
- Double checked other light mode colors. Nothing seems off.
- Added Base Model badge to the model list item. Makes it visually
better and also serves as a quick glance feature for the user.
- Some minor styling updates to the Node Editor.
- Fixed hotkeys 'G' and 'O', 'Shift+G' and 'Shift+O' used to toggle the
panels not resizing canvas. #3780
- Fixed hotkey 'N' not working for Snap To Grid on Canvas.
- Fixed brush opacity hotkeys not working.
- Cleaned up hotkeys modal of hotkeys that are no longer used.
- Updated compel requirement to `2.0.0`


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #3780

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-19 00:04:57 +12:00
f9459d650e update version to b7 2023-07-18 08:01:14 -04:00
bd4eaa455a fix: Update text to Badge in ModelListItem 2023-07-18 23:58:07 +12:00
b59784e521 Merge branch 'style-fixes' of https://github.com/blessedcoolant/InvokeAI into style-fixes 2023-07-18 23:52:18 +12:00
769df47863 Merge branch 'main' into style-fixes 2023-07-18 23:50:48 +12:00
1cab89fe8c Merge branch 'main' into style-fixes 2023-07-18 23:47:05 +12:00
e31b2a6ff4 feat(ui): hide sdxl from linear UI (#3815)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Description

hides sdxl models from linear ui model select. just a hold-me-over

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : n/a

## [optional] Are there any post deployment tasks we need to perform?
2023-07-18 23:46:37 +12:00
1c1a72f4c4 feat(ui): hide sdxl from linear UI 2023-07-18 21:44:24 +10:00
5ac6076944 bump version; add LowRA LoRA as recommended 2023-07-18 07:04:57 -04:00
9c3c393b84 merge with main 2023-07-18 07:00:55 -04:00
112937f1f8 reqs: Update compel to 2.0.0 2023-07-18 22:44:36 +12:00
5d635c7221 cleanup: Remove console hotkey from modal (no console anymore) 2023-07-18 22:27:36 +12:00
e6bfc382a5 cleanup: Removed unused hotkeys from hotkeys modal 2023-07-18 22:25:26 +12:00
f970e3792f fix: Snap to grid hotkey not working 2023-07-18 22:20:45 +12:00
3ffca5490e fix: Brush opacity hotkeys not working 2023-07-18 22:20:28 +12:00
f803d5cf1e fix: Shift O and Shift G not resizing the canvas correctly 2023-07-18 21:00:43 +12:00
ab2343da51 fix: Hotkeys 'g' and 'o' not resizing the canvas 2023-07-18 20:51:08 +12:00
4975b1a704 style: Minor updates to the visual look of the nodes 2023-07-18 20:35:20 +12:00
e1b756658a style: Minor update to Add Node Menu
So there's clear differentiation between the node title and desc
2023-07-18 20:34:58 +12:00
d17450bbe6 feat: Add base model label to Model Item 2023-07-18 20:00:22 +12:00
64d676219b fix: Settings Modal colors in Light Mode 2023-07-18 19:49:33 +12:00
416afd2781 chore(ui): regen types 2023-07-18 15:04:43 +10:00
afa84a149c feat(ui): restore ad-hoc upscaling
- remove face restoration entirely
- add dropdown for ESRGAN model select
- add ad-hoc upscaling graph and workflow
2023-07-18 14:57:47 +10:00
be659364c2 chore(ui): regen types 2023-07-18 14:55:39 +10:00
56098f370c feat(nodes): add RealESRGAN_x2plus.pth, update upscale nodes
- add `RealESRGAN_x2plus.pth` model to installer
- add `RealESRGAN_x2plus.pth` to `realesrgan` node
- rename `RealESRGAN` to `ESRGAN` in nodes
- make `scale_factor` optional in `img_scale` node
2023-07-18 14:55:18 +10:00
99383c2701 Add Toggle for Minimap and Tooltips (#3809)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:
If it's not useful, they do not have to use it 😄 
      
## Description
While I was still in viewportcontrols.tsx, I
added an option to toggle off the minimap, with the default being on (true),
and added tooltips to the buttons in viewportcontrols.tsx

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-18 16:26:11 +12:00
6e40b543cd Merge branch 'main' into minimapcontrol 2023-07-18 16:25:49 +12:00
f287c0174b Support SDXL models (#3714)
This is a WIP to add SDXL support.

Tasks:

- [x] SDXL model loading support
- [x] SDXL model installation
- [x] SDXL model loader
- [x] SDXL base invocations for text2latent and latent2latent
- [ ] SDXL refiner invocations for text2latent and latent2latent
- [x] Compel support / pooled embeddings
- [ ] Linear UI graph for SDXL
- [ ] Documentation
2023-07-18 16:21:27 +12:00
c955c13b6f Merge branch 'sdxl-support' of github.com:invoke-ai/InvokeAI into sdxl-support 2023-07-17 23:49:48 -04:00
ef31837167 fix caption on sdxl raw prompt 2023-07-17 23:49:23 -04:00
3d1ad86e8a chore: Clean Schema before final merge 2023-07-18 15:18:31 +12:00
b08ad28daa fix: typo in logger statement (import_model) 2023-07-18 15:17:52 +12:00
6c03d9f8f2 Spelling mistake 2023-07-18 13:13:31 +10:00
9e01a13d63 Add translation entries to right file 2023-07-18 13:09:26 +10:00
73eeef34c4 Merge branch 'sdxl-support' of github.com:invoke-ai/InvokeAI into sdxl-support 2023-07-17 23:08:48 -04:00
1353bf98b3 add specific exception for model probe failures 2023-07-17 23:08:39 -04:00
e74eac5c91 revert en.json 2023-07-18 13:08:31 +10:00
47617b8f63 Spelling Mistake 2023-07-18 12:58:42 +10:00
9c2a2b313e Add entries for the viewportcontrols tool tips 2023-07-18 12:58:00 +10:00
32662c5ee8 Add tool tips 2023-07-18 12:56:34 +10:00
a61540859e Merge branch 'sdxl-support' of https://github.com/invoke-ai/InvokeAI into sdxl-support 2023-07-18 14:37:39 +12:00
c16325a244 feat: Disable convert button on SDXL and Refiner Checkpoints 2023-07-18 14:37:20 +12:00
7221a238b3 fix: Fix Add Scan Auto Checkpoint logic 2023-07-18 14:36:56 +12:00
af1c1ab51f importing an unrecognized model now gives "Unsupported Media Type" error 2023-07-17 22:33:05 -04:00
e7443867f6 Merge branch 'sdxl-support' of github.com:invoke-ai/InvokeAI into sdxl-support 2023-07-17 22:21:20 -04:00
025cda3815 fix 424 error on model import 2023-07-17 22:21:11 -04:00
84275a3f12 Merge branch 'main' into sdxl-support 2023-07-18 14:17:09 +12:00
c5b5195f40 fix: Model Manager scan Auto Add not detecting checkpoint correctly (#3810)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-18 14:16:30 +12:00
d661bf832d Merge branch 'main' into mm-fix 2023-07-18 14:15:50 +12:00
d45ff7e100 fix: Model Manager scan Auto Add not detecting checkpoint correctly 2023-07-18 14:14:44 +12:00
9dbffadc6e Update nodesSlice.ts 2023-07-18 12:11:13 +10:00
11882173e3 Update ViewportControls.tsx 2023-07-18 12:10:57 +10:00
990f34aa15 Update MinimapPanel.tsx 2023-07-18 12:10:42 +10:00
f7de000e79 Update LOCAL_DEVELOPMENT.md (#3808)
fix json formatting to not have big red comment blocks

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: simple docs fix

      
## Description

Fix LOCAL_DEVELOPMENT.md json comment highlighting

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue # n/a
- Closes # n/a

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : simple docs change
2023-07-18 14:08:23 +12:00
04c0700762 Merge branch 'main' into psychedelicious-patch-1 2023-07-18 14:07:58 +12:00
5b7eef3d43 merge: Make Model Manager work with SDXL stuff 2023-07-18 14:01:56 +12:00
13da881953 Merge branch 'main' into sdxl-support 2023-07-18 13:34:07 +12:00
c3a7e35ad8 Model Manager UI 3.0 (#3778)
This PR completely ports over the Model Manager to 3.0 -- all of the
functionality has now been restored in addition to the following
changes.

- Model Manager now has been moved to its own tab on the left hand side.
- Model Manager has three tabs - Model Manager, Import Models and Merge
Models
- The edit forms for the Models now allow the users to update the model
name and the base model too along with other details.
- Checkpoint Edit form now displays the available config files from
InvokeAI and also allows users to supply their own custom config file.
- Under Import Models you can directly add models or a scan a folder for
your checkpoint files.
- Adding models has two modes -- Simple and Advanced.
- In Simple Mode, you just need to pass a path and InvokeAI will try to
determine what kind of model it is and fill in the rest of the details
accordingly. This input lets you supply local paths to diffusers / local
paths to checkpoints / huggingface repo ID's to download models /
CivitAI links.
- Simple Mode also allows you to download different model types, like
VAEs and ControlNet models, not just main models.
- In cases where the auto detection system of InvokeAI fails to read a
model correctly, you can take the manual approach and go to Advanced
where you can configure your model while adding it exactly the way you
want it. Both Diffusers and Checkpoint models now have their own custom
forms.
- Scan Models has been cleaned up. It will now only display the models
that are not already installed to InvokeAI. And each item will have two
options - Quick Add and Advanced .. replicating the Add Model behavior
from above.
- Scan Models now has a search bar for you to search through your
scanned models.
- Merge Models functionality has been restored.

This is a wrap for this PR.

**TODO: (Probably for 3.1)** 

- Add model management for model types such as VAE's and ControlNet
Models
- Replace the VAE slot on the edit forms with the installed VAE drop
down + custom option
2023-07-17 21:24:29 -04:00
53db91ef99 Update LOCAL_DEVELOPMENT.md
fix json formatting to not have big red comment blocks
2023-07-18 11:02:24 +10:00
ec3c15ead0 Merge branch 'main' into mm-ui 2023-07-18 12:58:57 +12:00
0edb31febd feat: model events (#3786)
[feat(nodes): emit model loading
events](7b6159f8d6)

- remove dependency on having access to a `node` during emits, would
need a bit of additional args passed through the system and I don't
think its necessary at this point. this also allowed us to drop an
extraneous fetching/parsing of the session from db.
- provide the invocation context to all `get_model()` calls, so the
events are able to be emitted
- test all model loading events in the app and confirm socket events are
received

[feat(ui): add listeners for model load
events](c487166d9c)

- currently only exposed as DEBUG-level logs

---

One change I missed in the commit messages: the `ModelInfo` class is
not serializable, so I split out the pieces of information we didn't
already have (hash, location, precision) and added them to the event
payload directly.
2023-07-18 12:57:47 +12:00
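A sketch of that workaround: rather than trying to serialize `ModelInfo` itself, only plain JSON-safe fields ride along in the socket event payload. The event name and fields here follow the description above and are assumptions, not the exact payload.

```python
def emit_model_load_completed(emitter, model_info) -> None:
    # ModelInfo isn't JSON-serializable, so send only primitive fields
    emitter.emit(
        "model_load_completed",
        {
            "hash": model_info.hash,
            "location": str(model_info.location),
            "precision": str(model_info.precision),
        },
    )
```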
a137f7fe7b Merge branch 'main' into feat/model-events 2023-07-18 12:55:02 +12:00
179455ef46 Hide legend button Option 2 (#3797)
@psychedelicious 
This version I added a toggle button to the viewport controls to
show/hide the legend

![image](https://github.com/invoke-ai/InvokeAI/assets/115216705/f74ea273-e043-4104-921d-76861bd69841)

Option 1
https://github.com/invoke-ai/InvokeAI/pull/3790
2023-07-18 12:44:54 +12:00
6eaa7d212d Merge branch 'main' into HideLegend2 2023-07-18 12:44:26 +12:00
7c3eb06a71 fix: Scan again not refetching the model list 2023-07-18 12:44:16 +12:00
6d688ca87d docs: add vscode setup instructions
- using vscode python debugger
- automatic python environment activation
- remote dev
2023-07-17 20:21:22 -04:00
715e3217d0 feat: Improve Scanned / Model Lists layout
- Now inside ScrollArea
- Now displays installed models
2023-07-18 12:14:35 +12:00
72c1a8db08 fix: Diffusers Model edit form not closing on Scan Add 2023-07-18 11:58:04 +12:00
337399ff7c fix: Add API tags for Scanned Models 2023-07-18 11:57:45 +12:00
fbc0694527 Merge branch 'main' into HideLegend2 2023-07-18 11:18:22 +12:00
47b1a85e70 Fix/long prompts (#3806) 2023-07-18 11:18:03 +12:00
ccf093b189 Merge branch 'main' into fix/long_prompts 2023-07-18 11:05:22 +12:00
ada9b06e48 Implement compel prompt nodes for sdxl 2023-07-18 01:49:45 +03:00
7ec1be80ad Merge branch 'main' into HideLegend2 2023-07-18 08:14:34 +10:00
6ae10798b0 Merge branch 'main' into feat/model-events 2023-07-17 17:15:12 -04:00
ded5ebc745 model installer -- Prevent crashes on malformed models (#3619)
This small patch improves the stability of `invokeai-*` scripts by
avoiding crashes in the model manager while scanning the models
directory for new and removed models.
2023-07-17 17:11:32 -04:00
65ed43afb9 resolve conflicts with main 2023-07-17 17:10:57 -04:00
3f8e978543 remove yarn.lock from repo 2023-07-17 17:09:51 -04:00
0c9c7591c6 Merge branch 'main' into fix/long_prompts 2023-07-17 17:04:31 -04:00
0fce35c54c Cleanup, fix variable name, fix controlnet for sequential and cross attention guidance 2023-07-17 23:53:50 +03:00
c82ae74610 feat(ui): consolidate imagecontextmenu and send to menu
Both support the same actions:
- Open in new tab
- Copy image (if supported by browser)
- Use prompt
- Use seed
- Use all
- Send to img2img
- Send to canvas
- Change board
- Download image
- Delete
2023-07-17 16:43:24 -04:00
380aa1d7b5 feat(ui): fix copy image, add context menu to IAIDndImage
- restore copy image functionality* in image context menu, current image buttons
- give IAIDndImage the same context menu

* copying image to clipboard is not possible on Firefox unless the user enables a setting which is disabled by default. if the browser does not support copying an image, the copy functionality is disabled.
2023-07-17 16:43:24 -04:00
81ccbc5c6a feat(ui): improve context menu feel
- faster animation
- do not handle context menu events inside context menu (fixes issue where the context menu appears to not fire)
2023-07-17 16:43:24 -04:00
bcce70fca6 Testing different session opts, added timings for testing 2023-07-17 16:27:33 -04:00
1c680a7147 Fix - encoder_attention_mask not passed before to unet, even if passed it will broke sequential guidance run, so rewrite logic 2023-07-17 23:13:37 +03:00
dcd7e01908 Merge branch 'main' into HideLegend2 2023-07-18 02:30:16 +10:00
fca6a5dd3c Merge branch 'main' into mm-ui 2023-07-17 10:06:03 -04:00
e03e43281b Merge branch 'mm-ui' of github.com:blessedcoolant/InvokeAI into mm-ui 2023-07-17 10:00:36 -04:00
08854b6d68 keep model path consistent with model manager key in model update api 2023-07-17 10:00:28 -04:00
0712294c17 fix: Model Manager light mode color fixes 2023-07-18 00:29:20 +12:00
0ea8d3c30c prevent crash on rename operation on models in models directory 2023-07-17 07:50:06 -04:00
84a13ff8e1 Merge branch 'mm-ui' of github.com:blessedcoolant/InvokeAI into mm-ui 2023-07-17 07:29:35 -04:00
3fba262c94 expose paths as absolute to web api 2023-07-17 07:29:26 -04:00
107ca6bf47 expose model paths as absolute to web models API 2023-07-17 07:26:05 -04:00
1d3fda80aa Create pull request template for the project 2023-07-17 21:18:16 +10:00
562d2dd751 Allow bin extension to detect diffusers-ti provided as file (#3796) 2023-07-17 22:08:23 +12:00
cbfd1d1b27 Merge branch 'main' into feat/standalone_diffusers_ti 2023-07-17 22:01:52 +12:00
f767bf2330 use FileNotFoundError instead of "File path not found" 2023-07-17 05:49:09 -04:00
b1008af696 apply changes as suggested by @psychedelicious in PR comments.
- filename -> file_path
- pre and post prompt changed to optional
- clearer pre and post prompt descriptions
- handle pre and post prompt passed as None
- max_prompts defaults to 1 instead of 0 to avoid accidentally processing large prompt files with it set to 0 when adding a new node.
2023-07-17 05:49:09 -04:00
956011066d Added class PromptsFromFileInvocation to prompt.py: a new PromptFromFile custom node that reads prompts from a file, one line per prompt, and outputs them as a prompt collection, with inputs for filename, pre_prompt, post_prompt, start line number, and max_prompts 2023-07-17 05:49:09 -04:00
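A rough sketch of the behavior that description implies: read one prompt per line, wrap each line in optional pre/post prompt text, honor a start line, and stop after max_prompts (with 0 taken to mean unlimited). Field names follow the description above; the actual invocation class has more plumbing.

```python
from typing import Optional

def prompts_from_file(
    file_path: str,
    pre_prompt: Optional[str] = None,
    post_prompt: Optional[str] = None,
    start_line: int = 1,
    max_prompts: int = 1,
) -> list[str]:
    prompts: list[str] = []
    with open(file_path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            if i < start_line:
                continue
            if max_prompts > 0 and len(prompts) >= max_prompts:
                break  # 0 means "no limit", per the default-change note above
            prompts.append(f"{pre_prompt or ''}{line.strip()}{post_prompt or ''}")
    return prompts
```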
e039771d07 fix: Incorrect type on SDXL Model Loader 2023-07-17 21:47:41 +12:00
cfdaa30d44 feat: Scan models add to differentiate between ckpt and diffusers 2023-07-17 19:40:08 +12:00
3e2a948007 Merge branch 'main' into feat/model-events 2023-07-17 17:36:20 +10:00
af9e8fefce feat(ui): socket event timestamps have ms precision 2023-07-17 17:35:20 +10:00
ba12849685 fix(nodes): fix some model load events not emitting
Missed adding the `context` arg to them initially
2023-07-17 17:16:55 +10:00
f398fe4136 fix: Merge models not respecting save directory 2023-07-17 17:59:05 +12:00
41e7b008fb feat: Add search to Scanned Models 2023-07-17 17:32:34 +12:00
98e6a56714 fix: Model Manager jank / bugs / refinement 2023-07-17 17:09:41 +12:00
cbd5be73d2 feat: Add Scan Models Advanced Add 2023-07-17 16:44:01 +12:00
38e6e3b36b feat: Add Quick Add To Scan Model 2023-07-17 16:07:38 +12:00
c9233eeca2 Merge branch 'main' into HideLegend2 2023-07-17 13:58:51 +10:00
540f40c293 fix: Better file and component naming for Add Models 2023-07-17 13:58:11 +12:00
641b90cc3f chore: regen types 2023-07-17 13:50:35 +12:00
aebd595607 Merge branch 'main' into mm-ui 2023-07-17 13:49:25 +12:00
ed88e72412 correct cannot assign to field 'unconditioned_embeddings' error 2023-07-16 21:06:40 -04:00
6aefd8600a Fix error with long prompts when controlnet used 2023-07-16 21:06:40 -04:00
ccb43d5a91 make check for 2.3 root directory more stringent 2023-07-16 20:43:15 -04:00
ce58c41553 Merge branch 'main' into HideLegend2 2023-07-17 10:35:22 +10:00
9b55eea673 Silly prettier 2023-07-17 10:31:25 +10:00
d9a853857c Change icon to FaInfo 2023-07-17 10:11:18 +10:00
29afde7f6e Add defaults for controlnet/textual inversion/realesrgan (#3773)
This PR adds several default models to the ones selected at install
time. It also removes the GFPGAN and text2clip models, which should
shave a little time off the install process.

## ESRGAN:
* models/core/upscaling/realesrgan/RealESRGAN_x4plus.pth
* models/core/upscaling/realesrgan/RealESRGAN_x4plus_anime_6B.pth
*
models/core/upscaling/realesrgan/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth

## ControlNet
* models/sd-1/controlnet/canny
* models/sd-1/controlnet/depth
* models/sd-1/controlnet/lineart
* models/sd-1/controlnet/openpose

## Embedding (textual inversion)
* models/sd-1/embedding/EasyNegative.safetensors
2023-07-16 19:37:49 -04:00
675a92401c Merge branch 'main' into lstein/default-model-install 2023-07-16 19:32:59 -04:00
72a0b727d7 Updating installation instructions to include Node.JS 2023-07-16 19:17:07 -04:00
036e5d7292 Update nodesSlice.ts 2023-07-17 08:43:45 +10:00
b4e09d4143 Update TopRightPanel.tsx 2023-07-17 08:43:05 +10:00
bc3aab93f1 Update ViewportControls.tsx 2023-07-17 08:42:31 +10:00
b61c83e836 Allow bin extension to detect diffusers-ti provided as file 2023-07-17 00:32:17 +03:00
32994a261a add renaming capabilities to model update API route (#3793)
This PR allows the `update_model` API call to change the model's name
and/or base type as well. The `rename_model` call has accordingly been
retired.
2023-07-17 08:52:59 +12:00
2bc3e36bc0 add missing exception name 2023-07-16 16:14:28 -04:00
6fbb5ce780 add renaming capabilities to model update API route 2023-07-16 14:17:05 -04:00
cad3f96831 add model input to refiner 2023-07-16 12:38:04 -04:00
6534288b75 refiner only has clip2 not clip 2023-07-16 12:36:38 -04:00
0a2964d8c0 add differentiated sdxl and sdxl_refiner model loaders 2023-07-16 12:17:56 -04:00
932112b640 testing being super wasteful with data 2023-07-16 00:17:33 -04:00
dabd2bf301 fix: Readd model name to edit forms
Will be needed when we implement changing name and base model type.
2023-07-16 16:15:53 +12:00
91112167b1 Fix syntax err 2023-07-15 23:56:48 -04:00
5206ddf9b2 truncate long prompts to avoid a crash with controlnet 2023-07-15 23:49:25 -04:00
92029e69c6 feat: Update Checkpoint Model Edit to use config picker 2023-07-16 15:48:44 +12:00
5351171d0e cleanup: Scan Models component (to begin anew) 2023-07-16 15:29:25 +12:00
5b047baeb0 fix: Mantine Required icon being on new line 2023-07-16 15:29:01 +12:00
fe78a08e37 Fix sd1/2 models conditionings 2023-07-16 06:24:24 +03:00
d93d42af4a feat: Add Manual Checkpoint / Safetensor Models 2023-07-16 15:21:49 +12:00
b767b5d44c user must adjust terminal size on Windows 2023-07-15 23:19:50 -04:00
c9c2229917 Separate prompt to sdxl and sdxl-refiner, add denoising start-end fields, add l2l node(supports both sdxl and sdxl-refiner), add fp32 to vae encode 2023-07-16 06:00:37 +03:00
421fcb761b feat: Manual Add Diffusers Model 2023-07-16 14:20:27 +12:00
2e0370d845 feat: Extract BaseModel and ModelVariant Select's
For reusability
2023-07-16 14:07:26 +12:00
72c891bbac remove conhost from windows install process 2023-07-15 21:48:04 -04:00
3074af2e33 Merge branch 'main' into lstein/default-model-install 2023-07-16 13:39:10 +12:00
b56be07ab3 Update NODES.md
Add experimental banner.
2023-07-15 21:26:42 -04:00
5d59dd4b97 feat(nodes): use correctly-typed configuration service in upscale node 2023-07-16 10:54:52 +10:00
48a031dbaf fix(nodes): fix typing of configuration service 2023-07-16 10:52:18 +10:00
39e66ec934 rebuild front end 2023-07-15 20:32:22 -04:00
eda1c94bd6 fix default launcher option 2023-07-15 20:22:12 -04:00
e95cb3aa71 Merge branch 'lstein/default-model-install' into release/invokeai-3-0-beta 2023-07-15 20:16:51 -04:00
ab840742b0 Merge branch 'sdxl-support' of github.com:invoke-ai/InvokeAI into sdxl-support 2023-07-15 19:59:45 -04:00
be0603b64c update to new compel 2.0.0rc2 2023-07-15 19:59:29 -04:00
5b5d5ec978 Merge branch 'main' into sdxl-support 2023-07-15 19:49:57 -04:00
ccbfa5d862 resolve conflicts 2023-07-15 19:47:50 -04:00
7fa394912d Merge branch 'main' into lstein/default-model-install 2023-07-15 18:26:35 -04:00
373beefd13 remove restoration option from invokeai.yaml 2023-07-15 18:26:19 -04:00
6b0a158ffa Merge branch 'main' into lstein/default-model-install 2023-07-15 18:23:34 -04:00
c90345d6a3 deprecate the face restoration option 2023-07-15 18:23:32 -04:00
bcb962f01f Setup textual inversion training with new model manager (#3775) 2023-07-15 18:23:11 -04:00
70b12d9693 Merge branch 'main' into update-textual-inversion-training 2023-07-15 18:16:20 -04:00
9faffa2245 revert inadvertent breaking change to config causing test failures (override) 2023-07-15 18:15:59 -04:00
f66ead0819 Merge branch 'main' into update-textual-inversion-training 2023-07-15 17:44:45 -04:00
0a121e139f add documentation on the configuration system (#3787)
This PR adds a new documentation file that describes how to configure
InvokeAI using `invokeai.yaml`, etc.
2023-07-15 17:39:40 -04:00
6073cb8020 add documentation on the configuration system 2023-07-15 16:14:47 -04:00
c487166d9c feat(ui): add listeners for model load events
- currently only exposed as DEBUG-level logs
2023-07-16 02:26:30 +10:00
7b6159f8d6 feat(nodes): emit model loading events
- remove dependency on having access to a `node` during emits, would need a bit of additional args passed through the system and I don't think its necessary at this point. this also allowed us to drop an extraneous fetching/parsing of the session from db.
- provide the invocation context to all `get_model()` calls, so the events are able to be emitted
- test all model loading events in the app and confirm socket events are received
2023-07-16 02:12:01 +10:00
c7b547ea3e feat(nodes): remove references to restoration services
- remove restoration services
- remove the restore faces nodes
- update tests
2023-07-16 01:12:39 +10:00
8a1b9d1001 chore(ui): regen types 2023-07-16 01:06:57 +10:00
74ca87ac9e feat(nodes): add realesrgan node 2023-07-16 01:06:50 +10:00
610c3a4512 merge main + migrate script now initializes destination root if needed (#3784) 2023-07-15 10:39:04 -04:00
77b0129b4c Merge branch 'main' into lstein/migrate-fix 2023-07-15 10:37:56 -04:00
e01706f5f5 add fp16 support to controlnet models 2023-07-15 10:37:11 -04:00
f504c7ebbd Merge branch 'main' into lstein/migrate-fix 2023-07-15 10:13:44 -04:00
a111539059 migrate script now initializes destination root if needed 2023-07-15 09:59:34 -04:00
cd033f4ead fix: Refine some UI 2023-07-16 01:57:42 +12:00
b1e16aa3db fix: placeholder text for Add model input 2023-07-16 01:41:32 +12:00
e1c0ca1ab2 feat: Add Auto Import Model 2023-07-16 01:36:00 +12:00
52948a1bbc Update CODEOWNERS 2023-07-15 08:52:41 -04:00
32e7e52d69 Merge branch 'main' into lstein/default-model-install 2023-07-15 08:30:22 -04:00
dcbb3dc49a Merge branch 'main' into mm-ui 2023-07-16 00:30:11 +12:00
8fc765694e fix: Minor UI tweak to Control Net enable button (#3783) 2023-07-16 00:28:15 +12:00
ff74de7a60 fix: Minor UI tweak to Control Net enable button 2023-07-16 00:27:52 +12:00
07a2da40b8 Rewrite controlnet to new model manager (#3665) 2023-07-15 08:24:06 -04:00
d234bf1cb9 feat(install): display full ESRGAN model filenames during installation 2023-07-15 21:27:10 +10:00
f7230d07db feat(ui): fix controlnet image preview alignment 2023-07-15 20:49:03 +10:00
b265956083 fix(ui): disable drop when controlnet disabled 2023-07-15 20:47:02 +10:00
8e0ba24bf2 feat(ui): fix cnet ui alignment 2023-07-15 20:36:32 +10:00
be4705ec32 feat(ui): move control mode and processor to main view 2023-07-15 20:34:26 +10:00
4ac0ce59fb fix(ui): add custom label to IAIMantineSelects
needed to have their label styles match Chakra's
2023-07-15 20:29:15 +10:00
4a2f34f77f wip: Model Search
Going to rework the whole thing. The old system is convoluted and too difficult to plug back in.
2023-07-15 22:23:00 +12:00
558c26d78f feat: Create Model Manager Store 2023-07-15 22:22:22 +12:00
7daafc03d3 fix(ui): fix invoke button styles when processing 2023-07-15 20:04:33 +10:00
457e4b7fc5 feat(ui): tweak slider label spacing 2023-07-15 19:56:45 +10:00
d1ecd007ab feat(ui): promote controlnet to be just under general
It is the most impactful feature, and also takes up the most space when you expand it. Promoted.
2023-07-15 19:56:45 +10:00
7dec2d09f0 feat(ui): disable specific controlnet inputs when that controlnet is disabled
The UX is clearer now, but it's still easy to miss that your individual controlnets are enabled while the overall controlnet feature is disabled.
2023-07-15 19:56:45 +10:00
13d182ead2 feat(ui): move cnet add button to top of list 2023-07-15 19:56:45 +10:00
401727b0c9 feat(ui): add cnet advanced tooltip 2023-07-15 19:56:45 +10:00
19e076cd15 fix(ui): fix no controlnet model selected by default 2023-07-15 19:56:45 +10:00
8a14c5db00 feat(ui): wip controlnet layout 2023-07-15 19:56:45 +10:00
77ad3c959b feat(ui): tweak slider styles 2023-07-15 19:56:45 +10:00
952a7a8674 feat(ui): do not autoprocess if user just disabled autoconfig 2023-07-15 19:56:45 +10:00
7b6d91c69f feat(ui): control net UI weights 0 to 2 2023-07-15 19:56:44 +10:00
8f66d826a5 feat(ui): refactor controlnet UI components to use local memoized selectors
makes them more portable and easier to reason about
2023-07-15 19:56:44 +10:00
d270f21c85 feat(nodes): valid controlnet weights are -1 to 2 2023-07-15 19:56:44 +10:00
ae72f372be fix(nodes): do not use hardcoded controlnet model 2023-07-15 19:56:44 +10:00
0d41346417 feat(ui): fix controlNet models
- update controlnet state to use object format for model
- update model-parsing helper functions to log errors
- update nodes components, types and state
- remove controlnets from state when models are loaded and the controlnet's model is not available
2023-07-15 19:56:44 +10:00
76dc47e88d remove frontend constants, use backend response for controlnet models. add disabled state if base model is not compatible. clear control net model if main base model changes. add logic to guess processor and move it up in UI 2023-07-15 19:56:44 +10:00
5ac114576f feat(ui): add controlnet field to nodes 2023-07-15 19:56:44 +10:00
29b2e59e65 fix(nodes): fix ref to ctx mgr service, missing import 2023-07-15 19:56:44 +10:00
96c9db6d2e chore(ui): typegen 2023-07-15 19:56:44 +10:00
82fa39b531 feat(nodes): add controlnet nodes type hint 2023-07-15 19:56:44 +10:00
788dcbde70 fix(nodes): add missing import 2023-07-15 19:56:44 +10:00
6ab9a5e108 Draft 2023-07-15 19:56:44 +10:00
9769b48661 feat: Add Custom location support for model conversion 2023-07-15 19:17:16 +12:00
8c8eddcc60 feat: Handle toasts for Model Delete 2023-07-15 18:48:18 +12:00
79ca0d0d02 feat: Allow user to pick where to saved merged model 2023-07-15 17:33:44 +12:00
690331b8c0 chore: Regen Schema 2023-07-15 17:33:09 +12:00
5d5a497ed4 Model manager route enhancements (#3768)
# Multiple enhancements to model manager REACT API
    
1. add a `/sync` route for synchronizing the in-memory model lists to
models.yaml, the models directory, and the autoimport directories.
2. added optional destination directories to convert_model and
merge_model operations.
3. added a `/ckpt_confs` route for retrieving known legacy checkpoint
configuration files.
4. added a `/search` route for finding all models in a directory located
in the server filesystem
5. added a `/add` route for manual addition of local models
6. added a `/rename` route for renaming and/or rebasing models
7. changed the path of the `import_model` route to `/import`

# Slightly annoying detail:

When adding a model manually using `/add`, the body JSON must exactly
match one of the model configurations returned by `list_models` (i.e.
there is no defaulting of fields). This includes the `error` field,
which should be set to "null".
2023-07-15 17:03:40 +12:00
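For illustration, a minimal sketch of exercising the routes listed above with `requests`. The host, port, `/api/v1/models` prefix, HTTP verbs, and response shape are all assumptions; the `/add` body is built from a `list_models` result because, as noted above, no fields are defaulted:

```python
import requests

BASE = "http://localhost:9090/api/v1/models"  # host and prefix assumed

# Build an /add body from an existing config so it matches list_models
# exactly (including the "error" field) - no fields are defaulted.
template = requests.get(BASE).json()["models"][0]  # response shape assumed
template.update({"model_name": "my-local-model", "path": "/path/to/model"})
requests.post(f"{BASE}/add", json=template)

# Re-sync the in-memory model lists with models.yaml and the autoimport dirs.
requests.post(f"{BASE}/sync")
```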
808b2de709 Merge branch 'main' into lstein/model-manager-route-enhancements 2023-07-15 16:56:54 +12:00
2faa7cee37 add rename_model route 2023-07-14 23:03:18 -04:00
467414f214 Merge branch 'main' into update-textual-inversion-training 2023-07-14 22:32:09 -04:00
194434dbfa restore scrollbar 2023-07-15 12:25:28 +10:00
f88a338be0 Setup textual inversion training with new model manager 2023-07-14 21:58:51 -04:00
8cb19578c2 fix(ui): fix crash on LoRA remove / weight change 2023-07-15 11:09:18 +10:00
54b813c981 fix(ui): fix mouse interactions (#3764) 2023-07-15 12:45:59 +12:00
c4a6f25717 Merge branch 'main' into fix/nodes/fix-mouse-interactions 2023-07-15 12:44:49 +12:00
b306247eb5 remove clipseg model install 2023-07-14 20:39:42 -04:00
5aade31c22 Pad conditionings using zeros and encoder_attention_mask (#3772) 2023-07-15 12:35:10 +12:00
a45f7ce355 add --list-models command 2023-07-14 19:52:47 -04:00
eb9d74653d set default models for realesrgan, controlnet and text inversion 2023-07-14 19:03:41 -04:00
7093e5d033 Pad conditionings using zeros and encoder_attention_mask 2023-07-15 00:52:54 +03:00
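A minimal sketch of the padding scheme this commit describes, assuming `[batch, seq, dim]` embedding tensors; names are illustrative, not the project's code:

```python
import torch

def pad_conditioning(cond: torch.Tensor, target_len: int):
    """Zero-pad embeddings to target_len and build the matching
    encoder_attention_mask (1 = real token, 0 = padding)."""
    batch, seq, dim = cond.shape
    mask = torch.ones(batch, target_len, dtype=torch.long)
    if seq < target_len:
        pad = torch.zeros(batch, target_len - seq, dim, dtype=cond.dtype)
        cond = torch.cat([cond, pad], dim=1)
        mask[:, seq:] = 0
    return cond, mask  # mask is what gets passed as encoder_attention_mask
```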
565299c7a1 Minor Updates to NODES.md 2023-07-14 16:10:49 -04:00
44662c0c0e remove incorrect hyperlink 2023-07-14 16:10:49 -04:00
ba783d9466 add node grouping concepts & typos
Added a section that focuses more on conceptualizing node workflows as groups of nodes working together as a whole. Also, typos.
2023-07-14 16:10:49 -04:00
759ca317d0 add node types & functions
Add a list of node types and their functions. Functions are just copied text from node descriptions in webui.
2023-07-14 16:10:49 -04:00
3454b7654c add i2i and controlnet examples
Added examples for img2img and ControlNet
2023-07-14 16:10:49 -04:00
19cbda56b6 Create NODES.md
Introductory nodes documentation
2023-07-14 16:10:49 -04:00
bd7b59910d Testing onnx in new ui updates 2023-07-14 14:24:15 -04:00
e71ce83e9c Merge branch 'main' into lstein/model-manager-route-enhancements 2023-07-14 13:52:55 -04:00
8600aad12b multiple enhancements to model manager REACT API
1. add a /sync route for synchronizing the in-memory model lists to
   models.yaml, the models directory, and the autoimport directories.

2. add optional destination_directories to convert_model and merge_model
   operations.

3. add /ckpt_confs route for retrieving known legacy checkpoint configuration
   files.

4. add /search route for finding all models in a directory located in the server
   filesystem
2023-07-14 13:45:16 -04:00
9960d7ca2a Allow ImageResizeInvocation w/h to take inputs from other nodes (#3765) 2023-07-15 05:34:13 +12:00
48561908b1 Merge branch 'main' into fix/nodes/fix-mouse-interactions 2023-07-15 04:13:46 +12:00
3210f96967 Model Manager 3.0 UI (#3735)
DONE:

- Restore Update Model functionality
- Restore Delete Model functionality
- Restore Model Convert functionality
- Restore Model Merge functionality
- Refine UX (fine tweaks when everything is done - TODO)

TODO

- Add Model (will be finished in a future PR once the backend work is
done)
2023-07-15 04:13:31 +12:00
ad076b1174 add model directory search route 2023-07-14 11:14:33 -04:00
f6752965b7 fix(ui): allow decimals in number inputs
still some jank but eh
2023-07-15 01:05:10 +10:00
30e45eaf47 feat(ui): hold shift to make nodes draggable from anywhere 2023-07-15 00:45:26 +10:00
0257b4a611 fix(ui): fix mouse interactions 2023-07-15 00:13:45 +10:00
3c7cf72423 fix: Clean up merge models submit handler 2023-07-15 01:29:51 +12:00
2a533b275f Merge branch 'main' into mm-ui 2023-07-15 01:24:40 +12:00
25d07891b5 Merge branch 'mm-ui' of https://github.com/blessedcoolant/InvokeAI into mm-ui 2023-07-15 01:24:20 +12:00
401fa6deb5 fix: Misc fixes 2023-07-15 01:23:08 +12:00
f68ab55d6b fix(ui): fix missing mantineTheme, fixes fonts 2023-07-14 23:16:05 +10:00
79d65125c2 feat(ui): extract mantine component styles to hook, add less opinionated mantine components
IAIMantineSelect and IAIMantineMultiSelect have a bit of extra logic that prevents simple select functionality from working as expected.

- extract the styles into hooks
- rename those two components to IAIMantineSearchableSelect and IAIMantineSearchableMultiSelect
- Create IAIMantineSelect (which is just a dropdown) and use it in model manager and a few other places

When we only have a few options to present and searching is not efficient, we should use this instead.
2023-07-14 23:00:38 +10:00
e8d531b987 feat(api): set max-age for images (#3711)
Image files are immutable and we expect deletion to result in no further
requests for a given image, so we can set the max-age to something
thicc.

Resolves #3426

@ebr @brandonrising @maryhipp
2023-07-15 00:33:19 +12:00
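A minimal sketch of what such a handler could look like with FastAPI; the route shape, file location, and the exact max-age value are assumptions:

```python
from fastapi import APIRouter
from fastapi.responses import FileResponse

router = APIRouter(prefix="/images")
ONE_YEAR = 31_536_000  # seconds; the actual value chosen is not specified here

@router.get("/{image_name}/full")
async def get_full_image(image_name: str) -> FileResponse:
    # Image files are immutable, so clients and proxies may cache aggressively.
    return FileResponse(
        f"outputs/images/{image_name}",
        headers={"Cache-Control": f"max-age={ONE_YEAR}"},
    )
```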
545e2f557f Merge branch 'main' into feat/api/image-max-age 2023-07-14 08:21:44 -04:00
23c1a6b9d5 fix(nodes): make ResizeLatents w/h optional
now you can connect to them in node editor
2023-07-14 21:42:42 +10:00
d4dfd84525 feat(ui): mm colors 2023-07-14 20:12:02 +10:00
eb2a7058bf feat(ui): tweak fontSize in modellist 2023-07-14 19:49:05 +10:00
56d209842f feat(ui): only show modellistitem when none in array 2023-07-14 19:46:18 +10:00
0b2f0c05b2 fix(ui): fix selecting model does not update form 2023-07-14 19:31:52 +10:00
1e5ae9d986 feat(ui): refactor model manager ui
- simplify UI logic in `ModelManagerPanel` components
- fix up the types a bit to make it easier to select models
- remove `openModel` state, just make it a useState since it is very local to model manager
2023-07-14 19:22:37 +10:00
f2af82bf73 feat(ui): add model convert for success/failure handling 2023-07-14 17:39:00 +10:00
6d7fb49a7a fix(ui): fix model edit button disabled status 2023-07-14 17:36:10 +10:00
48a8bd4985 feat(ui): add model update for success/failure handling 2023-07-14 17:35:45 +10:00
d8437d3036 feat(ui): add simple selectIsBusy selector 2023-07-14 17:34:34 +10:00
a0cb18a12c feat(ui): refetch models on socket connect 2023-07-14 17:34:13 +10:00
b2005d821a fix(ui): fix types for models queries 2023-07-14 16:59:31 +10:00
66b12ab0ea fix(ui): do not blacklist the rtk query events
doing so breaks the devtools
2023-07-14 16:59:13 +10:00
834774ce4c fix: Merge Conflicts 2023-07-14 18:16:34 +12:00
7cd60214cb Merge branch 'main' into mm-ui 2023-07-14 18:14:45 +12:00
5c58bc6348 fix: Missing VAE Input Field Component 2023-07-14 16:07:22 +10:00
e1d6c09ed2 fix: Type errors & missing Unet field component 2023-07-14 16:07:22 +10:00
8dd4ca5723 feat(ui): update node editor to use model object format
similar to the previous commit, update the node editor to not just store models as strings - instead, store the model object.

the model select components in nodes are now just kinda copy-pastes of the linear UI versions of the same components, but they were different enough that we can't just share them.

i explored adding some props to override the linear ui components' logic, but it was too brittle. so just copy/paste.
2023-07-14 16:07:22 +10:00
a071873327 feat(ui): fix a lot of model-related crashes/bugs
We were storing all types of models by their model ID, which is a format like `sd-1/main/deliberate`.

This meant we had to do a lot of extra parsing, because nodes actually wants something like `{base_model: 'sd-1', model_name: 'deliberate'}`.

Some of this parsing was done with zod's error-throwing `parse()` method, and in other places it was done with brittle string parsing.

This commit refactors the state to use the object form of models.

There is still a bit of string parsing done to construct the ID from the object form, but it's far less complicated.

Also, the zod parsing is now done using `safeParse()`, which does not throw. This requires a few more conditional checks, but should prevent further crashes.
2023-07-14 16:07:22 +10:00
14587464d5 fix(ui): check for metadataAccumulator before connecting to it
Fixes an edge case in graph building.
2023-07-14 16:07:22 +10:00
d858a0c5d8 fix(ui): fix rtk tags
I had mixed up `type` and `id` on a bunch of the tags. Fixing those
2023-07-14 15:32:09 +10:00
abe2a0f9b4 fix: merge conflicts (name renamed to model_name) for models 2023-07-14 15:53:28 +12:00
16e93c6455 Merge branch 'main' into mm-ui 2023-07-14 15:46:53 +12:00
9fb0b0959f Make sde and ancestral schedulers reproducible 2023-07-14 05:25:09 +03:00
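Ancestral and SDE schedulers inject fresh noise at every step, so they only become reproducible when each step draws from a seeded generator. A sketch of the idea with the diffusers API:

```python
import torch
from diffusers import EulerAncestralDiscreteScheduler

scheduler = EulerAncestralDiscreteScheduler()
scheduler.set_timesteps(30)

generator = torch.Generator("cpu").manual_seed(1234)
latents = torch.randn(1, 4, 64, 64, generator=generator)

for t in scheduler.timesteps:
    noise_pred = torch.zeros_like(latents)  # stand-in for the UNet output
    # The seeded generator makes the per-step ancestral noise deterministic.
    latents = scheduler.step(noise_pred, t, latents, generator=generator).prev_sample
```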
d8f88c09ea Fix pink output on a lot of samplers 2023-07-14 05:00:33 +03:00
536a397b12 ui: gallery enhancements (#3752)
* feat(ui): salvaged gallery UI enhancements

* restore boardimage functionality, load boardimages and remove some caching optimizations in the name of data integrity

* fix assets, fix load more params

* jk NOW fix assets, fix load more params

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: Mary Hipp Rogers <maryhipp@gmail.com>
2023-07-13 20:31:10 +00:00
524888bf3b Merge branch 'main' into feat/onnx 2023-07-13 14:23:57 -04:00
271f64068c fix: model_name reference in Model Manager (#3756) 2023-07-14 05:58:35 +12:00
50f10ce5d7 fix: model_name reference in Model Manager 2023-07-14 03:36:11 +12:00
d46261a528 chore(ui): regen types 2023-07-13 23:26:52 +10:00
978016ea51 feat(ui): use query to populate infill methods dropdown
- available infill methods are server state - remove them from client state, use the query to populate the dropdown
- add listener to ensure the selected infill method is an available one
2023-07-13 23:26:52 +10:00
4d25d702a1 feat(api): app/config route returns available infill methods 2023-07-13 23:26:52 +10:00
d6c914eedc detect if user has patchmatch enabled 2023-07-13 23:26:52 +10:00
fba25792f9 add new route for app config 2023-07-13 23:26:52 +10:00
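Taken together, these commits describe a config route whose infill methods depend on whether patchmatch is importable. A sketch under those assumptions (router prefix, response shape, and import path are illustrative):

```python
from fastapi import APIRouter

router = APIRouter(prefix="/app")

@router.get("/config")
async def get_app_config() -> dict:
    infill_methods = ["tile"]  # always available
    try:
        from patchmatch import patch_match  # noqa: F401  # import path assumed
        infill_methods.append("patchmatch")  # only if the user has it installed
    except ImportError:
        pass
    return {"infill_methods": infill_methods}
```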
8fc0ce7e38 Fix wrong conditioning used (#3595)
As the comment on this branch says, we want to run ControlNet on the conditional batch:
```python
if cfg_injection:  # only applying ControlNet to conditional instead of in unconditioned
```
But the code actually used the unconditioned embeddings (`conditioning_data.unconditioned_embeddings`).

Later code confirms that we want to run the conditional generation, both by the comment and by the tensor concatenation order (all the code expects a [uc, c] tensor):
```python
if cfg_injection:
    # Inferred ControlNet only for the conditional batch.
    # To apply the output of ControlNet to both the unconditional and conditional batches,
    #   add 0 to the unconditional batch to keep it unchanged.
    down_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_samples]
    mid_sample = torch.cat([torch.zeros_like(mid_sample), mid_sample])
```
2023-07-13 09:19:13 -04:00
9348dc8e0d Merge branch 'main' into fix/controlnet_cfg_inj_cond 2023-07-14 01:11:04 +12:00
d4ec8873f7 Fix Inpainting Issues (#3744)
- fix: Inpaint not working with some schedulers: Resolves #3732
- fix: LoRA's not working at all while inpainting.
2023-07-13 23:42:44 +12:00
34bff3ba77 Merge branch 'main' into fix-inpainting 2023-07-13 23:35:12 +12:00
3cf75e48c5 remove clearnode export 2023-07-13 21:34:12 +10:00
b5e7384f09 Delete clearnodes and change nodeEditorReset 2023-07-13 21:34:12 +10:00
ee253ea4f1 add useCallback 2023-07-13 21:34:12 +10:00
430b9c291f fix: Loras not working correctly with Inpainting 2023-07-13 22:59:38 +12:00
16f53228c2 Merge branch 'main' into fix-inpainting 2023-07-13 22:31:08 +12:00
28001271b8 fix(ui): fix nodes crash when adding model loader (#3755) 2023-07-13 22:30:17 +12:00
4ab4942e69 fix(ui): fix nodes crash when adding model loader 2023-07-13 20:29:03 +10:00
1b42996925 fix(ui): fix node parsing failing (#3754) 2023-07-13 22:19:00 +12:00
98a5b3fc24 Merge branch 'main' into fix/ui/fix-missing-nodes 2023-07-13 22:18:27 +12:00
944b46908a fix(ui): fix node parsing failing 2023-07-13 20:17:18 +10:00
af3b7e73ab fix(ui): fix lora name disappearing (#3753) 2023-07-13 22:15:25 +12:00
23d2af52df fix(ui): fix lora name disappearing 2023-07-13 20:14:26 +10:00
059e427457 fix(ui): check for metadata accumulator before connecting to it (#3751) 2023-07-13 22:08:13 +12:00
f4b37d4e55 Merge branch 'main' into fix/ui/fix-inpaint-graph-again 2023-07-13 22:07:30 +12:00
43cc96255b fix(ui): check for metadata accumulator before connecting to it 2023-07-13 20:05:45 +10:00
37b06a4f41 fix(ui): fix inpaint invalid model error (#3750) 2023-07-13 22:02:38 +12:00
4702eb2e6a fix(ui): fix inpaint invalid model error 2023-07-13 19:59:51 +10:00
5a546e66f1 Merge branch 'main' into fix-inpainting 2023-07-13 20:42:13 +12:00
fde9fa2fe4 Add Clear nodes Button (#3747)
Adds a Clear Nodes button with a confirmation dialog; I think I did it right 😃

I am sure there is a way to make the Confirmation look better and have
Yes/No instead of OK/Cancel
2023-07-13 20:35:40 +12:00
19fdb70e20 Merge branch 'clearnodes' of https://github.com/mickr777/InvokeAI into pr/3747 2023-07-13 20:34:57 +12:00
6861499697 fix: Simplify modal code 2023-07-13 20:34:23 +12:00
8274488d2c Merge branch 'main' into clearnodes 2023-07-13 18:30:12 +10:00
91c4e4c9b9 useDisclosure instead of useState. 2023-07-13 18:08:30 +10:00
d1ac50b061 Change Confirmation Dialog
Changed Confirmation Dialog to use chakra-ui alert dialog
2023-07-13 17:19:59 +10:00
5f5c93abb4 feat(app): embed PNG info in invokeai_metadata and invokeai_graph
Using just `metadata` and `graph` feels a bit too generic.
2023-07-13 15:40:05 +10:00
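With the info embedded under those keys, the PNG text chunks can be read back with Pillow, assuming the stored values are JSON strings:

```python
import json
from PIL import Image

img = Image.open("output.png")  # path is illustrative
# PNG text chunks surface in img.info as plain strings
metadata = json.loads(img.info["invokeai_metadata"])
graph = json.loads(img.info["invokeai_graph"])
```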
6bea7bac36 feat(ui): restore recall functionality
- Restore recall functionality to `CurrentImageButtons` and `ImageContextMenu`.
- Debounce metadata requests for `ImageMetadataViewer` and `CurrentImageButtons` by 500ms. It's possible to scroll through these really fast, so we want to debounce the network requests.
- `ImageContextMenu` is lazy-mounted so it does not need to be debounced; it makes the metadata request as soon as you click it.
- Move next/prev image selection logic into hook and add the hotkeys for this to `CurrentImageButtons`. The hotkeys now work when metadata viewer is open.

I will follow up with improved loading state during the debounced calls in the future
2023-07-13 15:40:05 +10:00
a43c900961 feat(ui): update UI for new metadata
- Update for new routes
- Update model storage in state to be `MainModelField` type instead of `string`, simplifies a lot of model handling
- Update model-related stuff for model `name` --> `model_name`
- Update linear graphs to use `MetadataAccumulator`
- Update `ImageMetadataViewer` UI
- Ensure all `recall` functions work (well, the ones that are active anyways)
2023-07-13 15:40:05 +10:00
bddc04af96 chore(ui): regen types 2023-07-13 15:40:05 +10:00
50bef87da7 feat(db,nodes,api): refactor metadata
Metadata for the Linear UI is now sneakily provided via a `MetadataAccumulator` node, which the client populates / hooks up while building the graph.

Additionally, we provide the unexpanded graph with the metadata API response.

Both of these are embedded into the PNGs.

- Remove `metadata` from `ImageDTO`
- Split up the `images/` routes to accommodate this; metadata is only retrieved per-image
- `images/{image_name}` now gets the DTO
- `images/{image_name}/metadata` gets the new metadata
- `images/{image_name}/full` gets the full-sized image file
- Remove old metadata service
- Add `MetadataAccumulator` node, `CoreMetadataField`, hook up to `LatentsToImage` node
- Add `get_raw()` method to `ItemStorage`, retrieves the row from DB as a string, no pydantic parsing
- Update `images`-related services to handle storing and retrieving the new metadata
- Add `get_metadata_graph_from_raw_session` which extracts the `graph` from `session` without needing to hydrate the session in pydantic, in preparation for providing it as metadata; also removes all references to the `MetadataAccumulator` node
2023-07-13 15:40:05 +10:00
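A sketch of a client against the split routes listed in the commit above; host and prefix are assumptions:

```python
import requests

BASE = "http://localhost:9090/api/v1/images"  # host and prefix assumed
name = "00001.png"

dto = requests.get(f"{BASE}/{name}").json()                # the image DTO
meta = requests.get(f"{BASE}/{name}/metadata").json()      # metadata + unexpanded graph
image_bytes = requests.get(f"{BASE}/{name}/full").content  # full-sized file
```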
eb0d55263b fix(mm): make model config attribute names consistent
Our model fields use `model_name`, but the API response uses `name`. Some places use `model_type`, but the API response uses `type`.

Changed the API response to provide `model_name` and `model_type`, which simplifies how we manage models on the client substantially.
2023-07-13 15:40:05 +10:00
be02a55cac output stringified error for session and invocation errors 2023-07-13 15:24:56 +10:00
8a25e22fb0 Update en.json 2023-07-13 14:42:09 +10:00
90441c4257 Update TopCenterPanel.tsx 2023-07-13 14:41:00 +10:00
99c1d5c044 Update nodesSlice.ts 2023-07-13 14:40:33 +10:00
c7dcf1f4a0 Create ClearNodesButton.tsx 2023-07-13 14:40:09 +10:00
7e3b9f1320 fix: Inpaint not working with some schedulers
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-07-13 15:06:03 +12:00
1c2144794c Merge branch 'main' into mm-ui 2023-07-13 13:58:22 +12:00
10bb05b753 feat: Add Aspect Ratio To Canvas Bounding Box (#3717) 2023-07-13 13:57:14 +12:00
bc7c0f75a0 fix: Rename toggleBoundingBoxDimension to flipBoundingBoxAxes 2023-07-13 13:53:15 +12:00
b7a4f3c2cb Merge branch 'main' into bbox-ar 2023-07-13 13:45:08 +12:00
9779276a8f feat: Save and Loads Nodes From Disk (#3724) 2023-07-13 13:40:58 +12:00
2cfe67bf1f Merge branch 'main' into save-load-nodes 2023-07-13 13:37:36 +12:00
71e34ac256 Merge branch 'main' into mm-ui 2023-07-13 12:48:43 +12:00
212156cb15 (ci) remove testing branch 2023-07-12 16:51:15 -04:00
0b0efa82e9 (docker) ROCm support fixes - contributed by @Rubonnek 2023-07-12 16:51:15 -04:00
a9d7ce8ca4 (ci) free up disk space on GHA runners 2023-07-12 16:51:15 -04:00
d6da7ad922 (docker) dockerfile fixes including PR feedback
When previously using base Debian-ish images, the Invoke image
failed to find CUDA drivers on some RHEL-ish distributions
2023-07-12 16:51:15 -04:00
7111db2e0d (ci) fix container build workflow 2023-07-12 16:51:15 -04:00
c910376bb6 Don't use .env file lines where = is at the end of the line 2023-07-12 16:51:15 -04:00
a674fff17a Update dockerignore, set venv to 3.10, pass cache to yarn vite build 2023-07-12 16:51:15 -04:00
674f42ba9a Pass env vars as build-args, ensure node modules isn't getting passed in 2023-07-12 16:51:15 -04:00
3b1eeda4d4 (docker) only install default models when running the container against a new runtime directory 2023-07-12 16:51:15 -04:00
6fbd643948 (docker) tidy up dockerignore 2023-07-12 16:51:15 -04:00
72a11ec4bc (docker) use docker-compose in deprecated build scripts
temporarily retaining the build scripts for backwards compatibility
2023-07-12 16:51:15 -04:00
e9bc8254dd (docker) add a README for the docker setup 2023-07-12 16:51:15 -04:00
2a5737c146 (docker) add README used by the Runpod template 2023-07-12 16:51:15 -04:00
f3b45d0ad9 (docker) rewrite container implementation with docker-compose support
- rewrite Dockerfile
- add a stage to build the UI
- add docker-compose.yml
- add docker-entrypoint.sh such that any command may be used at runtime
- docker-compose adds .env support - add a sample .env file
2023-07-12 16:51:15 -04:00
4a8172bcd0 disable features that are not supported yet or no longer supported (#3739)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-12 13:03:39 -04:00
67c8cf4bc2 Controlnet model detection 2023-07-12 08:50:19 -04:00
a328986b43 Less naive model detection 2023-07-12 08:50:19 -04:00
c0a4045d31 Merge branch 'main' into save-load-nodes 2023-07-13 00:33:22 +12:00
31bb4bfc61 style: Update Model Manager Styling to new format 2023-07-12 23:12:12 +12:00
3db1aa738c feat: Restore Model Merge functionality 2023-07-12 22:43:06 +12:00
683229e285 fix: Update model convert toast message 2023-07-12 20:44:57 +12:00
2cedf6aed5 feat: Restore Model Convert Functionality 2023-07-12 20:40:58 +12:00
6238a53fdd feat: Add basic form validation for path input 2023-07-12 20:11:05 +12:00
310e401b03 feat: Create basic IAIMantineTextInput component for form usage 2023-07-12 20:10:33 +12:00
3568e28b1c fix: Type resolutions & Bug Fixes
- Fix checkpoint filter not working
- Resolve all typescript and undefined issues in Model Manager List / Edit Forms and main panel
2023-07-12 19:05:16 +12:00
5a6ad99d4e feat: Restore Delete Model Functionality 2023-07-12 16:39:07 +12:00
afb46564e8 feat: Restore Update Model functionality 2023-07-12 16:13:49 +12:00
0282aa83c5 feat: Do not store edge styling data when saving a graph setup 2023-07-12 14:32:54 +12:00
af239fa122 installer installs torchmetrics 0.11.4 (#3733)
* fix the test of the config system

* Add torchmetrics==0.11.4 to installer

- Closes #3700
- Closes #3658

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-07-11 22:15:46 -04:00
84af35597d fix: Update Load & Save Icons to FontAwesome 2023-07-12 13:58:14 +12:00
3b61a3abeb Merge branch 'main' into save-load-nodes 2023-07-12 13:52:26 +12:00
c1f2a9d56c Node Editor: QoL Fixes (#3734)
- Make the viewport fit to view on Init
- Update Reload Schema to an icon.
2023-07-12 13:33:18 +12:00
222d8b05a6 fix: Update Sync icon to FontAwesome 2023-07-12 13:31:24 +12:00
cd11d08d74 feat: Update Reload Schema button 2023-07-12 13:14:14 +12:00
acea304348 feat(node-editor): fit view on init 2023-07-12 13:11:43 +12:00
c3adb301a0 fix the test of the config system 2023-07-11 17:46:16 -04:00
b444b8db25 chore: Regen Schema 2023-07-12 09:17:43 +12:00
b8a9b499df Merge branch 'main' into fix/controlnet_cfg_inj_cond 2023-07-11 23:43:47 +03:00
75c5ce46bc merged SDXLModelLoader into ModelLoader invocation 2023-07-11 16:33:08 -04:00
e0a7ec6e95 Branch for invokeai 3.0-beta bugfixes (#3683)
Model installer and documentation fixes for the beta branch.
2023-07-11 16:22:18 -04:00
25591788c1 fix conflicts 2023-07-11 15:55:10 -04:00
b6b22dc799 feat: Update Reload Schema button 2023-07-12 07:50:11 +12:00
dab03fb646 rename gpu_mem_reserved to max_vram_cache_size
To be consistent with max_cache_size, the amount of memory to hold in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
2023-07-11 15:25:39 -04:00
d32f9f7cb0 reverse logic of gpu_mem_reserved
- gpu_mem_reserved now indicates the amount of VRAM that will be reserved
  for model caching (similar to max_cache_size).
2023-07-11 15:16:40 -04:00
fabcf276ac rebuild front end 2023-07-11 14:45:46 -04:00
358ced6bab SDXL Prompt and t2l nodes draft, add fp32 to vae decode 2023-07-11 18:19:36 +03:00
9bd6b6068c Merge branch 'main' into release/invokeai-3-0-beta 2023-07-11 10:57:59 -04:00
f6302aa691 Merge branch 'main' into release/invokeai-3-0-beta 2023-07-11 10:57:36 -04:00
8b62eb364c bump version 2023-07-11 10:57:17 -04:00
6b93c1451f do not crash when probing an unknown model type 2023-07-11 10:56:47 -04:00
34cff848c7 do not display sdxl models in main model loader 2023-07-11 08:51:02 -04:00
5bf144e6bc feat(node-editor): fit view on init 2023-07-11 18:22:50 +12:00
4d9a342437 feat: Parametrize useGetMainModelsQuery 2023-07-11 16:33:26 +12:00
7ce43692c2 feat: Add multi param query support to getMainModels 2023-07-11 14:50:56 +12:00
23d8a2777e add ability to filter list_models on list of base models 2023-07-10 21:59:32 -04:00
6733f5bfec always enable these things on txt2img tab (#3726) 2023-07-11 13:14:03 +12:00
913789d966 Merge branch 'main' into maryhipp/enable-wh-for-txt-2-img 2023-07-11 13:13:41 +12:00
48efcb0ba9 always enable these things on txt2img tab 2023-07-10 20:19:03 -04:00
8e42502dfd partial implementation of SDXL model loader 2023-07-10 20:18:30 -04:00
e06a6bb077 disable hotkey for lightbox if lightbox is disabled (#3725) 2023-07-11 11:51:54 +12:00
d8ebbd258a Merge branch 'main' into sdxl-support 2023-07-10 18:51:03 -04:00
83ec4c983c Merge branch 'main' into lstein/keep-models-in-vram 2023-07-10 18:47:05 -04:00
c9c61ee459 Update invokeai/app/services/config.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-07-10 18:46:32 -04:00
83eb511330 disable hotkey for lightbox if lightbox is disabled 2023-07-10 18:44:54 -04:00
bbdb26511a feat: Fit to view on load rather than using older position 2023-07-11 09:44:36 +12:00
b9767e9c6e feat: Save and Loads Nodes From Disk 2023-07-11 07:22:45 +12:00
f46f8058be load thumbnail 2023-07-10 23:47:49 +10:00
18e2b130fc disable multiselect 2023-07-10 23:47:49 +10:00
0bfa5ffd8e feat: Make BBox Handles adapt to Aspect Ratio lock 2023-07-10 20:37:00 +12:00
15175bb998 feat: Add Aspect Ratio To Canvas Bounding Box 2023-07-10 20:04:32 +12:00
b6fabe5146 feat: Add Aspect Ratio (#3709)
- Adds Aspect Ratio functionality to the UI
- The ratios are placeholders. Feel free to add any ratios you want.
 

https://github.com/invoke-ai/InvokeAI/assets/54517381/43921f57-fe0a-457f-baf2-b003310d4f85

- I did not add the same to the Bounding Box width and height on the canvas, but it's very easy to extend it to that too. So feel free to add it if you want to.
2023-07-10 18:12:12 +12:00
964c71dcb0 feat: Add Swap Sizes 2023-07-10 18:10:57 +12:00
3476c58702 Merge branch 'main' into aspect-ratio 2023-07-10 17:13:27 +12:00
8b4e153acc ui, db: rand fixes (#3715)
[feat(ui): memoize ImageContextMenu
selector](265996d230)

Without the selector itself being memoized, the gallery was rerendering
on every progress image.

[feat(ui): memoize NextPrevImageButtons
component](a7b8109ac2)

This was rerendering on every progress image, now it doesn't

[fix(ui): correctly set disabled on invoke button during
generation](1c45d18e6d)

It wasn't disabled when it should have been, looked clickable during
generation.

[fix(nodes): remove board_id column from images
table](00e26ffa9a)

This is extraneous; the `board_images` table holds image-board
relationships. @maryhipp
2023-07-10 17:10:28 +12:00
00e26ffa9a fix(nodes): remove board_id column from images table
This is extraneous; the `board_images` table holds image-board relationships.
2023-07-10 11:30:35 +10:00
1c45d18e6d fix(ui): correctly set disabled on invoke button during generation
It wasn't disabled when it should have been, looked clickable during generation.
2023-07-10 11:23:13 +10:00
a7b8109ac2 feat(ui): memoize NextPrevImageButtons component
This was rerendering on every progress image, now it doesn't
2023-07-10 11:22:34 +10:00
265996d230 feat(ui): memoize ImageContextMenu selector
Without the selector itself being memoized, the gallery was rerendering on every progress image.
2023-07-10 11:21:56 +10:00
bf2b5b5cd4 improvements to sdxl support in model manager
- Move SDXL-related models to models/sdxl.py
- Create separate base type BaseModelType.StableDiffusionXLRefiner for the refiner
  models.
2023-07-09 20:42:03 -04:00
5759a390f9 introduce gpu_mem_reserved configuration parameter 2023-07-09 18:35:04 -04:00
130249a2dd add model loading support for SDXL 2023-07-09 15:47:06 -04:00
8d7dba937d fix undefined variable 2023-07-09 14:37:45 -04:00
d6cb0e54b3 don't unload models from GPU until the space is needed 2023-07-09 14:26:30 -04:00
2f3190ad6c merge with main 2023-07-09 13:28:05 -04:00
f9dc5a0530 bump version 2023-07-09 13:27:11 -04:00
f335363a6f Merge branch 'main' into release/invokeai-3-0-beta 2023-07-09 13:26:49 -04:00
11d78ad75f Updating Docs (#3456)
Opening this PR to make incremental doc updates and improvements ahead
of 3.0
2023-07-09 13:26:19 -04:00
2ad95f961c Merge branch 'main' into doc_updates_23 2023-07-09 13:25:58 -04:00
f2b2ebfffa merge with main, resolve conflicts 2023-07-09 13:25:32 -04:00
dfe338fc50 fix(ui): fix missing import 2023-07-09 22:47:54 +10:00
c5539b442c feat(api): set max-age for images
Image files are immutable and we expect deletion to result in no further requests for a given image, so we can set the max-age to something thicc.

Resolves #3426
2023-07-09 22:42:05 +10:00
0e178c3bb7 feat(ui): aspect ratio styling 2023-07-09 22:13:38 +10:00
50218f1595 fix(ui): fix number input on aspect ratio 2023-07-09 22:13:26 +10:00
cafd97e5bc fix: Reset handler not adjusting correctly 2023-07-09 23:24:15 +12:00
d01d5b6fa9 feat: Add Aspect Ratio 2023-07-09 23:18:06 +12:00
344d87c9f1 Add Cancel Button button to nodes tab (#3706)
Just a small thing for now, as nodes are all still WIP, but since @psychedelicious was nice enough to add the progress image node for me, what I noticed was missing is the cancel button on the nodes tab
2023-07-09 15:13:19 +12:00
5b876bd646 Add Stop button to nodes tab 2023-07-09 11:48:31 +10:00
be6f366f6b fix(api): fix for borked windows mimetypes registry (#3705)
It's possible for the Windows mimetypes for js to be changed and cause
content-type errors when running the app.

Explicitly set the mimetypes to rectify this. Note that the root cause
is a misconfiguration on the client - not our end.

See
https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
2023-07-09 13:11:24 +12:00
4640969037 fix(api): fix for borked windows mimetypes registry
It's possible for the Windows mimetypes for js to be changed and cause content-type errors when running the app.

Explicitly set the mimetypes to rectify this. Note that the root cause is a misconfiguration on the client - not our end.

See https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
2023-07-09 11:05:01 +10:00
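The standard-library way to pin these down, regardless of registry state, is along these lines (a sketch; the exact set of types registered may differ):

```python
import mimetypes

# Override whatever the (possibly borked) Windows registry says, so the
# server always sends the correct content-type for these extensions.
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
```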
d7218d44d7 feat(ui): add progress image node
it is excluded from graph, so you can add it without affecting generation
2023-07-09 10:51:08 +10:00
2454b51d51 fix(ui): escape on embedding popup closes it 2023-07-09 10:47:30 +10:00
9cee861b4c add load more images to the right arrow (#3694)
@psychedelicious @blessedcoolant Somehow I deleted the branch the other
version of this pull request was on. 🤭

Just an idea; if you think it's worthwhile please make changes (I did
what I could). I added a load-more to the right arrow to avoid having to
open the gallery to load more images.

I am not sure about the icon I used; maybe it should just be the normal
arrow, so you don't even need to show it's loading more images.

There is an issue with it not disappearing once all images have been
loaded (I did play around for a while to try and fix that).
2023-07-09 11:56:55 +12:00
df27218f96 Merge branch 'main' into main 2023-07-09 11:56:17 +12:00
d582cf2961 default launcher to choice [1] not [2] 2023-07-08 19:53:23 -04:00
b6cc4df1d8 report processing stack traces to the console 2023-07-08 19:48:32 -04:00
e6a84c5ae5 fix: Rearrange Model Select to take full width (#3701)
Some users want the model select to take full width because their model
names might be long. As this is a frequently used feature, rearrange it
to do that, followed by the VAE (as it is related to the model) and the
Sampler next to it.
2023-07-09 11:01:26 +12:00
5fb24197cd fix: Rearrange Model Select to take full width 2023-07-09 07:23:31 +12:00
5f7435955e if models.yaml doesn't exist, rebuild it 2023-07-08 15:13:51 -04:00
f4aa28bee0 bump version 2023-07-08 14:52:29 -04:00
3616ac8754 model installer calls invokeai-configure if something wrong with root 2023-07-08 12:45:23 -04:00
42fbaf0647 feat: Upgrade Diffusers to 0.18.1 2023-07-08 12:07:47 -04:00
f7968ef8ce feat: Upgrade Diffusers to 0.18.1 (#3699) 2023-07-08 12:07:09 -04:00
92d4486214 don't write 'version:' to the invokeai.yaml file 2023-07-08 12:06:23 -04:00
6c17607a2b feat: Upgrade Diffusers to 0.18.1 2023-07-09 03:54:20 +12:00
69ef1e1e56 speculative change to upgrade script 2023-07-08 11:45:26 -04:00
0cceb81ec2 Version of _find_root() that works in conda environment (#3696)
I made a recent change to the function that finds the default root
directory locatoin that broke it when run under Conda (where VIRTUAL_ENV
is not set). This revision fixes the issue.
2023-07-09 02:51:27 +12:00
9af61d3ff5 Merge branch 'main' into lstein/find-root-works-under-conda 2023-07-09 02:42:59 +12:00
3001e4c947 feat(ui): update right arrow gallery load more
- add hotkey support
- add loading state
- only show if there are more images to load
2023-07-08 10:29:31 -04:00
2c956806d7 Update NextPrevImageButtons.tsx 2023-07-08 10:29:31 -04:00
be06d4c0af fix(ui): fix selection on dropdowns
Mantine's multiselect does not let you edit the search box with mouse, paste into it, etc. Normal select is fine.

I can't remember why I made Lora etc multiselects, but everything seems to work with normal selects, so I've changed to that.
2023-07-08 10:29:19 -04:00
81817532f8 fix(ui): fix tab translations
model manager was using the wrong key due to the tabs render func subbing values in. made translation key a prop of a tab item.
2023-07-08 10:29:05 -04:00
ae835f47b6 add missing frontend files 2023-07-08 10:18:47 -04:00
8a3072db1a fix image upload issue 2023-07-08 10:14:55 -04:00
bd9786564c merge with main 2023-07-08 10:11:25 -04:00
b2a5e1922d Merge branch 'release/invokeai-3-0-beta' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-beta 2023-07-08 09:45:26 -04:00
f6ecee926f version of _find_root() that works in conda environment 2023-07-08 09:02:17 -04:00
454c2c0952 version of _find_root() that works in conda environment 2023-07-08 09:01:05 -04:00
c2b0f83be3 alternative version of _find_root() 2023-07-08 08:38:45 -04:00
0f33a98e95 feat: Add App Version to UI (#3692)
![opera_jpFG2RBO0c](https://github.com/invoke-ai/InvokeAI/assets/54517381/4a3a1da4-efbd-470c-9870-cfeab5fb7580)
2023-07-08 22:16:26 +12:00
b27bf7bb0c Merge branch 'main' into add-app-version 2023-07-08 21:58:17 +12:00
0c528f22a7 fix(ui): improve initial gallery loading logic
- `isLoading` - now `true` *only* on first load
- added `isFetching` - `true` whenever gallery images are fetching
- on first load, show a spinner instead of skeletons. this prevents an awkward flash of skeletons into empty gallery when the gallery doesn't have enough images to fill it.
- removed `imageCategoriesChanged` listener, because now on app start both images and assets will be populated. Leaving this in caused janky flashes of skeletons when switching gallery tabs while the gallery doesn't have images to load
2023-07-08 19:57:36 +10:00
d418e763ce fix(ui): fix controlnet processing fallback dimensions
Just made it a spinner, getting it to be styled correctly otherwise is a pain
2023-07-08 19:57:36 +10:00
07ce53678b fix(ui): fix drag preview image dimensions 2023-07-08 19:57:36 +10:00
173d3e6918 fix(ui): ensure initial gallery fetch happens once, fix skeleton count for initial fetch 2023-07-08 19:57:36 +10:00
18b6c1a24b feat(ui): fill up gallery on app start
taking the coward's way out on this and just fetching 100 images & 100 assets on app start...

- add `appStarted` action, dispatched once on mount in App.tsx. listener fetches 100 images & 100 assets
- fix bug with selectedBoardId & assets tab
2023-07-08 19:57:36 +10:00
cbecf3cb89 handle case where user has no images 2023-07-08 19:57:36 +10:00
84645495a9 load images for whichever tab youre on 2023-07-08 19:57:36 +10:00
6399055f7f make sure images tab is active if auto-switch to new images is on 2023-07-08 19:57:36 +10:00
078a829b3a feat(ui): add hover show/hide to appVersion 2023-07-08 19:55:19 +10:00
3333805821 feat: Add App Version to UI 2023-07-08 21:31:17 +12:00
1cd09a5a53 fix(ui): fix inconsistent shift modifier capture (#3691)
The shift key listener didn't catch presses when focused in a textarea or input field, causing jank on slider number inputs.

Add keydown and keyup listeners to all such fields, which ensures that
the `shift` state is always correct.

Also add the action tracking it to `actionsDenylist` to not clutter up
devtools.
2023-07-08 21:13:04 +12:00
a0ccb4385f fix(ui): fix inconsistent shift modifier capture
The shift key listener didn't catch presses when focused in a textarea or input field, causing jank on slider number inputs.

Add keydown and keyup listeners to all such fields, which ensures that the `shift` state is always correct.

Also add the action tracking it to `actionsDenylist` to not clutter up devtools.
2023-07-08 18:52:37 +10:00
26cea7b13d fix(ui): do not disable show progress toggle while generating (#3690) 2023-07-08 20:25:09 +12:00
2c78ac4a13 Merge branch 'main' into fix/ui/fix-progress-toggle 2023-07-08 20:24:23 +12:00
018cd00b2f fix(ui): fix readonly inputs (#3689)
There was a prop on IAISlider to make the input component readonly - I didn't know this existed and at some point used a component with that prop as a template for other sliders, copying the flag over.

It's not actually used anywhere, so I removed the prop entirely,
enabling the number inputs everywhere.
2023-07-08 20:24:01 +12:00
e715aa075d Merge branch 'main' into fix/ui/fix-inputs-readonly 2023-07-08 20:23:33 +12:00
681470e508 ui: add cpu noise (#3688)
![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/a6a61cd1-5ac8-4a6b-b6bc-7eb31777571a)
2023-07-08 20:23:22 +12:00
5146e92463 fix(ui): do not disable show progress toggle while generating 2023-07-08 17:23:36 +10:00
e7370e5ef3 fix(ui): fix readonly inputs
There was a prop on IAISlider to make the input component readonly - I didn't know this existed and at some point used a component with that prop as a template for other sliders, copying the flag over.

It's not actually used anywhere, so I removed the prop entirely, enabling the number inputs everywhere.
2023-07-08 17:16:34 +10:00
a73206c105 feat(ui): add cpu noise to linear graphs 2023-07-08 14:52:19 +10:00
0138f52220 feat(ui): add ui for cpu noise
not hooked up to graphs
2023-07-08 14:15:13 +10:00
2bc99f5b6c Revert "get uploads working again" 2023-07-08 12:22:10 +10:00
b11d5970f6 get uploads working again (#3679)
I'm not sure if this was just my local install, but even after a fresh
`yarn install` my upload network request was failing because no file was
passed in. I don't think the `bodySerializer` part is getting run
2023-07-07 21:37:37 -04:00
92a83da416 get uploads working again (#3679)
I'm not sure if this was just my local install, but even after a fresh
`yarn install` my upload network request was failing because no file was
passed in. I don't think the `bodySerializer` part is getting run
2023-07-07 21:34:51 -04:00
e1c7012125 Merge branch 'main' into maryhipp/restore-upload-functionality 2023-07-07 21:34:28 -04:00
8e8f9cce0f print version when --version provided at command line 2023-07-07 20:47:29 -04:00
06961072c8 fix en.json "modelManager" key 2023-07-07 20:19:51 -04:00
0ec00e3d11 rebuild front end 2023-07-07 17:47:47 -04:00
657e8031bb Merge branch 'main' into release/invokeai-3-0-beta 2023-07-07 17:45:18 -04:00
10d3bccf32 Mac MPS FP16 fixes (#3641)
This PR is to allow FP16 precision to work on Macs with MPS. In
addition, it centralizes the torch fixes/workarounds required for MPS
into a new backend utility `mps_fixes.py`. This is conditionally
imported in `api_app.py`/`cli_app.py`.

Many MANY thanks to @StAlKeR7779 for patiently working to debug and fix
these issues.
2023-07-07 17:43:23 -04:00
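A sketch of the conditional import this PR describes; the module path is illustrative:

```python
import torch

# Only pull in the MPS workarounds on Apple silicon; importing the module
# is enough, since it monkey-patches the torch ops it fixes.
if torch.backends.mps.is_available():
    import invokeai.backend.util.mps_fixes  # noqa: F401  # path assumed
```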
b8e53ca135 fix version number 2023-07-07 17:28:10 -04:00
24f6fecdd5 first 3.0.0 beta release candidate 2023-07-07 17:27:12 -04:00
fefe56599f fixes ImportError described in #3658. (#3668)
The issue was introduced by a new release of torchmetrics.
2023-07-07 17:23:37 -04:00
235c14ca2c Merge branch 'main' into maryhipp/restore-upload-functionality 2023-07-07 17:17:27 -04:00
6259142078 Merge branch 'main' into patch-1 2023-07-07 17:16:37 -04:00
f32a2f135c Merge branch 'release/invokeai-3-0-alpha' of https://github.com/invoke-ai/InvokeAI into release/invokeai-3-0-alpha 2023-07-08 06:30:04 +12:00
f4fe878781 cleanup: No longer used. 2023-07-08 06:27:11 +12:00
97b2ec58e2 Merge branch 'main' into release/invokeai-3-0-alpha 2023-07-07 14:18:12 -04:00
3ddbb70bd7 prop to hide toggle for advanced settings (#3681) 2023-07-08 06:13:19 +12:00
3dc42869f4 prop to hide toggle for advanced settings 2023-07-07 14:03:37 -04:00
bdbdcabcdf add ability to disable lora, ti, dynamic prompts, vae selection (#3677) 2023-07-08 06:00:34 +12:00
294336b046 switch wording to embeddings 2023-07-07 13:58:07 -04:00
fd51edfc81 remove log 2023-07-07 12:04:41 -04:00
fbac11a521 get uploads working again 2023-07-07 12:02:22 -04:00
01b27a03a8 Merge branch 'maryhipp/hide-some-things' of https://github.com/invoke-ai/InvokeAI into maryhipp/hide-some-things 2023-07-07 11:45:05 -04:00
d9acb0eea6 fix bug 2023-07-07 11:44:58 -04:00
1ed72cdbed Merge branch 'main' into maryhipp/hide-some-things 2023-07-07 11:34:32 -04:00
d368a1de0c feat(ui): improve embed button styles (#3676)
![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/33bfc9c1-f554-459c-934b-c02d2817525f)

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/7ee2d020-ebea-437c-8b92-f13e4cb148b9)
2023-07-08 03:24:04 +12:00
2933d81118 cleanup 2023-07-07 11:16:23 -04:00
888c47d37b add ability to disable lora, ti, dynamic prompts, vae selection 2023-07-07 11:13:42 -04:00
8d88ad3b8d restore ability to launch web server with invokeai --web 2023-07-07 10:07:15 -04:00
56f4712814 fix checkpoint VAE handling in migrate script 2023-07-07 09:34:42 -04:00
78bcaec4da feat(ui): improve embed button styles 2023-07-07 23:14:31 +10:00
2cbe98b1b1 fix(ui): resolve merge conflicts 2023-07-07 22:50:22 +10:00
8457fcf7d3 feat(ui): finalize base model compatibility for lora, ti, vae 2023-07-07 22:50:22 +10:00
a9a4081f51 add modelSelected middleware to clear submodels on base_model change 2023-07-07 22:50:22 +10:00
b9a1aa38e3 disable submodels that have incompatible base models 2023-07-07 22:50:22 +10:00
6356dc335f change model store to object, update main model and vae dropdowns 2023-07-07 22:50:22 +10:00
9f58ed35cf improve user migration experience
- No longer fail root directory probing if invokeai.yaml is missing
  (test is now whether a `models/core` directory exists).
- Migrate script does not overwrite previously-installed models.
- Can run migrate script on an existing 2.3 version directory
  with --from and --to pointing to same 2.3 root.
2023-07-07 08:18:46 -04:00
909fe047e4 fix: Adjust clip skip layer count based on model (#3675)
Clip Skip breaks when you supply a number greater than the number of layers for the model type, so we cap it based on the model on the frontend:

- `sd-1` at 12
- `sd-2` at 24
- Will update later to whatever SDXL needs if it is different.

- Also fixes LoRAs breaking with Clip Skip.
2023-07-07 23:46:09 +12:00
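A sketch of the clamp, with the per-model layer counts taken from the PR text; the function name and fallback are assumptions:

```python
CLIP_SKIP_MAX = {"sd-1": 12, "sd-2": 24}  # per the PR above; SDXL TBD

def clamp_clip_skip(value: int, base_model: str) -> int:
    # Cap clip skip at the layer count of the selected base model.
    return min(value, CLIP_SKIP_MAX.get(base_model, 12))  # default assumed
```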
a8fc75b6d0 feat(ui): make clipSkip activeLabel "Clip Skip"
we know it's active if it displays
2023-07-07 21:42:16 +10:00
74557c8b6e fix: Loras breaking with clip skip 2023-07-07 23:27:21 +12:00
53cb200f85 fix: Clamp clipskip value when model changes 2023-07-07 19:29:11 +12:00
a4dec53b4d fix: Adjust clip skip layer count based on model 2023-07-07 19:05:10 +12:00
803e1aaa17 feat(ui): update openapi-fetch; fix upload issue
My PR to fix an issue with the handling of formdata in `openapi-fetch` is released. This means we no longer need to patch the package (no patches at all now!).

This PR bumps its version and adds a transformer to our typegen script to handle typing binary form fields correctly as `Blob`.

Also regens types.
2023-07-07 16:36:42 +10:00
7481508282 feat: Add Clip Skip (#3666) 2023-07-07 16:28:17 +12:00
7aa918677e Merge branch 'main' into feat/clip_skip 2023-07-07 16:21:53 +12:00
c6d6b33e3c feat: Reset clipSkip when advanced options is turned off 2023-07-07 16:21:16 +12:00
54f3686e3b merge with main, fix conflicts 2023-07-06 15:21:45 -04:00
f78f10bef6 Merge branch 'lstein/model-manager-router-api' 2023-07-06 15:13:41 -04:00
e9352227f3 add merge api 2023-07-06 15:12:34 -04:00
80575344fc Merge branch 'main' into patch-1 2023-07-06 15:11:40 -04:00
6cb7df75de Add REACT API routes for model manager (#3639)
This PR adds the following API methods for managing models:

* list_models (GET)
* update_model (PATCH)
* import_model (POST)
* delete_model (DELETE)
* convert_model (PUT)
* merge_models (PUT)
2023-07-06 15:10:37 -04:00
1ac787f3c1 feat: Change Clip Skip to Slider & Add Collapse Active Text 2023-07-07 06:37:07 +12:00
bc5371eeee Merge branch 'main' into feat/clip_skip 2023-07-07 06:03:39 +12:00
ce7803231b feat: Add Clip Skip To Linear UI 2023-07-07 05:57:39 +12:00
e573a533ae remove redundant import 2023-07-06 13:24:58 -04:00
581be42c75 Merge branch 'main' into lstein/model-manager-router-api 2023-07-06 13:20:36 -04:00
90c66aab3d merge with upstream 2023-07-06 13:17:02 -04:00
3e925fbf34 model merging API ready for testing 2023-07-06 13:15:15 -04:00
75b28eb79b Update CONCEPTS.md 2023-07-06 12:22:52 -04:00
ec7c2f07c6 model merge backend, CLI and TUI working 2023-07-06 12:21:42 -04:00
2eddd5db7d Update and rename TEXTUAL_INVERSION.md to TRAINING.md 2023-07-06 11:52:49 -04:00
82978d3ee5 Update Combinatorial Setting Information 2023-07-06 11:28:21 -04:00
b250d1ec86 Merge branch 'main' into doc_updates_23 2023-07-06 11:24:42 -04:00
48258c4bb8 wip(docs): ELI5 Tutorial For Invocations 2023-07-06 11:24:05 -04:00
d5f90b1a02 Improved loading for UI (#3667)
* load images on gallery render

* wait for models to be loaded before you can invoke

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-06 14:48:42 +00:00
a9e77675a8 Move clip skip to separate node 2023-07-06 17:39:49 +03:00
94faa5de14 fixes ImportError described in #3658.
The issue was introduced by a new release of torchmetrics.
2023-07-06 16:16:02 +02:00
7a0154a7b8 expose max_cache_size to invokeai-configure interface (#3664)
This PR allows the user to set the model manager cache size from within
the `invokeai-configure` TUI.
2023-07-07 01:58:22 +12:00
b229fe19aa Merge branch 'main' into lstein/configure-max-cache-size 2023-07-07 01:52:12 +12:00
04b57c408f Add clip skip option to prompt node 2023-07-06 16:09:40 +03:00
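Clip skip means conditioning on an earlier CLIP text-encoder layer instead of the last one. A common pattern with a `transformers` text encoder looks like this; it is a sketch of the technique, not the node's exact code:

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

tokens = tokenizer("a photo of a cat", return_tensors="pt")
clip_skip = 2  # 1 = last layer, 2 = second-to-last, ...

with torch.no_grad():
    out = encoder(**tokens, output_hidden_states=True)
    hidden = out.hidden_states[-clip_skip]
    # Re-apply the final layer norm so downstream code sees normalized states.
    cond = encoder.text_model.final_layer_norm(hidden)
```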
2595c1d86f LoRA model loading fixes (#3663)
This PR enables model manager importation of diffusers-style .bin LoRAs.
However, since there is no backend support for this type of LoRA yet,
attempts to use them will result in an unimplemented error.

It closes #3636 and #3637
2023-07-07 01:09:13 +12:00
c2eb6c33b9 Merge branch 'main' into lstein/more-model-loading-fixes 2023-07-07 01:00:02 +12:00
94e38e9769 feat(ui): remove delete image button in gallery
it was really easy to accidentally click; just commented out, easy to add back or add a setting for it in the future
2023-07-06 22:35:50 +10:00
984121d682 only show delete icon if big enough 2023-07-06 22:35:50 +10:00
6f1268e2b1 Merge branch 'main' into lstein/more-model-loading-fixes 2023-07-07 00:32:22 +12:00
405054d802 feat: Add Embedding Picker to Linear UI (#3654) 2023-07-07 00:29:19 +12:00
a901a37433 feat(ui): improve no loaded loras UI 2023-07-06 22:26:54 +10:00
e09c07a97d fix(ui): fix board auto-add 2023-07-06 22:25:05 +10:00
87feae959d feat(ui): improve no loaded embeddings UI 2023-07-06 22:24:50 +10:00
c21245f590 fix(api): make list models params querys, make path /, remove defaults
The list models route should just be the base route path, and should use query parameters as opposed to path parameters (which cannot be optional)

Removed defaults for update model route - for the purposes of the API, we should always be explicit with this
2023-07-06 15:34:50 +10:00
fbd6b25b4d feat(ui): improve ux on TI autocomplete
- cursor reinserts at the end of the trigger
- `enter` closes the select
- popover styling
2023-07-06 14:56:37 +10:00
267f0408bb Update PROMPTS with Dynamic Prompts docs 2023-07-05 23:50:04 -04:00
cc8c34311c Update LICENSE 2023-07-05 23:46:27 -04:00
2415dc1235 feat(ui): refactor embedding ui; now is autocomplete 2023-07-06 13:40:13 +10:00
8f5fcb188c Merge branch 'main' into lstein/model-manager-router-api 2023-07-05 23:16:43 -04:00
f7daa6e71d all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT 2023-07-05 23:13:01 -04:00
3691b55565 fix autoimport crash 2023-07-05 21:53:08 -04:00
1ee41822bc restore .gitignore treatment of frontend/web 2023-07-05 21:30:56 -04:00
fbad839d23 add missing .js files 2023-07-05 21:09:13 -04:00
f610045a14 Merge branch 'main' into mps-fp16-fixes 2023-07-05 21:01:48 -04:00
a7cbcae176 expose max_cache_size to invokeai-configure interface 2023-07-05 20:59:57 -04:00
0a6dccd607 expose max_cache_size to invokeai-configure interface 2023-07-05 20:59:14 -04:00
43c51ff157 Merge branch 'main' into lstein/more-model-loading-fixes 2023-07-05 20:48:15 -04:00
bf25818d76 rebuild front end; bump version 2023-07-05 20:33:28 -04:00
cfa3b2419c partial implementation of merge 2023-07-05 20:25:47 -04:00
d4550b3059 clean up lint errors in lora.py 2023-07-05 19:18:25 -04:00
83d3a043da merge latest changes from main 2023-07-05 19:15:53 -04:00
169ff6368b Update mps_fixes.py - additional torch op for nodes
This fixes scaling in the nodes UI.
2023-07-05 17:47:23 -04:00
71dad6d404 Merge branch 'main' into ti-ui 2023-07-05 16:57:31 -04:00
c21bd806f0 default LoRA weight to 0.75 2023-07-05 16:54:23 -04:00
007d125e40 Update README.md 2023-07-05 16:53:37 -04:00
716d154957 Update LICENSE 2023-07-05 16:41:28 -04:00
685a47cc7d fix crash during lora application 2023-07-05 16:40:47 -04:00
52498cc0b9 Put tokenizer and text encoder in same clip-vit-large-patch14 (#3662)
This PR fixes the migrate script so that it uses the same directory for
both the tokenizer and text encoder CLIP models. This will fix a crash
that occurred during checkpoint->diffusers conversions

This PR also removes the check for an existing models directory in the
target root directory when `invokeai-migrate3` is run.
2023-07-05 16:29:33 -04:00
cb947bcbf0 Merge branch 'main' into lstein/fix-migrate3-textencoder 2023-07-05 16:23:00 -04:00
bbfb5bb1d4 Remove hardcoded cuda device in model manager init (#3624)
There was a line in model_manager.py in which the GPU device was
hardcoded to "cuda". This has now been removed.
2023-07-05 16:22:45 -04:00
f8bbec8572 Recognize and load diffusers-style LoRAs (.bin)
Prevent double-reporting of autoimported models
- closes #3636

Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:21:23 -04:00
863336acbb Recognize and load diffusers-style LoRAs (.bin)
Prevent double-reporting of autoimported models
- closes #3636

Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:19:16 -04:00
90ae8ce26a prevent model install crash "torch needs to be restarted with spawn" 2023-07-05 16:18:20 -04:00
ad5d90aca8 prevent model install crash "torch needs to be restarted with spawn" 2023-07-05 15:38:07 -04:00
5b6dd47b9f add API for model convert 2023-07-05 15:13:21 -04:00
5027d0a603 accept @psychedelicious suggestions above 2023-07-05 14:50:57 -04:00
9f9ce08e44 Merge branch 'main' into lstein/remove-hardcoded-cuda-device 2023-07-05 13:38:33 -04:00
17c5568661 build: remove web ui dist from gitignore (#3650)
The web UI should manage its own .gitignore

I think this would explain why certain files were not making it into the pypi
release
2023-07-05 13:36:16 -04:00
94740e440d Merge branch 'main' into build/gitignore 2023-07-05 13:35:54 -04:00
021e1eca8e Merge branch 'main' into mps-fp16-fixes 2023-07-05 13:19:52 -04:00
5fe722900d allow clip-vit-large-patch14 text encoder to coexist with tokenizer in same directory 2023-07-05 13:15:08 -04:00
cf173b522b allow clip-vit-large-patch14 text encoder to coexist with tokenizer in same directory 2023-07-05 13:14:41 -04:00
ea81ce9489 close modal when user clicks cancel (#3656)
* close modal when user clicks cancel

* close modal when delete image context cleared

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-05 17:12:27 +00:00
8283b80b58 Fix ckpt scanning on conversion (#3653) 2023-07-06 05:09:13 +12:00
9e2d63ef97 Merge branch 'main' into fix/ckpt_convert_scan 2023-07-06 05:01:34 +12:00
dd946790ec Fix loading diffusers ti (#3661) 2023-07-06 05:01:11 +12:00
0ac9dca926 Fix loading diffusers ti 2023-07-05 19:46:00 +03:00
acd3b1a512 build: remove web ui dist from gitignore
The web UI should manage its own .gitignore
2023-07-06 00:39:36 +10:00
bd82c4ace0 model installer confirms deletion of models 2023-07-05 09:57:23 -04:00
e4d92da3a9 fix: Make space for icons in prompt box 2023-07-06 01:48:50 +12:00
9204b72383 feat: Make Embedding Picker a mini toggle 2023-07-06 01:45:00 +12:00
9edf78dd2e merge with main 2023-07-05 09:12:54 -04:00
5d31703224 Merge branch 'release/invokeai-3-0-alpha' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-alpha 2023-07-05 09:05:59 -04:00
6112197edf convert implemented; need router 2023-07-05 09:05:05 -04:00
44d5bef7e4 bump version number 2023-07-05 09:02:35 -04:00
a556bf45bb Merge branch 'main' into ti-ui 2023-07-05 23:42:48 +12:00
818616a0c5 fix(ui): fix prompt resize & style resizer (#3652) 2023-07-05 23:42:23 +12:00
8c9266359d feat: Add Embedding Select To Linear UI 2023-07-05 23:41:15 +12:00
3b324a7d0a Merge branch 'main' into fix/ui/fix-prompt-resize 2023-07-05 23:40:47 +12:00
c8cb43ff2d Fix clip path in migrate script (#3651)
Update path for clip model according to path used in ckpt conversion and
invokeai-configure
2023-07-05 23:38:45 +12:00
ba7345deb4 Merge branch 'main' into mps-fp16-fixes 2023-07-05 07:38:41 -04:00
ee042ab76d Fix ckpt scanning on conversion 2023-07-05 14:18:30 +03:00
596c791844 fix(ui): fix prompt resize & style resizer 2023-07-05 21:02:31 +10:00
780e77d2ae Merge branch 'main' into fix/clip_path 2023-07-05 22:45:52 +12:00
e3fc1b3816 Fix clip path in migrate script 2023-07-05 13:43:09 +03:00
9ad9e91a06 Detect invalid model names when migrating 2.3->3.0 (#3623)
A user discovered that 2.3 models whose symbolic names contain the "/"
character are not imported properly by the `migrate-models-3` script.
This fixes the issue by changing "/" to underscore at import time.
2023-07-05 06:35:54 -04:00
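A minimal sketch of the rename rule described above (the helper name is hypothetical; only the `str.replace` is from the PR):

```python
def sanitize_model_name(name: str) -> str:
    # "/" is reserved as a separator in the 3.0 model-name keys, so a 2.3
    # name containing "/" would break the key scheme; swap it for "_".
    return name.replace("/", "_")

assert sanitize_model_name("stable-diffusion/my-model") == "stable-diffusion_my-model"
```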
307a01d604 when migrating models, changes / to _ in model names to avoid breaking model name keys 2023-07-05 20:27:03 +10:00
56d4ea3252 fix(api): improve mm routes 2023-07-05 20:08:47 +10:00
5d4d0e795c fix(mm): fix up mm service types 2023-07-05 20:07:10 +10:00
0981a7d049 fix(ui): fix dnd on nodes (#3649)
I had broken this earlier today
2023-07-05 21:09:36 +12:00
2a7dee17be fix(ui): fix dnd on nodes
I had broken this earlier today
2023-07-05 19:06:40 +10:00
6c6d600cea fix(ui): deleting image selects first image (#3648)
@mickr777
2023-07-05 21:00:01 +12:00
1c7166d2c6 Merge branch 'main' into fix/ui/delete-image-select 2023-07-05 20:57:34 +12:00
07d7959dc0 feat(ui): improve accordion ux (#3647)
- Accordions now may be opened or closed regardless of whether or not
their contents are enabled or active
- Accordions have a short text indicator alerting the user if their
contents are enabled, either a simple `Enabled` or, for accordions like
LoRA or ControlNet, `X Active` if any are active



https://github.com/invoke-ai/InvokeAI/assets/4822129/43db63bd-7ef3-43f2-8dad-59fc7200af2e
2023-07-05 20:57:23 +12:00
9ebab013c1 fix(ui): deleting image selects first image 2023-07-05 18:21:46 +10:00
e41e8606b5 feat(ui): improve accordion ux
- Accordions now may be opened or closed regardless of whether or not their contents are enabled or active
- Accordions have a short text indicator alerting the user if their contents are enabled, either a simple `Enabled` or, for accordions like LoRA or ControlNet, `X Active` if any are active
2023-07-05 17:33:03 +10:00
6ce867feb4 Fix model detection (#3646) 2023-07-05 19:00:31 +12:00
bc8cfc2baa Merge branch 'main' into fix/model_detect 2023-07-05 18:52:11 +12:00
7170e82f73 expose max_cache_size in config 2023-07-05 02:44:15 -04:00
2beb8f049e Fix model detection 2023-07-05 09:43:46 +03:00
66c10cc2f7 fix: Change Lora weight bounds to -1 to 2 (#3645) 2023-07-05 18:23:06 +12:00
1fb317243d fix: Change Lora weight bounds to -1 to 2 2023-07-05 18:12:45 +12:00
71310a180d feat: Add Lora to Canvas (#3643)
- Add Loras to Canvas
- Revert inference_mode to no_grad coz inference tensors fail with
latent to latent.
2023-07-05 17:15:28 +12:00
1a29a3fe39 feat: Add Lora to Canvas 2023-07-05 16:39:28 +12:00
639d88afd6 revert: inference_mode to no_grad 2023-07-05 16:39:15 +12:00
f155887b7d fix(ui): change multi image drop to not have selection as payload
This caused a lot of re-rendering whenever the selection changed, which caused a huge performance hit. It also made changing the current image lag a bit.

Instead of providing an array of image names as a multi-select dnd payload, there is now no multi-select dnd payload at all - instead, the payload types are used by the `imageDropped` listener to pull the selection out of redux.

Now, the only big re-renders are when the selectionCount changes. In the future I'll figure out a good way to do image names as payload without incurring re-renders.
2023-07-05 13:25:07 +10:00
1358c5eb7d fix(ui): fix selector memoization
Every `GalleryImage` was rerendering any time the app rerendered because the selector function itself was not memoized. This resulted in the memoization cache inside the selector constantly being reset.

Same for `BatchImage`.

Also updated memoization for a few other selectors.
2023-07-05 13:25:07 +10:00
c0501ed5c2 fix: Slow loading of Loras
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-07-05 12:47:34 +10:00
0f0336b6ef fix(ui): fix incorrect lora id processing 2023-07-05 12:47:34 +10:00
52a09422c7 feat(ui): create rtk-query hooks for individual model types
Eg `useGetMainModelsQuery()`, `useGetLoRAModelsQuery()` instead of `useListModelsQuery({base_type})`.

Add specific adapters for each model type. Just more organised and easier to consume models now.

Also updated LoRA UI to use the model name.
2023-07-05 12:47:34 +10:00
c21b56ba31 fix(ui): fix mantine disabled styles 2023-07-05 12:47:34 +10:00
bf895221c2 fix: Tab index not being correct
This probably needs to be updated to use an object rather than an array, so that the index of an item in the array doesn't break the rest of it.
2023-07-05 12:47:34 +10:00
db8862d860 feat(ui): add LoRA ui & update graphs 2023-07-05 12:47:34 +10:00
d537b9f0cb chore(ui): regen types 2023-07-05 12:47:34 +10:00
08d428a5e7 feat(nodes): add lora field, update lora loader 2023-07-05 12:47:34 +10:00
233869b56a Mac MPS FP16 fixes
This PR is to allow FP16 precision to work on Macs with MPS. In addition, it centralizes the torch fixes/workarounds
required for MPS into a new backend utility file `mps_fixes.py`. This is conditionally imported in `api_app.py`/`cli_app.py`.

Many MANY thanks to StAlKeR7779 for patiently working to debug and fix these issues.
2023-07-04 18:10:53 -04:00
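A sketch of the conditional-import pattern the commit describes; the module path is an assumption, as the PR only names the file `mps_fixes.py`:

```python
import torch

# Apply the MPS-specific torch workarounds only on Apple Silicon machines;
# importing the module is what installs the patches.
if torch.backends.mps.is_available():
    from invokeai.backend.util import mps_fixes  # noqa: F401 -- path assumed
```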
5d099f4a49 update_model working 2023-07-04 17:26:57 -04:00
752b4d50cf model_delete method now working 2023-07-04 10:40:32 -04:00
c1c49d9a76 import model returns 404 for invalid path, 409 for duplicate model 2023-07-04 10:08:10 -04:00
92b163e95c (wip) Model Manager 3.0 UI (#3586)
...
2023-07-04 17:34:06 +12:00
af728b4b1d chore(ui): regen types 2023-07-04 15:04:01 +10:00
099082abc1 feat(ui): model manager tab naming tweaks 2023-07-04 14:52:00 +10:00
96bf92ead4 add the import model router 2023-07-04 14:35:47 +10:00
0988725c1b fix: Floating gallery button showing up in Model Manager 2023-07-04 14:35:47 +10:00
089d95baeb fix: Change graph id vals to constants 2023-07-04 14:35:47 +10:00
511978979e feat: Add Custom VAE Support to Linear UI 2023-07-04 14:35:47 +10:00
7e18814dd0 Add standard names for Model Loader Nodes 2023-07-04 14:35:06 +10:00
bd5a764988 Remove 'automatic' from VAE Loader in Nodes 2023-07-04 14:35:06 +10:00
a8a2209560 VAE loader is loading proper VAE. Unclear if it is changing the image 2023-07-04 14:35:06 +10:00
fa8a5838d3 add vae loader 2023-07-04 14:35:06 +10:00
630f3c8b0b fix: Missing VAE Loader stuff 2023-07-04 14:34:41 +10:00
6c6299ce49 fix: Style fixes to the MM edit forms 2023-07-04 14:34:41 +10:00
6684e00f0a wip: Move Merge Models to new panel & use new model fetch 2023-07-04 14:34:41 +10:00
2f8f558df3 wip: Move Add Model from Modal to new Panel 2023-07-04 14:34:41 +10:00
de7b059e67 feat: Port Checkpoint Edit to Mantine Form 2023-07-04 14:34:41 +10:00
33db4e27a0 feat: Update DiffusersEdit Component to use Mantine Form 2023-07-04 14:34:41 +10:00
009c20bfea feat: Add Mantine Form 2023-07-04 14:34:41 +10:00
d61b3818fe feat: Add VAE Select to Linear UI Panels (non func) 2023-07-04 14:34:41 +10:00
51db4d1269 fix: Minor fixes to the VAESelect components 2023-07-04 14:34:41 +10:00
38660a2162 feat: Add vae_model input type front end 2023-07-04 14:34:41 +10:00
5ad6b64721 cleanup: merge conflicts 2023-07-04 14:34:22 +10:00
0da4f4bb6f fix: Add missing Unet, Clip, VAE Field Template types 2023-07-04 14:34:22 +10:00
8d5a953dcb feat: Add VAESelect Component 2023-07-04 14:33:56 +10:00
6c62f41f2e chore: Change PipelineModels to MainModels 2023-07-04 14:33:56 +10:00
2ad5a4ea46 feat: Initial port of Model Manager to new tab 2023-07-04 14:31:48 +10:00
9e35643911 feat: Make new tab for Model Manager 2023-07-04 14:31:24 +10:00
0bb668b8a8 feat: hook up model edit forms 2023-07-04 14:29:42 +10:00
e73f774920 fix: Restore Model display and select functionality 2023-07-04 14:29:42 +10:00
b4b760d9e9 Quash memory leak when compel invocation called (#3633)
This commit prevents each image generation from leaking ~160 MB of VRAM.
Thanks to @damian0815 and @StAlKeR7779 for helping to sort this out.
2023-07-04 06:33:56 +12:00
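A sketch of the fix pattern rather than the actual node code: run the text encoder under `torch.no_grad()` so autograd does not retain activations between generations.

```python
import torch

def encode_prompt(text_encoder: torch.nn.Module, token_ids: torch.Tensor) -> torch.Tensor:
    # Without no_grad(), autograd keeps the encoder's intermediate
    # activations alive, leaking VRAM on every generation.
    with torch.no_grad():
        return text_encoder(token_ids)
```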
4d2c7806fc quash memory leak when compel invocation called 2023-07-03 14:12:35 -04:00
3937428563 Merge branch 'release/invokeai-3-0-alpha' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-alpha 2023-07-03 14:11:28 -04:00
fc419546bc Merge branch 'main' into lstein/remove-hardcoded-cuda-device 2023-07-03 14:10:47 -04:00
252c790969 Add runtime root path to relative vaes and other submodels (#3631)
This PR fixes a crash that would occur when VAEs and other submodels
have a relative path in the config file.
2023-07-03 14:10:33 -04:00
cfd09214d3 Merge branch 'main' into lstein/fix-vae-conversion-crash 2023-07-03 14:03:13 -04:00
b128ba81db Merge branch 'main' into lstein/remove-hardcoded-cuda-device 2023-07-03 13:58:14 -04:00
78857bf5ad Make unit tests work again (#3575)
This PR is for adjusting the unit tests in the `tests` directory so that
they no longer throw errors.

I've removed two tests that were obsoleted by the shift to latent nodes,
but `test_graph_execution_state.py` and `test_invoker.py` are throwing
this validation error:

```
TypeError: InvocationServices.__init__() missing 2 required positional arguments: 'boards' and 'board_images'
```
2023-07-03 12:53:04 -04:00
9c83a4eada Merge branch 'main' into dev/fix-unit-tests 2023-07-03 12:44:02 -04:00
c314b17f5c Add missing k-* legacy sampler names to init file migrate list (#3625)
The `invokeai-configure` script migrates the old invokeai.init file to
the new invokeai.yaml format. However, the parser for the invokeai.init
file was missing the names of the k* samplers and was giving a parser
error on any invokeai.init file that referred to one of these samplers.
This PR fixes the problem.

Ironically, there is no longer the concept of the preferred scheduler in
3.0, and so these sampler names are simply ignored and not written into
`invokeai.yaml`
2023-07-03 12:41:33 -04:00
27088610ed Merge branch 'main' into dev/fix-unit-tests 2023-07-03 12:38:42 -04:00
7f054431df Merge branch 'main' into fix/controlnet_cfg_inj_cond 2023-07-03 12:37:22 -04:00
b17406a985 Merge branch 'main' into lstein/improve-model-install-stability 2023-07-03 12:37:02 -04:00
ebcbfc8a12 Merge branch 'main' into lstein/recognize-legacy-sampler-names 2023-07-03 12:36:00 -04:00
d6de11bd56 resolve merge conflict 2023-07-03 12:19:11 -04:00
ed86d0b708 Union[foo, None]=>Optional[foo] 2023-07-03 12:17:45 -04:00
fb2b2a371d Merge branch 'lstein/fix-vae-conversion-crash' into release/invokeai-3-0-alpha 2023-07-03 11:21:16 -04:00
10d513c5f7 add runtime root path to relative vaes and other submodels 2023-07-03 11:19:33 -04:00
877b187a1b Merge branch 'lstein/restore-3.9-compatibility' into release/invokeai-3-0-alpha 2023-07-03 11:01:34 -04:00
ac9ec4e75a restore 3.9 compatibility by replacing | with Union[] 2023-07-03 10:57:40 -04:00
2465c7987b Revert "restore 3.9 compatibility by replacing | with Union[]"
This reverts commit 76bafeb99e.
2023-07-03 10:56:41 -04:00
73a27918c6 Merge branch 'main' of github.com:invoke-ai/InvokeAI into main 2023-07-03 10:55:12 -04:00
76bafeb99e restore 3.9 compatibility by replacing | with Union[] 2023-07-03 10:55:04 -04:00
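The compatibility issue in a nutshell: PEP 604's `X | Y` annotation syntax is evaluated at definition time and only exists on Python 3.10+, so 3.9 needs `typing.Union`/`Optional`.

```python
from typing import Optional, Union

# Python 3.10+ only -- raises TypeError at import time on 3.9:
#   def load(path: str | None = None) -> dict | list: ...

# 3.9-compatible spelling of the same annotations:
def load(path: Optional[str] = None) -> Union[dict, list]:
    return {} if path is None else []
```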
c33f0ae055 feat(ui): hide batch ui pending logic implementation 2023-07-04 00:26:58 +10:00
90aa97edd4 feat(ui): add multi-select and batch capabilities
This introduces the core functionality for batch operations on images and multiple selection in the gallery/batch manager.

A number of other substantial changes are included:
- `imagesSlice` is consolidated into `gallerySlice`, allowing for simpler selection of filtered images
- `batchSlice` is added to manage the batch
- The wonky context pattern for image deletion has been changed, much simpler now using a `imageDeletionSlice` and redux listeners; this needs to be implemented still for the other image modals
- Minimum gallery size in px implemented as a hook
- Many style fixes & several bug fixes

TODO:
- The UI and UX need to be figured out, especially for controlnet
- Batch processing is not hooked up; generation does not do anything with batch
- Routes to support batch image operations, specifically delete and add/remove to/from boards
2023-07-04 00:18:27 +10:00
fa169b5517 feat(nodes): add ImageCollection node in prep for batch processing 2023-07-04 00:18:27 +10:00
aae60b6142 quash memory leak when compel invocation called 2023-07-03 10:08:10 -04:00
b79740d61d back out torch.no_grad() 2023-07-02 23:03:24 -04:00
8c93c8dda8 add web dist files to enable network pip install 2023-07-02 22:02:53 -04:00
176504a475 add .js, .woff2 and .css files to web/dist/assets 2023-07-02 21:50:29 -04:00
fa8ccd2a94 add no_grad() to compel node invoke() method 2023-07-02 18:20:16 -04:00
6935858ef3 add debugging messages to aid in memory leak tracking 2023-07-02 13:34:53 -04:00
2b67509061 bump version; rebuild frontend 2023-07-02 13:02:31 -04:00
fa1f9939cc adjust invokeai-configure TUI vertical height to show NEXT button on Mac 2023-07-02 09:44:16 -04:00
2d314d2b3d another fix to repo_id loading 2023-07-02 09:18:11 -04:00
42f537f655 Fix Invoke Progress Bar (#3626)
@blessedcoolant it looks like with the new theme buttons not being
transparent the progress bar was completely hidden, I moved to be on
top, however it was not transparent so it hid the invoke text, after
trying for a while couldn't get it to be transparent, so I just made the
height 15%,
2023-07-02 19:12:23 +12:00
f399b36ae6 fix: Progress Bar being broken 2023-07-02 18:49:24 +12:00
a6334750cb Update InvokeButton.tsx 2023-07-02 15:07:01 +10:00
45a551125d Update NodeInvokeButton.tsx 2023-07-02 15:06:32 +10:00
72d64513d0 add borderBottomRadius: '5px', 2023-07-02 15:05:32 +10:00
0e50005643 fix(ui): show skeletons only for currently loading images 2023-07-02 11:55:51 +10:00
19c632e793 remove width 2023-07-02 11:55:51 +10:00
85a4d37883 remove long loading state, introduce loading to gallery and model list 2023-07-02 11:55:51 +10:00
b2775d6b4c Merge branch 'lstein/recognize-legacy-sampler-names' into release/invokeai-3-0-alpha 2023-07-01 21:45:39 -04:00
06694d465d add missing k-* legacy sampler names to init file migrate list 2023-07-01 21:45:14 -04:00
3c2ce51f10 Merge branch 'lstein/remove-hardcoded-cuda-device' into release/invokeai-3-0-alpha 2023-07-01 21:15:58 -04:00
0f02915012 remove hardcoded cuda device in model manager init 2023-07-01 21:15:42 -04:00
0016236889 Merge branch 'lstein/fix-imported-model-names' into release/invokeai-3-0-alpha 2023-07-01 21:09:29 -04:00
f4bd5bb986 when migrating models, changes / to _ in model names to avoid breaking model name keys 2023-07-01 21:08:59 -04:00
1cf61feead print GPU device at startup 2023-07-01 20:47:11 -04:00
5de820f2dc fix updater and model installer 2023-07-01 20:13:28 -04:00
f1fb1c9a60 Merge branch 'lstein/fix-update-script' into release/invokeai-3-0-alpha 2023-07-01 20:13:04 -04:00
f7d8ae20a6 rolled back changes to package.json 2023-07-01 20:07:14 -04:00
9724143ab7 rolled back changes to package.json 2023-07-01 20:05:00 -04:00
ecc5b6eec5 change single to double quotes so that pip install works on windows 2023-07-01 19:56:18 -04:00
4ac9be115e rebuild frontend 2023-07-01 14:48:23 -04:00
7d64a5849f merge draft docs 2023-07-01 14:45:00 -04:00
054b5f484a resolve conflicts with main 2023-07-01 14:42:48 -04:00
41a8f155ed Merge branch 'main' into fix/controlnet_cfg_inj_cond 2023-07-01 14:36:09 -04:00
3458f45a2b Merge branch 'lstein/improve-model-install-stability' into release/invokeai-3-0-alpha 2023-07-01 14:35:35 -04:00
6c80620c25 Merge branch 'main' into release/invokeai-3-0-alpha 2023-07-01 14:34:38 -04:00
f1928d2588 prevent crashes on malformed models 2023-07-01 14:32:58 -04:00
96212bb35f feat(ui): gallery minSize tweak (#3618)
- Set min size for floating gallery panel
- Correct the default pinned width (it cannot be less than the min width
and this was sometimes happening during window resize)
2023-07-01 22:37:08 +12:00
f46c50f69a feat(ui): gallery minSize tweak
- Set min size for floating gallery panel
- Correct the default pinned width (it cannot be less than the min width and this was sometimes happening during window resize)
2023-07-01 20:27:52 +10:00
3aa6a7e7df feat(ui): minimum gallery size
Add `useMinimumPanelSize()` hook to provide minimum resizable panel sizes (in pixels).

The library we are using for the gallery panel uses percentages only. To provide a minimum size in pixels, we need to do some math to calculate the percentage of window size that corresponds to the desired min width in pixels.
2023-07-01 18:29:55 +10:00
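The arithmetic the hook performs, sketched outside React: the panel library accepts only percentages, so the pixel minimum must be converted against the current window width.

```python
def min_panel_percent(min_width_px: float, window_width_px: float) -> float:
    # Convert a desired minimum pixel width into the percentage of the
    # window width that the resizable-panel library expects.
    return min_width_px / window_width_px * 100

# e.g. a 320 px minimum gallery on a 1920 px wide window:
assert round(min_panel_percent(320, 1920), 2) == 16.67
```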
d9ac36df1d fix incorrect VAE config file path during conversion of ckpts (#3616)
This fixes a "config file not found" error when loading VAE checkpoints.
2023-07-01 11:26:36 +12:00
c74bb5cdbf Merge branch 'main' into lstein/fix-vae-convert 2023-07-01 11:18:21 +12:00
1347fc2f00 fix incorrect VAE config file path during conversion of ckpts 2023-06-30 19:14:06 -04:00
d0834cfa19 export new ColorModeButton component (#3614)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-06-30 09:07:36 -04:00
2b6c9c93e0 fix(ui): fix canvas crash by rolling back swagger-parser (#3611)
The node polyfills needed to run the `swagger-parser` library (used to
dereference the OpenAPI schema) cause the canvas tab to immediately
crash when the package build is used in another react application.

I'm sure this is fixable but it's not clear what is causing the issue
and troubleshooting is very time consuming.

Selectively rolling back the implementation of `swagger-parser`.
2023-06-30 23:34:06 +12:00
9a123ed662 Merge branch 'main' into fix/ui/fix-canvas-crash 2023-06-30 23:31:42 +12:00
a9bc45b8af feat(ui): tweak light mode colors, buttons pop (#3612)
the light mode button colors were way off, much improved
2023-06-30 23:31:30 +12:00
d6cfbe982f feat(ui): tweak light mode colors, buttons pop 2023-06-30 13:15:58 +10:00
30464f4fe1 fix(ui): fix canvas crash by rolling back swagger-parser
The node polyfills needed to run the `swagger-parser` library (used to dereference the OpenAPI schema) cause the canvas tab to immediately crash when the package build is used in another react application.

I'm sure this is fixable but it's not clear what is causing the issue and troubleshooting is very time consuming.

Selectively rolling back the implementation of `swagger-parser`.
2023-06-30 12:24:28 +10:00
877483093a ui: support dark mode (#3592)
[feat(ui): remove themes, add hand-crafted dark and light
modes](032c7e68d0)


Themes are very fun, but due to the differences in perceived saturation
and lightness across the color spectrum, it's impossible to have multiple
themes that look great without hand-crafting *every* shade for *every*
theme. We've ended up with 4 OK themes (well, 3, because the light theme
was pretty bad).

I've removed the themes and added color mode support. There is now a
single dark and light mode,
each with their own color palette and the classic grey / purple / yellow
invoke colors that
@blessedcoolant first designed.

I've re-styled almost everything except the model manager and lightbox,
which I keep forgetting
to work on.

One new concept is the Chakra `layerStyle`. This lets us define "layers"
- think body, first layer,
second layer, etc - that can be applied on various components. By
defining layers, we can be more
consistent about the z-axis and its relationship to color and lightness.
2023-06-30 06:13:43 +12:00
295444c730 cleanup: Minor theme related cleanup 2023-06-30 06:09:14 +12:00
fb015332f2 feat: Add tooltips to color mode switcher 2023-06-30 06:05:08 +12:00
6e917dcbb0 chore: More colors to own files + small color tweaks 2023-06-30 06:04:42 +12:00
032c7e68d0 feat(ui): remove themes, add hand-crafted dark and light modes
Themes are very fun, but due to the differences in perceived saturation and lightness across
the color spectrum, it's impossible to have multiple themes that look great without
hand-crafting *every* shade for *every* theme. We've ended up with 4 OK themes (well, 3,
because the light theme was pretty bad).

I've removed the themes and added color mode support. There is now a single dark and light mode,
each with their own color palette and the classic grey / purple / yellow invoke colors that
@blessedcoolant first designed.

I've re-styled almost everything except the model manager and lightbox, which I keep forgetting
to work on.

One new concept is the Chakra `layerStyle`. This lets us define "layers" - think body, first layer,
second layer, etc - that can be applied on various components. By defining layers, we can be more
consistent about the z-axis and its relationship to color and lightness.
2023-06-30 03:24:36 +10:00
c00aea7a6c tests(nodes): fix nodes tests 2023-06-29 23:11:48 +10:00
28d78a8fb4 Add image board support to invokeai-node-cli (#3594)
This PR corrects a crash during startup of `invokeai-node-cli` due to
failure to initialize the image board service.
2023-06-29 08:20:07 -04:00
2c5b050d82 add image board support to invokeai-node-cli 2023-06-29 22:12:34 +10:00
723d68e496 add image usage for board images and listener to handle actual deletion 2023-06-29 21:14:53 +10:00
ba67e57a7e (wip) delete images along with board 2023-06-29 21:14:53 +10:00
45935caf1d fix query 2023-06-29 21:14:53 +10:00
73f2092ec5 (api) add option to board delete route and logic to services 2023-06-29 21:14:53 +10:00
8297b7e1ae Fix duplicate model key addition when root directory is a relative path (#3607)
This fixes model directory scanning so that it works properly when the
root is a relative path (e.g. ".").
2023-06-29 18:01:22 +12:00
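A hedged sketch of the likely normalization, not the actual fix: resolve the root to an absolute path before scanning, so "." and its absolute form cannot yield two keys for the same model.

```python
from pathlib import Path

def normalized_root(root: str) -> Path:
    # A relative root like "." and the equivalent absolute path would
    # otherwise produce duplicate keys for the same model directory.
    return Path(root).expanduser().resolve()

assert normalized_root(".") == Path.cwd()
```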
5be1e71d1b Merge branch 'main' into lstein/fix-model-scan-on-rel-root 2023-06-29 17:54:12 +12:00
e65e635944 Fix Typo in migrate_to_3.py (#3610)
this caused the vae in the models.yaml to point to the wrong folder
2023-06-29 17:53:50 +12:00
30a917f70c Fix Typo in migrate_to_3.py 2023-06-29 14:45:55 +10:00
4308d593c3 fix(ui): improve IDE TS performance by not resolving JSON
The TS Language Server slows down immensely with our translation JSON, which is used to provide kinda-type-safe translation keys. I say "kinda", because you don't get autocomplete - you only get red squigglies when the key is incorrect.

To improve the performance, we can opt out of this process entirely, at the cost of no red squigglies for translation keys. Hopefully we can resolve this in the future.

It's not clear why this became an issue only recently (like past couple weeks). We've tried rolling back the app dependencies, VSCode extensions, VSCode itself, and the TS version to before the time when the issue started, but nothing seems to improve the performance.

1. Disable `resolveJsonModule` in `tsconfig.json`
2. Ignore TS in `i18n.ts` when importing the JSON
3. Comment out the custom types in `i18.d.ts` entirely

It's possible that only `3` is needed to fix the issue.

I've tested building the app and running the build - it works fine, and translation works fine.
2023-06-28 23:55:44 -04:00
8f6b3660c5 Set use-credentials on commercial deployment if authToken is set on canvas image calls, comment out the UpdateImageUrls on connect listener 2023-06-29 13:55:03 +10:00
fe5e0b103f update README; change default root directory to invokeai-3 2023-06-28 17:47:04 -04:00
218eb8522f tweak launcher option wording 2023-06-28 17:10:07 -04:00
1e97ba3628 merge with fix needed to run installer 2023-06-28 17:04:44 -04:00
ace4f6d586 fix duplicate model key addition when root directory is a relative path 2023-06-28 17:02:03 -04:00
261ca823c0 bump version number 2023-06-28 17:00:38 -04:00
8a90e51408 Apply lora by model patching (#3583)
Rewrite LoRA to be applied by model patching, as it gives us two benefits:
1) On model execution, the result is calculated only against the patched
model weights, while with hooks we need to calculate against the model and
each LoRA.
2) As the LoRA is now patched into the model weights, there is no need to
keep the LoRA in VRAM.

Results:
Speed:
| loras count | hook | patch |
| --- | --- | --- |
| 0 | ~4.92 it/s | ~4.92 it/s |
| 1 | ~3.51 it/s | ~4.89 it/s |
| 2 | ~2.76 it/s | ~4.92 it/s |

VRAM:
| loras count | hook | patch |
| --- | --- | --- |
| 0 | ~3.6 gb | ~3.6 gb |
| 1 | ~4.0 gb | ~3.6 gb |
| 2 | ~4.4 gb | ~3.7 gb |

As this is based on #3547, wait for that to merge.
2023-06-28 15:48:57 -04:00
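An illustrative sketch of weight patching in the standard LoRA convention (not the InvokeAI implementation): fold the low-rank update into the layer weight once, so inference afterwards costs the same as the base model, matching the tables above.

```python
import torch

def patch_lora_weight(weight: torch.Tensor, up: torch.Tensor, down: torch.Tensor,
                      alpha: float, lora_scale: float) -> torch.Tensor:
    # W' = W + s * (up @ down), with s = lora_scale * alpha / rank.
    # Returns a copy of the original weight so the patch can be undone.
    original = weight.detach().clone()
    scale = lora_scale * alpha / down.shape[0]  # down: (rank, in), up: (out, rank)
    weight += scale * (up @ down)
    return original
```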
20fbe81395 Merge branch 'main' into fix/controlnet_cfg_inj_cond 2023-06-28 15:44:50 -04:00
ac46b129bf Merge branch 'main' into feat/lora_model_patch 2023-06-28 22:43:58 +03:00
ff2ae683d1 Update 060_INSTALL_PATCHMATCH.md (#3591)
Installing the 'blas' package is needed on Arch Linux; otherwise
patchmatch fails to initialize with a "libblas.so.3 missing" error.
2023-06-28 15:40:45 -04:00
2714138af2 Merge branch 'main' into patch-1 2023-06-28 15:40:22 -04:00
2d85f9a123 Configuration and model installer for new model layout (#3547)
# Restore invokeai-configure and invokeai-model-install

This PR updates invokeai-configure and invokeai-model-install to work
with the new model manager file layout. It addresses a naming issue for
`ModelType.Main` (was `ModelType.Pipeline`) requested by
@blessedcoolant, and adds back the feature that allows users to dump
models into an `autoimport` directory for discovery at startup time.
2023-06-28 15:31:46 -04:00
79fc708580 warn but do not crash when model scan finds random cruft in models directory 2023-06-28 15:26:42 -04:00
72209d0cc3 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-28 14:49:37 -04:00
fffeb6f7f5 nodes: default to CPU noise (#3598)
this provides reproducible results across platforms.
we can expose this in the app.
2023-06-28 18:24:47 +12:00
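The trick, sketched: sample the noise with a seeded CPU generator and then move it to the execution device, since CUDA/MPS RNGs give different sequences for the same seed.

```python
import torch

def make_noise(seed: int, shape: tuple, device: torch.device) -> torch.Tensor:
    generator = torch.Generator(device="cpu").manual_seed(seed)
    # Same seed -> identical latents on CPU, CUDA, and MPS.
    return torch.randn(shape, generator=generator, device="cpu").to(device)
```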
75614bbba3 Merge branch 'main' into feat/nodes/cpu-noise 2023-06-28 18:22:08 +12:00
201b8430e4 Feat/controlnet extras (#3596)
Trying to get a few ControlNet extras in before 3.0 release:

- SegmentAnything ControlNet preprocessor node
- LeResDepth ControlNet preprocessor node (but commented out till
controlnet_aux v0.0.6 is released & required by InvokeAI)
- TileResampler ControlNet preprocessor node (should be equivalent to
Mikubill/sd-webui-controlnet extension tile_resampler)
- fix for Midas ControlNet preprocessor error with images that have
alpha channel

Example usage of SegmentAnything preprocessor node:
![Screenshot from 2023-06-26
16-53-44](https://github.com/invoke-ai/InvokeAI/assets/303100/c6278f9a-5f6b-44bd-98b1-fcaf77251a76)
2023-06-28 17:56:24 +12:00
32883adf6e Merge branch 'main' into feat/controlnet_extras 2023-06-28 17:36:21 +12:00
00c78b1cbc feat(ui): use max prompts for combinatorial, iterations for non-combi… (#3600)
…natorial
2023-06-28 17:35:45 +12:00
1ea3160594 Merge branch 'main' into feat/ui/dynamic-prompts-ux 2023-06-28 17:34:36 +12:00
fc322aa9f7 Update controlnet-aux to 0.0.6 and add LeReS 2023-06-27 23:45:47 -04:00
e12dbef18f fix(nodes): use context for logger in param_easing (#3529) 2023-06-27 23:36:01 -04:00
73f63853ba fix(nodes): use context for logger in param_easing 2023-06-27 23:30:10 -04:00
e8ed0fad6c autoimport from embedding/controlnet/lora folders designated in startup file 2023-06-27 12:30:53 -04:00
1f3e5582f4 feat(ui): add type extraction helpers 2023-06-28 01:17:34 +10:00
642db657c2 feat(ui): use max prompts for combinatorial, iterations for non-combinatorial 2023-06-27 20:29:41 +10:00
246298d1d6 chore(ui): regen types 2023-06-27 13:57:41 +10:00
2e14528e4c feat(nodes): default to CPU noise 2023-06-27 13:57:31 +10:00
f15d28d141 improved wording of v2 selection prompt 2023-06-26 20:30:08 -04:00
862bfa2c36 Merge branch 'main' of github.com:invoke-ai/InvokeAI into feat/controlnet_extras 2023-06-26 16:39:31 -07:00
dc1f220b3e Fix wrong conditioning used 2023-06-27 01:18:15 +03:00
044fe6bb20 remove dangling debug statement 2023-06-26 17:48:06 -04:00
8c74f49a18 Merge branch 'lstein/installer-for-new-model-layout' of github.com:invoke-ai/InvokeAI into lstein/installer-for-new-model-layout 2023-06-26 16:31:00 -04:00
823e098b7c prompt user for prediction type when autoimporting a v2 model without .yaml file
don't ask the user for the prediction type if a config.yaml is provided
2023-06-26 16:30:34 -04:00
b7e9d09537 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-26 16:22:23 -04:00
3c30368c62 Configure and model install TUI tweaks (#3519)
The installer TUI requires a minimum window width and height to provide
a satisfactory user experience. If, after trying and exhausting all
means of enlarging the window (on Linux, Mac and Windows) the window is
still too small, this PR generates a message telling the user to enlarge
the window and pausing until they do so. If the user fails to enlarge
the window the program will proceed, and either issue an error message
that it can't continue (on Windows), or show a clipped display that the
user can remedy by enlarging the window.
2023-06-26 16:08:56 -04:00
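A sketch of the pause-until-enlarged behavior; the minimum dimensions, wording, and timeout below are illustrative, not the installer's actual values:

```python
import shutil
import time

MIN_COLS, MIN_LINES = 120, 40  # illustrative minimums

def wait_for_terminal_size(attempts: int = 30) -> bool:
    # Poll the terminal size and ask the user to enlarge the window;
    # give up after `attempts` tries so the caller can proceed anyway.
    for _ in range(attempts):
        size = shutil.get_terminal_size()
        if size.columns >= MIN_COLS and size.lines >= MIN_LINES:
            return True
        print(f"Please enlarge the window to at least {MIN_COLS}x{MIN_LINES} "
              f"(currently {size.columns}x{size.lines})")
        time.sleep(1)
    return False
```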
ea15d037f9 Merge branch 'main' into lstein/tweak-installer-ui 2023-06-26 15:05:16 -04:00
f67dec7f0c Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-26 15:03:22 -04:00
10d2d85c83 Started to add ControlNet resize_crop and resize_fill options, but commented out, not ready to deploy yet. 2023-06-26 12:03:05 -07:00
4208766e19 Merge branch 'main' into patch-1 2023-06-26 15:00:50 -04:00
bf1f2eb128 Bypass failing tests (#3593)
"Fixes" the test suite generally so it doesn't fail CI, but some tests
needed to be skipped/xfailed due to recent refactor.

- ignore three test suites that broke following the model manager
refactor
- move `InvocationServices` fixture to `conftest.py`
- add `boards` items to the `InvocationServices`  fixture

This PR makes the unit tests work, but end-to-end tests are temporarily
commented out due to `invokeai-configure` being broken in `main` -
pending #3547

Looks like a lot of the tests need to be rewritten as they reference
`TextToImageInvocation` / `ImageToImageInvocation`
2023-06-26 14:41:56 -04:00
16829682c8 Merge branch 'main' into ebr/make-tests-pass 2023-06-26 14:27:31 -04:00
011adfc958 merge with main 2023-06-26 13:53:59 -04:00
befd95eb19 rename root_dir to root_path attributes to emphasize return of a Path 2023-06-26 13:52:25 -04:00
a2ddb3823b fix add_model() logic 2023-06-26 13:33:38 -04:00
cc400c9fa5 (ci) temporarily comment out end-to-end tests 2023-06-26 13:08:43 -04:00
4eb7a5fc60 (ci) clean up pip tests 2023-06-26 13:08:43 -04:00
587203d589 (tests) make fixture reusable; support boards
fixes the test suite generally, but some tests needed to be
skipped/xfailed due to recent refactor

- ignore three test suites that broke following the model manager
  refactor
- move InvocationServices fixture to conftest.py
- add `boards` InvocationServices to the fixture
2023-06-26 13:08:34 -04:00
e3f136cdda Update 060_INSTALL_PATCHMATCH.md
Installing the 'blas' package is needed on Arch Linux; otherwise patchmatch fails to initialize with a "libblas.so.3 missing" error.
2023-06-26 14:23:10 +02:00
af566adf56 For MediapipeFace ControlNet preprocessor, if input image is RGBA format then convert to RGB (otherwise MediapipeFace image processing throws an error) 2023-06-26 04:29:43 -07:00
873c18bc4b Added TileResampler ControlNet preprocessor node.
Also fixes to SegmentAnything ControlNet preprocessor node.
2023-06-26 04:27:26 -07:00
d905d0e42a feat(ui): only show canvas image fallback on loading error (#3589) 2023-06-26 21:40:10 +12:00
6ccf62a863 feat(ui): only show canvas image fallback on loading error 2023-06-26 19:20:05 +10:00
6390af229d feat(ui): add dynamic prompts to t2i tab
- add param accordion for dynamic prompts
- update graphs
2023-06-26 19:15:54 +10:00
47e651225d query for 'main' model type when populating UI lists
to support renaming of 'pipeline' models to 'main'
2023-06-26 01:39:46 -04:00
9cfac4175f feat(ui): improved node parsing (#3584)
- use `swagger-parser` to dereference openapi schema
- tidy vite plugins
- use mantine select for node add menu
2023-06-26 17:38:23 +12:00
3a19be1606 fix: Add missing IAIMantineSelect disabled styles 2023-06-26 17:37:47 +12:00
b51ab056f2 Merge branch 'main' into feat/ui/update-node-parsing 2023-06-26 17:32:44 +12:00
e206fad22a fix(ui): fix controlnet image size (#3585) 2023-06-26 17:32:07 +12:00
7b97639961 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-26 01:24:30 -04:00
60780e990d fix(ui): fix controlnet image size 2023-06-26 12:03:11 +10:00
8d43cf92f6 feat(ui): update action santizer for schema actions 2023-06-26 12:00:38 +10:00
862bf7546c feat(ui): improved node parsing
- use `swagger-parser` to dereference openapi schema
- tidy vite plugins
- use mantine select for node add menu
2023-06-26 11:53:54 +10:00
91c3a58fb6 Fix lycoris layers init 2023-06-26 04:33:37 +03:00
5cebf67ee4 Apply lora by patching lora instead of hooks 2023-06-26 03:57:33 +03:00
1ba94a92b3 Fixes 2023-06-26 03:54:42 +03:00
23c22ac933 Refactor logic/small fixes 2023-06-26 03:07:54 +03:00
160b5d7992 add support for an autoimport models directory scanned at startup time 2023-06-25 18:50:15 -04:00
10e8389fa4 Commenting out LeReS ControlNet image preprocessor until release of controlnet_aux v0.0.6 (supported on controlnet_aux current main, but not on latest release v0.0.5) 2023-06-25 14:25:14 -07:00
45aa338a98 Changed pyproject.toml to require controlnet_aux >= 0.0.5 (to enable use of SAM ControlNet preprocessor) 2023-06-25 14:22:34 -07:00
414a04774c Added LeReS ControlNet image preprocessor. 2023-06-25 14:19:55 -07:00
c91d1eacba Merge branch 'lstein/installer-for-new-model-layout' of github.com:invoke-ai/InvokeAI into lstein/installer-for-new-model-layout 2023-06-25 16:04:48 -04:00
60b37b7ff4 fix model manager documentation 2023-06-25 16:04:43 -04:00
b872e7a5e0 Simplifying ControlNet SAM preprocessor segmentation color mapping. 2023-06-25 12:54:48 -07:00
de4064bdac Fixed problem with non-reproducible results from ControlNet SegmentAnything preprocessor. Cause was controlnet_aux randomization of segmentation coloring, which seems to lead to some randomization of resulting images using the ControlNet seg model. Switched to using the deterministic ADE20K color palette instead, which solved the problem. 2023-06-25 12:38:17 -07:00
10c3753d7f Added SAM preprocessor 2023-06-25 11:16:39 -07:00
a3c22b5fe6 Remove upcast_attention and prediction_type from stable diffusion model logic, fix ckpt conversion accordingly 2023-06-25 21:06:22 +03:00
922468b836 Add control_mode parameter to ControlNet (#3535)
This PR adds the "control_mode" option to ControlNet implementation. 
Possible control_mode options are: 

- balanced -- this is the default, same as previous implementation
without control_mode
- more_prompt -- pays more attention to the prompt
- more_control -- pays more attention to the ControlNet (in earlier
implementations this was called "guess_mode")
- unbalanced -- pays even more attention to the ControlNet 

balanced, more_prompt, and more_control should be nearly identical to
the equivalent options in the [auto1111 sd-webui-controlnet
extension](https://github.com/Mikubill/sd-webui-controlnet#more-control-modes-previously-called-guess-mode)

The changes to enable balanced, more_prompt, and more_control are
managed deeper in the code by two booleans, "soft_injection" and
"cfg_injection". The three control mode options in sd-webui-controlnet
map to these booleans like:
 
!soft_injection && !cfg_injection ⇒  BALANCED            
 soft_injection &&  cfg_injection ⇒  MORE_CONTROL 
 soft_injection && !cfg_injection ⇒  MORE_PROMPT   
 
The "unbalanced" option simply exposes the fourth possible combination
of these two booleans:
!soft_injection &&  cfg_injection ⇒ UNBALANCED

With "unbalanced" mode it is very easy to overdrive the controlnet
inputs. It's recommended to use a cfg_scale between 2 and 4 to mitigate
this, along with lowering controlnet weight and possibly lowering "end
step percent". With those caveats, "unbalanced" can yield interesting
results.

Example of all four modes using Canny edge detection ControlNet with
prompt "old man", identical params except for control_mode:

![Screenshot from 2023-06-11
23-53-00](https://github.com/invoke-ai/InvokeAI/assets/303100/c9e31e7f-50de-4d85-94f2-b5a4af3d067b)
Top middle: BALANCED
Top right: MORE_CONTROL
Bottom middle: MORE_PROMPT
Bottom right: UNBALANCED

I kind of chose this seed because it shows pretty rough results with
BALANCED (the default), but in my opinion better results with both
MORE_CONTROL and MORE_PROMPT. And you can definitely see how MORE_PROMPT
pays more attention to the prompt, and MORE_CONTROL pays more attention
to the control image. And shows that UNBALANCED with default cfg_scale
etc is unusable.

But here are four examples from same series (same seed etc), all have
control_mode = UNBALANCED but now cfg_scale is set to 3.
![Screenshot from 2023-06-11
23-48-44](https://github.com/invoke-ai/InvokeAI/assets/303100/5a495306-2164-40aa-9cc8-ce737d7671e7)
And param differences are:
Top middle: prompt="old man", control_weight=0.3, end_step_percent=0.5
Top right: prompt="old man", control_weight=0.4, end_step_percent=1.0
Bottom middle: prompt=None, control_weight=0.3, end_step_percent=0.5
Bottom right: prompt=None, control_weight=0.4, end_step_percent=1.0

So with the right settings UNBALANCED seems useful.
2023-06-25 16:09:26 +12:00
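The mode-to-flag mapping from the description above, transcribed as a lookup table (a direct transcription of the two booleans, not the exact InvokeAI code):

```python
# control_mode -> (soft_injection, cfg_injection)
CONTROL_MODES = {
    "balanced":     (False, False),
    "more_control": (True,  True),
    "more_prompt":  (True,  False),
    "unbalanced":   (False, True),
}

soft_injection, cfg_injection = CONTROL_MODES["more_prompt"]
```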
57e719702d fix(ui): add missing ControlNetInvocation type; tidy schema-derived types 2023-06-25 14:04:53 +10:00
11378a9236 chore(ui): regen api schema 2023-06-25 14:04:16 +10:00
132829c88f fix(ui): fix path of generated schema types 2023-06-25 14:04:00 +10:00
4d4b5b56dc Merge branch 'main' into feat/controlnet-control-modes 2023-06-25 15:48:07 +12:00
a9334128c9 chore(ui): bump all packages (#3579)
Everything seems to be working.

- Due to a change to `reactflow`, I regenerated `yarn.lock`
- New chakra CLI fixes issue I had made a patch for; removed the patch
- Change to fontsource changed how we import that font
- Change to fontawesome means we lost the txt2img tab icon, just chose a
similar one
2023-06-25 15:45:39 +12:00
6b276587d8 chore(ui): bump all packages
Everything seems to be working.

- Due to a change to `reactflow`, I regenerated `yarn.lock`
- New chakra CLI fixes issue I had made a patch for; removed the patch
- Change to fontsource changed how we import that font
- Change to fontawesome means we lost the txt2img tab icon, just chose a similar one
2023-06-25 13:44:10 +10:00
c5faffc18b Merge branch 'main' of github.com:invoke-ai/InvokeAI into feat/controlnet-control-modes
Only "real" conflicts were in:
     invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx
     invokeai/frontend/web/src/features/controlNet/store/controlNetSlice.ts
2023-06-24 17:05:57 -07:00
c3c4a71173 implemented Stalker's suggested improvements 2023-06-24 12:37:26 -04:00
d5f742620f Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-24 11:58:06 -04:00
ba1371a88f rename ModelType.Pipeline to ModelType.Main 2023-06-24 11:45:49 -04:00
3ae996ebcb fix(ui): fix metadata viewer too stronk 2023-06-24 18:15:49 +10:00
3d16605762 fix(ui): fix controlnet upload button 2023-06-24 18:15:49 +10:00
b6dec2b826 fix(ui): fix controlnet dnd overlay not showing on dragover 2023-06-24 18:15:49 +10:00
013e2aa2a1 fix(ui): fix control image sizes
they were all weird
2023-06-24 18:15:49 +10:00
8f9fa15fc8 fix(ui): fix image fetching query string 2023-06-24 18:15:49 +10:00
dde497404b fix(ui): fix init image display buttons
- Reset and Upload buttons along top of initial image
- Also had to mess around with the control net & DnD image stuff after changing the styles
- Abstract image upload logic into hook - does not handle native HTML drag and drop upload - only the button click upload
2023-06-24 18:15:49 +10:00
0472b33164 fix(ui): fix duplicate is_intermediate query param when fetching images 2023-06-24 17:57:39 +10:00
a6c615a98c fix(ui): fix canvas staging area
Missed some of the `imageUpdated` stuff
2023-06-24 17:57:39 +10:00
bab3a9504e fix(nodes): fix LatentsToImage not using is_intermediate when creating images
Appears this was removed during a merge conflict resolution.
2023-06-24 17:57:39 +10:00
13f25edb1e fix(ui): fix incorrect boards endpoint matchers being used
Should fix some stale-data issues with the auto-adding of images to selected boards, and deleting images from boards.
2023-06-24 17:57:39 +10:00
8bacee115a fix(ui): fix thunks not using configured api client 2023-06-24 17:57:39 +10:00
3619c86f07 fix(ui): fix deleting image does not refresh board
I had some some wonkiness in the thunks
2023-06-24 17:57:39 +10:00
8e724b5abe fix(ui): fix image upload
`openapi-fetch` does not handle non-JSON `body`s: it always stringifies them and sets the `content-type` to `application/json`.

The patch here does two things:
- Do not stringify `body` if it is one of the types that should not be stringified (https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#body)
- Do not add `content-type: application/json` unless it really is stringified JSON.

Upstream issue: https://github.com/drwpow/openapi-typescript/issues/1123

I'm a bit lost on fixing the types and adding tests, so I'm not raising a PR upstream.
2023-06-24 17:57:39 +10:00
e076231398 fix(ui): fix node editor image fields
I had broken this when converting to rtk-query
2023-06-24 17:57:39 +10:00
e386b5dc53 feat(ui): api layer refactor
*migrate from `openapi-typescript-codegen` to `openapi-typescript` and `openapi-fetch`*

`openapi-typescript-codegen` is not very actively maintained - it's been over a year since the last update.
`openapi-typescript` and `openapi-fetch` are part of the actively maintained repo. key differences:

- provides a `fetch` client instead of `axios`, which means we need to be a bit more verbose with typing thunks
- fetch client is created at runtime and has a very nice typescript DX
- generates a single file with all types in it, from which we then extract individual types. i don't like how verbose this is, but i do like how it is more explicit.
- removed npm api generation scripts - now we have a single `typegen` script

overall i have more confidence in this new library.

*use nanostores for api base and token*

very simple reactive store for api base url and token. this was suggested in the `openapi-fetch` docs and i quite like the strategy.

*organise rtk-query api*

split out each endpoint (models, images, boards, boardImages) into their own api extensions. tidy!
2023-06-24 17:57:39 +10:00
8137a99981 simplify 2023-06-24 17:57:39 +10:00
878847defd use BASE and TOKEN from OpenAPI if they are set 2023-06-24 17:57:39 +10:00
539d1f3bde remove redundant prediction_type and attention_upscaling flags 2023-06-23 16:54:52 -04:00
466ec3ab5e add router API support for model manager `heuristic_import()` 2023-06-23 16:35:39 -04:00
54b74427f4 adjust for change in list_models() API 2023-06-23 14:13:37 -04:00
58d1857ab6 merge with main 2023-06-23 13:57:25 -04:00
3043af4620 implement vae passthru 2023-06-23 13:56:30 -04:00
9de54b2266 Fix vae conversion (#3555)
Unsure at which moment it broke, but now I can't convert a VAE (or a
model that the VAE is part of) without this fix.
Needs further research - maybe it's a breaking change in `transformers`?
2023-06-23 15:55:26 +01:00
afd19ab61a merge 2023-06-23 10:53:48 -04:00
56bd873d7a make relative model paths work in model manager 2023-06-23 10:52:59 -04:00
5aaaaf64a1 Fix ckpt conversion 2023-06-23 17:29:54 +03:00
9140e2c0f2 Merge branch 'main' into fix/vae_conversion 2023-06-23 15:03:59 +03:00
65d0e80e96 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-23 02:18:34 +01:00
83e2b7578b fix(linux): installer script prints maximum python version usable (#3546)
Changes:
* Linux `install.sh` now prints the maximum python version to use in
case no installed python version matches

Commits:
fix(linux): installer script prints maximum python version usable
2023-06-23 02:16:01 +01:00
df1907e849 Merge branch 'main' into install-script-python-version-error-prompt-fix 2023-06-23 02:15:36 +01:00
a910403003 correctly migrate models that have relative paths 2023-06-22 21:10:31 -04:00
c7b7e087e4 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-23 01:45:05 +01:00
d65c833b90 migration now integrated into invokeai-configure 2023-06-22 16:44:55 -04:00
33b04f6386 migration script working well 2023-06-22 15:47:12 -04:00
0327eae509 chore: Regen API 2023-06-23 05:21:06 +12:00
bb85608890 Merge branch 'main' into feat/onnx 2023-06-23 05:18:41 +12:00
6c7668aaca Update onnx model structure, change code accordingly 2023-06-22 20:03:17 +03:00
22c337b1aa Update UI To Use New Model Manager (#3548)
PR for the Model Manager UI work related to 3.0

[DONE]

- Update ModelType Config names to be specific so that the front end can
parse them correctly.
- Rebuild frontend schema to reflect these changes.
- Update Linear UI Text To Image and Image to Image to work with the new
model loader.
- Updated the ModelInput component in the Node Editor to work with the
new changes.

[TODO REMEMBER]

- Add proper types for ModelLoaderType in `ModelSelect.tsx`

[TODO] 

- Everything else.
2023-06-22 22:06:26 +12:00
339e7ce213 feat(ui): initial implementation of model loading
- Update model listing code to use `rtk-query`
- Update all graph generation to use new `pipeline_model_loader` node
2023-06-22 17:48:57 +10:00
2a178f5a25 chore(ui): regen api client 2023-06-22 17:48:13 +10:00
1bc170727b tidy(nodes): rename sd_model_loader to pipeline_model_loader
this is more accurate because it can load e.g. Kandinsky as well
2023-06-22 17:47:58 +10:00
3722cdf5d6 chore(ui): regen api client 2023-06-22 17:36:20 +10:00
42a59aa147 feat(nodes): add sd_model_loader node
Loads any pipeline model.

Also introduced is `PipelineModelField`, which includes a model name and base model.
2023-06-22 17:36:05 +10:00
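A sketch of what `PipelineModelField` carries, per the commit message; the field and enum details are assumptions, with pydantic used as elsewhere in the project:

```python
from enum import Enum
from pydantic import BaseModel

class BaseModelType(str, Enum):
    # illustrative subset of base-model identifiers
    StableDiffusion1 = "sd-1"
    StableDiffusion2 = "sd-2"

class PipelineModelField(BaseModel):
    """Identifies any pipeline model by name plus its base model."""
    model_name: str
    base_model: BaseModelType
```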
b937b7da01 feat(models): update model manager service & route to return list of models 2023-06-22 17:34:12 +10:00
21245a0fb2 Set model type to const value in openapi schema, add model format enums to model schema (as they are not referenced in the case of a Literal definition) 2023-06-22 16:51:53 +10:00
da566b59e8 Update model format field to use enums 2023-06-22 16:51:53 +10:00
e4dc9c5a04 Rename format to model_format (still named format when working with config) 2023-06-22 16:51:53 +10:00
aceadacad4 Remove default model logic 2023-06-22 16:51:53 +10:00
d3dec59cc3 tweak: UI colors 2023-06-22 16:51:53 +10:00
6c98700740 fix: Adjust the Scheduler select width
So the long names do not get cut off.
2023-06-22 16:51:53 +10:00
c4c3c96062 Revert "feat: Port Schedulers to Mantine"
This reverts commit e0c105f413.
2023-06-22 16:51:35 +10:00
6256be480c fix: Remove type from Model type name 2023-06-22 16:48:35 +10:00
7033071934 fix: Unserialization key issue 2023-06-22 16:48:35 +10:00
e48528bbef revert: getModels to receivedModels 2023-06-22 16:48:35 +10:00
6bdf68dd4c feat: Port Schedulers to Mantine 2023-06-22 16:48:35 +10:00
0c3616229e cleanup: Updated model slice names to be more descriptive
Basically updated all slices to be more descriptive in their names. Did so in order to make sure there's a good naming scheme available for secondary models.
2023-06-22 16:43:14 +10:00
604cc1adcd wip: Move Model Selector to own file 2023-06-22 16:43:14 +10:00
4847212d5b feat: Enable 2.x Model Generation in Linear UI 2023-06-22 16:43:14 +10:00
727293d722 fix: 2.1 models breaking generation
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-06-22 16:42:59 +10:00
d2f3500e1b chore: Rebuild API - base_model and type added 2023-06-22 16:42:59 +10:00
ef83a2fffe Add name, base_mode, type fields to model info 2023-06-22 16:42:51 +10:00
f8d7477c7a wip: Add 2.x Models to the Model List 2023-06-22 16:42:51 +10:00
e374211313 chore: Rebuild API with new Model API names 2023-06-22 16:41:31 +10:00
01d17601b8 Generate config names for openapi 2023-06-22 16:41:19 +10:00
bf0d5f4cfc fix: Update missing name types to new names 2023-06-22 16:41:02 +10:00
663f4935f5 chore: Rebuild API 2023-06-22 16:41:02 +10:00
9838dda1b7 chore: Update model config type names 2023-06-22 16:40:40 +10:00
2d889e133d chore(ui): regen api client 2023-06-22 16:25:49 +10:00
6779f1a5ad fix(db): update models for boards w/ nullable deleted_at 2023-06-22 16:25:49 +10:00
19a6e5dad8 chore(ui): regen api client 2023-06-22 16:25:49 +10:00
285195bf72 feat(api): add get_board route 2023-06-22 16:25:49 +10:00
10008859a4 tidy(ui): remove all refs to boards thunks 2023-06-22 16:25:49 +10:00
3c04340f3f tidy(ui): tidy up update image board modal 2023-06-22 16:25:49 +10:00
79f0c4d3c4 feat(ui): add remove from board to image context menu 2023-06-22 16:25:49 +10:00
37d4e05838 fix(ui): fix board's image list not updating when image removed from board 2023-06-22 16:25:49 +10:00
a00ad6ac03 feat(ui): dropping image on All Images board removes it from board 2023-06-22 16:25:49 +10:00
2ffead000c tidy(ui): remove console.log() 2023-06-22 16:25:49 +10:00
922319cb84 fix(ui): fix first added board doesn't show until refresh
Had incorrect `invalidatesTags` array for the mutation.
2023-06-22 16:25:49 +10:00
6ee0e197bb feat(db): add deleted_at to board_images 2023-06-22 16:25:49 +10:00
d3e6f0130c fix(ui): fix issue with gallery not letting you load more images
To determine whether the Load More button should work, we need to keep track of how many images are left to load for a given board or category.

The Assets tab doesn't work, though. Need to figure out a better way to handle this.
2023-06-22 16:25:49 +10:00
421c23d3ea fix(ui): fix gallery image fetching for board categories 2023-06-22 16:25:49 +10:00
4545f3209f fix(ui): fix bug with image deletion not removing image from gallery 2023-06-22 16:25:49 +10:00
e2ee8102c2 tidy(db): tidy image_record_storage.py 2023-06-22 16:25:49 +10:00
083a0fc4cf tidy(ui): remove references to boardsAdapter 2023-06-22 16:25:49 +10:00
26b75b85f7 fix(ui): if deleting selected board, deselect it 2023-06-22 16:25:49 +10:00
f560a462a0 feat(ui): rudimentary categorized gallery image fetching 2023-06-22 16:25:49 +10:00
d501986610 chore(ui): regen api client 2023-06-22 16:25:49 +10:00
67a75f6895 feat(api, db): support board_id filter on images service get_many() 2023-06-22 16:25:49 +10:00
3c032c0767 feat(ui): only auto-add image to board if is not intermediate 2023-06-22 16:25:49 +10:00
abd6561140 feat(ui): just fetch all boards instead of paginating them 2023-06-22 16:25:49 +10:00
bd533426fc feat(ui): first pass at boards styling 2023-06-22 16:25:49 +10:00
2489d5459f chore(ui): regen api client 2023-06-22 16:25:49 +10:00
ac477cf5d6 fix(ui): improve image deletion handling 2023-06-22 16:25:49 +10:00
be3bdae847 fix: resolve rebase conflicts 2023-06-22 16:25:49 +10:00
3e0ee838cf fix(ui): add initial image dimensions to state
We need to access the initial image dimensions during the creation of the `ImageToImage` graph to determine if we need to resize the image.

Because the `initialImage` is now just an image name, we need to either store (easy) or dynamically retrieve its dimensions during graph creation (a bit less easy).

Took the easiest path. May need to revise this in the future.
2023-06-22 16:25:49 +10:00
8d3bec57d5 feat(ui): store only image name in parameters
Images that are used as parameters (e.g. init image, canvas images) are stored as full `ImageDTO` objects in state, separate from and duplicating any object representing those same objects in the `imagesSlice`.

We cannot store only image names as parameters, then pull the full `ImageDTO` from `imagesSlice`, because if an image is not on a loaded page, it doesn't exist in `imagesSlice`. For example, if you scroll down a few pages in the gallery and send that image to canvas, on reloading the app, the canvas will be unable to load that image.

We solved this temporarily by storing the full `ImageDTO` object wherever it was needed, but this is both inefficient and allows for stale `ImageDTO`s across the app.

One other possible solution was to just fetch the `ImageDTO` for all images at startup, and insert them into the `imagesSlice`, but then we run into an issue where we are displaying images in the gallery totally out of context.

For example, if an image from several pages into the gallery was sent to canvas, and the user refreshes, we'd display the first 20 images in gallery. Then to populate the canvas, we'd fetch that image we sent to canvas and add it to `imagesSlice`. Now we'd have 21 images in the gallery: 1 to 20 and whichever image we sent to canvas. Weird.

Using `rtk-query` solves this by allowing us to very easily fetch individual images in the components that need them, and not directly interact with `imagesSlice`.

This commit changes all references to images-as-parameters to store only the name of the image, and not the full `ImageDTO` object. Then, we use an `rtk-query` generated `useGetImageDTOQuery()` hook in each of those components to fetch the image.

We can use cache invalidation when we mutate any image to trigger automated re-running of the query and all the images are automatically kept up to date.

This also obviates the need for the convoluted URL fetching scheme for images that are used as parameters. The `imagesSlice` still need this handling unfortunately.
2023-06-22 16:25:49 +10:00
cfda128e06 feat(ui): wip boards via rtk-query 2023-06-22 16:25:49 +10:00
661a94b3de feat(db): add get_all() method for boards
This is needed to show the full list of boards in the update boards modal.
2023-06-22 16:25:49 +10:00
9ef64016c7 feat(db): sort board by created_at 2023-06-22 16:25:49 +10:00
21f0d0b0c1 fix(db): fix deserialize_board_record()
It was not adding `cover_image_name`
2023-06-22 16:25:49 +10:00
8bce234542 feat(db): update image-board relationships on add
Functionally, `add_image_to_board()` now moves images between boards.
2023-06-22 16:25:49 +10:00
daadf6ebfd feat(ui): add board image count badge 2023-06-22 16:25:49 +10:00
fe10a9f747 render cover image based on URL in image entities 2023-06-22 16:25:49 +10:00
7a2d3f628a add boardToAddTo state so that result can be added to board when generation is complete 2023-06-22 16:25:49 +10:00
4defb92105 handle long board names 2023-06-22 16:25:49 +10:00
f9f3c91a83 drag and drop to move image to board, a bit of board list UI 2023-06-22 16:25:49 +10:00
95b9c8e505 return cover_image_name since urls change, override one from db for now 2023-06-22 16:25:49 +10:00
49a02c157b feat(ui): fix UpdateImageBoardModal select 2023-06-22 16:25:49 +10:00
d604d986f9 feat(db, api): update get_board_for_image & service dependencies
- previously was `get_boards_for_image`, returning a list of `BoardDTO`, now returns a single `board_id`
2023-06-22 16:25:49 +10:00
70cc037a9c fix(ui): do not persist boards 2023-06-22 16:25:49 +10:00
e4893e4031 fix(db): return board records from CRUD methods 2023-06-22 16:25:49 +10:00
4a0a718b96 foiled by a comma 2023-06-22 16:25:49 +10:00
ca8f1a7828 (api) use most recently generated image for cover photo 2023-06-22 16:25:49 +10:00
2e41af2109 [half-baked] adding image to board modal 2023-06-22 16:25:49 +10:00
bd29e5e655 UI tweaks 2023-06-22 16:25:49 +10:00
dcfee2e1e4 add searching to boards list 2023-06-22 16:25:49 +10:00
8aac683319 can delete and rename boards 2023-06-22 16:25:49 +10:00
d306a84447 feat(ui): rough out boards UI 2023-06-22 16:25:49 +10:00
5865ecd530 feat(db): add FK for boards.cover_image_name 2023-06-22 16:25:49 +10:00
e1f9685b02 feat(db): add index for boards 2023-06-22 16:25:49 +10:00
498bf0d0ba feat(db): add indices for board_images 2023-06-22 16:25:49 +10:00
163ef2c941 feat(ui): remove refs to BoardRecord in UI
UI should only work w/ BoardDTO
2023-06-22 16:25:49 +10:00
48193b7fa7 chore(ui): regen api client 2023-06-22 16:25:49 +10:00
dd1b3c9f35 fix(api): update API models to use BoardDTOs 2023-06-22 16:25:49 +10:00
4b32322a58 feat(nodes): make board <> images a one-to-many relationship
we can extend this to many-to-many in the future if desired.
2023-06-22 16:25:49 +10:00
e06c43adc8 lint fix 2023-06-22 16:25:49 +10:00
c009f46b00 regenerate api schema 2023-06-22 16:25:49 +10:00
748016bdab routes working 2023-06-22 16:25:49 +10:00
72e9ced889 feat(nodes): add boards and board_images services 2023-06-22 16:25:49 +10:00
3833304f57 [WIP] board list endpoint w cover photos 2023-06-22 16:25:49 +10:00
4bfaae6617 fix type 2023-06-22 16:25:49 +10:00
499a174832 some more 2023-06-22 16:25:49 +10:00
6ca5ad9075 filter images by board_id 2023-06-22 16:25:49 +10:00
a121e6b3a0 add board_id association to image 2023-06-22 16:25:49 +10:00
207602f425 remove unused 2023-06-22 16:25:49 +10:00
a1671519d5 board CRUD 2023-06-22 16:25:49 +10:00
1c31efa57c punctuation fix in user message 2023-06-21 09:37:24 -04:00
b727442f84 better window size behavior under alacritty & terminator 2023-06-21 09:32:58 -04:00
7759b3f75a Small refactor 2023-06-21 04:24:25 +03:00
4d337f6abc ONNX Model/runtime first implementation 2023-06-21 02:12:21 +03:00
90df316835 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-20 22:50:41 +01:00
257e972599 fix failing pytest for config module 2023-06-20 13:26:01 -04:00
8639794c12 Merge branch 'main' into install-script-python-version-error-prompt-fix 2023-06-20 18:24:54 +01:00
2fc19d9afa suppress description in "other models" tab for space reasons 2023-06-20 11:45:37 -04:00
ac6403f877 address some of ebr issues 2023-06-20 11:08:27 -04:00
678bb4fe10 Merge branch 'lstein/installer-for-new-model-layout' of github.com:invoke-ai/InvokeAI into lstein/installer-for-new-model-layout 2023-06-20 09:42:21 -04:00
294b1e83e6 test and fix edge cases 2023-06-20 09:42:10 -04:00
92c86fd0b8 Set model type to const value in openapi schema, add model format enums to model schema (as they are not referenced in the case of a Literal definition) 2023-06-20 03:44:58 +03:00
46dc751139 Update model format field to use enums 2023-06-20 03:30:09 +03:00
4cefe37723 Rename format to model_format (still named format when working with config) 2023-06-20 03:25:08 +03:00
82b73c50a0 Remove default model logic 2023-06-20 03:13:10 +03:00
7df7a95299 Merge branch 'main' into model-manager-ui-30 2023-06-19 23:26:11 +12:00
d339c8627f feat: Upgrade to Diffusers 0.17.1 (#3545)
Just syncing up with diffusers upstream.
2023-06-19 23:25:22 +12:00
a53e0dce6c Merge branch 'upgrade-diffusers' of https://github.com/blessedcoolant/InvokeAI into upgrade-diffusers 2023-06-19 23:21:06 +12:00
0ae6325353 chore: Add torchsde as a dependency for the SDE schedulers 2023-06-19 23:20:53 +12:00
12299120ab Merge branch 'main' into upgrade-diffusers 2023-06-19 23:16:39 +12:00
85b4b359c2 tweak: UI colors 2023-06-19 23:16:14 +12:00
cfe81b5e00 fix: Adjust the Scheduler select width
So the long names do not get cut off.
2023-06-19 23:05:32 +12:00
b0c4451324 Merge branch 'main' into model-manager-ui-30 2023-06-19 23:02:59 +12:00
1a7fe172ca Fix inpaint node to new manager (#3550)
Inpaint node still used by canvas, so fixed it to new model manager api.
Other old generation code deleted.
2023-06-19 23:01:05 +12:00
4f5693040e Merge branch 'main' into fix/inpaint_new_manager 2023-06-19 22:55:00 +12:00
d4931522d4 Merge branch 'main' into model-manager-ui-30 2023-06-19 22:53:13 +12:00
bb2df88c06 Add dpmpp_sde and dpmpp_2m_sde schedulers(with karras) (#3554)
Added sde schedulers.
Problem: they add randomness on each step, so to get a consistent image we need
to provide a seed or generator. I have done this, but if you think it is better
done another way, feel free to change it.

Also made the ancestral schedulers reproducible; this is done the same way as
for the sde schedulers.
2023-06-19 22:52:33 +12:00
41442eb7f6 feat(ui): convert canvas txt2img & img2img to latents
- Add graph builders for canvas txt2img & img2img - they are mostly copy and paste from the linear graph builders but different in a few ways that are very tricky to work around. Just made totally new functions for them.
- Canvas txt2img and img2img support ControlNet (not inpaint/outpaint). There's no way to determine in real-time which mode the canvas is in just yet, so we cannot disable the ControlNet UI when the mode will be inpaint/outpaint - it will always display. It's possible to determine this in near-real-time, will add this at some point.
- Canvas inpaint/outpaint migrated to use model loader, though inpaint/outpaint are still using the non-latents nodes.
2023-06-19 15:57:28 +10:00
223a679ac1 chore(ui): regen api client 2023-06-19 15:57:28 +10:00
3c60616b4d feat(ui): simplify linear graph creation logic
Instead of manually creating every node and edge, we can simply copy/paste the base graph from node editor, then sub in parameters.

This is a much more intelligible process. We still need to handle seed, img2img fit and controlnet separately.
2023-06-19 15:57:28 +10:00
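A sketch of the copy-and-substitute approach described above; the node ids, field names, and state shape here are hypothetical:

```
// Base graph copied out of the node editor; only parameters get substituted.
const baseGraph = {
  nodes: {
    prompt: { id: 'prompt', type: 'compel', prompt: '' },
    noise: { id: 'noise', type: 'noise', seed: 0 },
    t2l: { id: 't2l', type: 't2l', steps: 30, cfg_scale: 7.5 },
  },
  edges: [
    { source: 'prompt', sourceField: 'conditioning', dest: 't2l', destField: 'positive_conditioning' },
    { source: 'noise', sourceField: 'noise', dest: 't2l', destField: 'noise' },
  ],
};

const buildLinearGraph = (state: { prompt: string; seed: number; steps: number }) => {
  // Deep-copy the template so the base graph is never mutated.
  const graph = structuredClone(baseGraph);
  graph.nodes.prompt.prompt = state.prompt;
  graph.nodes.noise.seed = state.seed; // seed still needs special handling (random vs fixed)
  graph.nodes.t2l.steps = state.steps;
  return graph;
};
```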
a01998d095 Remove more old logic 2023-06-19 15:57:28 +10:00
7b35162b9e Remove old logic except for inpaint, add support for lora and ti to inpaint node 2023-06-19 15:57:28 +10:00
c26e1a9271 Rewrite inpaint node to new model manager, remove TextToImage and ImageToImage nodes 2023-06-19 15:57:28 +10:00
9b32407744 Provide generator to all schedulers step function to make both ancestral and sde schedulers reproducible 2023-06-19 00:34:01 +03:00
82091b9a66 Fix vae conversion 2023-06-18 23:46:07 +03:00
f3d9797ebe Add dpmpp_sde and dpmpp_2m_sde schedulers(with karras) 2023-06-18 23:38:15 +03:00
f312e1448f Update index.md
fixed typo
2023-06-18 10:39:02 -04:00
17e2a35228 fix: merge conflicts 2023-06-18 22:25:48 +12:00
91016d8b29 Merge branch 'main' into model-manager-ui-30 2023-06-18 22:23:18 +12:00
9fda21cf40 Revert "feat: Port Schedulers to Mantine"
This reverts commit e0c105f413.
2023-06-18 22:22:56 +12:00
a11946f0ad feat: Port Schedulers to Mantine (#3552)
- Ports Schedulers to use IAIMantineSelect.
- Adds ability to favorite schedulers in Settings. Favorited schedulers
show up at the top of the list.
- Adds IAIMantineMultiSelect component.
- Change SettingsSchedulers component to use IAIMantineMultiSelect
instead of Chakra Menus.
2023-06-18 22:22:03 +12:00
80a8d3ef28 style: Theme placeholder style for IAIMantineMultiSelect 2023-06-18 22:17:09 +12:00
f4ca9d0e09 Merge branch 'scheduler-select' of https://github.com/blessedcoolant/InvokeAI into scheduler-select 2023-06-18 22:05:12 +12:00
a960fa009d fix: Fix some styling issues with IAIMantineMultiSelect 2023-06-18 22:04:12 +12:00
b96b95bc95 feat(ui): enabledSchedulers -> favoriteSchedulers 2023-06-18 20:01:05 +10:00
450641c414 fix(ui): enable all schedulers by default 2023-06-18 19:39:31 +10:00
94cfcdc411 feat(ui): improve scheduler selection logic
- remove UI-specific state (the enabled schedulers) from redux, instead derive it in a selector
- simplify logic by putting schedulers in an object instead of an array
- rename `activeSchedulers` to `enabledSchedulers`
- remove need for `useEffect()` when `enabledSchedulers` changes by adding a listener for the `enabledSchedulersChanged` action/event to `generationSlice`
- increase type safety by making `enabledSchedulers` an array of `SchedulerParam`, which is created by the zod schema for scheduler
2023-06-18 19:34:37 +10:00
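A rough reconstruction of the selector pattern described above; the slice shape and scheduler names are hypothetical:

```
import { createSelector } from '@reduxjs/toolkit';
import { z } from 'zod';

// Type safety comes from the zod schema for scheduler, as noted above.
export const zSchedulerParam = z.enum(['euler', 'euler_a', 'dpmpp_2m', 'lms']);
export type SchedulerParam = z.infer<typeof zSchedulerParam>;

type RootState = { ui: { favoriteSchedulers: SchedulerParam[] } };

// The enabled schedulers are *derived* in a selector rather than stored in redux.
export const selectEnabledSchedulers = createSelector(
  (state: RootState) => state.ui.favoriteSchedulers,
  (favorites): SchedulerParam[] =>
    favorites.length > 0 ? favorites : zSchedulerParam.options
);
```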
150059f704 fix(ui): create all scheduler constants up-front 2023-06-18 18:49:10 +10:00
f1a8b9daee fix(ui): clarify scheduler logic
- use full conditional syntax with `{}`
- do not mutate `action.payload` in a reducer
2023-06-18 18:47:59 +10:00
be8c0bb952 feat: Use Labels for Schedulers 2023-06-18 20:17:51 +12:00
dae5b9b259 fix: Minor styling fix to the IAIMantineMultiSelect component 2023-06-18 20:06:56 +12:00
06428fac67 fix: Revert scheduler back to zod validation 2023-06-18 20:02:36 +12:00
59b5dfc3e0 feat: Port Schedulers to Mantine 2023-06-18 19:47:27 +12:00
809ec7163e fix: Remove type from Model type name 2023-06-18 19:41:30 +12:00
7c9a939b47 fix: Unserialization key issue 2023-06-18 19:38:15 +12:00
9634c96020 revert: getModels to receivedModels 2023-06-18 19:35:46 +12:00
e0c105f413 feat: Port Schedulers to Mantine 2023-06-18 19:31:53 +12:00
f0bf32c476 Merge branch 'main' into model-manager-ui-30 2023-06-18 17:37:34 +12:00
fd981a90be Add lms and dpmpp2_s karras scheduler (#3551)
Karras sigmas support added to lms and dpmpp2_s schedulers in 0.17.0
diffusers.
2023-06-18 17:36:47 +12:00
28373dbb98 cleanup: Updated model slice names to be more descriptive
Basically updated all slices to be more descriptive in their names. Did so in order to make sure there's a good naming scheme available for secondary models.
2023-06-18 17:36:23 +12:00
e1d53b86f3 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-17 16:26:56 -07:00
ddb3f4b02b make configure script work properly on empty rootdir 2023-06-17 19:26:35 -04:00
4133d77772 wip: Move Model Selector to own file 2023-06-18 09:19:13 +12:00
61c426f502 feat: Enable 2.x Model Generation in Linear UI 2023-06-18 08:27:13 +12:00
bf0577c882 fix: 2.1 models breaking generation
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-06-18 08:26:25 +12:00
24673fd859 chore: Rebuild API - base_model and type added 2023-06-18 07:50:28 +12:00
dc669d1447 Add name, base_mode, type fields to model info 2023-06-17 22:48:44 +03:00
ce4110b9f4 wip: Add 2.x Models to the Model List 2023-06-18 07:01:44 +12:00
6b7cf3f3be Add lms and dpmpp2_s karras scheduler 2023-06-17 21:00:16 +03:00
0f3b7d2b3d chore: Rebuild API with new Model API names 2023-06-18 03:00:16 +12:00
16dc78f6c6 Generate config names for openapi 2023-06-17 17:15:36 +03:00
7a66856785 wip: Update Linear UI Txt2Img and Img2Img Graphs
Update the text to image and image to image graphs to work with the new model loader. Currently only supports 1.x models. Will update this soon to make it work with all models.
2023-06-18 01:38:01 +12:00
c8dfa49d86 fix: Update missing name types to new names 2023-06-17 22:04:28 +12:00
76dd749b1e chore: Rebuild API 2023-06-17 21:29:32 +12:00
67d05d2066 chore: Update model config type names 2023-06-17 21:28:43 +12:00
15f8132e17 add direct-call script for model installer 2023-06-16 22:57:53 -04:00
f28d50070e configure/install basically working; needs edge case testing 2023-06-16 22:54:36 -04:00
f6f66307fc WIP README.md Updates 2023-06-16 17:27:02 -04:00
469dae8c88 fix(linux): installer script prints maximum python version usable 2023-06-16 15:18:23 +02:00
9d4b84ef68 feat: Upgrade to Diffusers 0.17.1 2023-06-16 23:57:57 +12:00
ada7399753 rewrite of widget display - marshalling needs rewrite 2023-06-15 23:32:33 -04:00
4cbc802e36 Model manager fixes (#3541)
Fix lora import
Fix sd2 config - `variant` field not added
Fix list models api - `base_model` arg not provided, redundant assert
check
2023-06-16 06:43:00 +12:00
5f2d07917d Fix lora import, fix sd2 config, fix list models api 2023-06-15 21:30:15 +03:00
5c740452f6 Model Manager rewrite (#3335) 2023-06-14 08:44:04 -07:00
82c2498043 Merge branch 'main' into lstein/new-model-manager 2023-06-14 08:41:40 -07:00
4ca325e8e6 chore: Rebuild API 2023-06-15 03:20:49 +12:00
6b8e88ad7f Merge branch 'main' into feat/controlnet-control-modes 2023-06-15 03:18:41 +12:00
0497bea264 fix: add dynamicprompts to pyproject.toml 2023-06-15 01:05:16 +10:00
b8e32fa459 chore(ui): regen api client 2023-06-15 01:05:16 +10:00
34ebee67b7 fix(nodes): fix revert conflict 2023-06-15 01:05:16 +10:00
e0c998d192 Revert "feat(ui): add warning socket event handling"
This reverts commit e7a61e631a42190e4b64e0d5e22771c669c5b30c.
2023-06-15 01:05:16 +10:00
b51e9a6bdb Revert "feat(nodes): add warning socket event"
This reverts commit cefdd9d634e515239bd85666c872a0d64bb9d772.
2023-06-15 01:05:16 +10:00
09f396ce84 feat(ui): add warning socket event handling 2023-06-15 01:05:16 +10:00
abee37eab3 feat(nodes): add warning socket event 2023-06-15 01:05:16 +10:00
42e48b2bef feat(nodes): add dynamic prompt node 2023-06-15 01:05:16 +10:00
70ece4364c refactor(minor): Image & Latent File Storage (#3538)
- `DiskImageStorage` and `DiskLatentsStorage` have now both been updated
to exclusively work with `Path` objects and not rely on the `os` lib to
handle pathing related functions.
- We now also validate the existence of the required image output
folders and latent output folders to ensure that the app does not break
in case the required folders get tampered with mid-session.
- Just overall general cleanup.

Tested it. Don't seem to be any thing breaking.
2023-06-15 02:43:27 +12:00
f9d5f9d52c fix(nodes): minor fixes for folder validation
- fix type for `__output_folder`
- prefix `validate_storage_folders()` with `__` to indicate private method
2023-06-15 00:40:39 +10:00
d0ee3558d1 Merge branch 'main' into lstein/new-model-manager 2023-06-14 17:29:01 +03:00
587297878a refactor(minor): Latent Disk Storage 2023-06-15 02:21:49 +12:00
b4c998a9ae refactor(minor): Image File Storage 2023-06-15 01:58:58 +12:00
88e8e3977b feat(ui): update UI to not use image_origin
see commit `8ad8de8: feat(nodes): remove `image_origin` from most places` for details.
2023-06-14 23:08:27 +10:00
24b86cffe9 chore(ui): regen api client & types 2023-06-14 23:08:27 +10:00
a1773197e9 feat(nodes): remove image_origin from most places
- remove `image_origin` from most places where we interact with images
- consolidate image file storage into a single `images/` dir

Images have an `image_origin` attribute but it is not actually used when retrieving images, nor will it ever be. It is still used when creating images and helps to differentiate between internally generated images and uploads.

It was included in eg API routes and image service methods as a holdover from the previous app implementation where images were not managed in a database. Now that we have images in a db, we can do away with this and simplify basically everything that touches images.

The one potentially controversial change is to no longer separate internal and external images on disk. If we retain this separation, we have to keep `image_origin` around in a number of spots, and it makes getting image paths on disk painful.

So, I have gotten rid of this organisation. Images are now all stored in `images`, regardless of their origin. As we improve the image management features, this change will hopefully become transparent.
2023-06-14 23:08:27 +10:00
6c53abc034 feat: Add ControlMode to Linear UI 2023-06-14 20:01:17 +12:00
eb7047b21d chore: Rebuild WebAPI 2023-06-14 19:26:02 +12:00
43419ac761 Merge branch 'main' into feat/controlnet-control-modes 2023-06-14 19:04:42 +12:00
5cd0e90816 Renamed ControlNet control_mode option "even_more_control" to "unbalanced" 2023-06-13 22:30:17 -07:00
cfd49e3921 Removing vestigial comments. 2023-06-13 21:33:15 -07:00
a8e0490133 Merge branch 'feat/controlnet-control-modes' of https://github.com/invoke-ai/InvokeAI into feat/controlnet-control-modes 2023-06-13 21:21:13 -07:00
1e08d865c9 chore: dummy commit to trigger actions 2023-06-14 14:14:24 +10:00
f8bb650cc1 revert: IAIScrollArea 2023-06-14 14:14:24 +10:00
2cee8bebb2 fix(ui): revert offset scrollbars
The wonky padding is too janky. Just overlay for now.
2023-06-14 14:14:24 +10:00
ade4ec5fd8 fix(ui): fix crash when toggling pinned parameters panel 2023-06-14 14:14:24 +10:00
70ffd6b03f fix(ui): fix controlnet selects data types 2023-06-14 14:14:24 +10:00
6c551df311 fix(ui): fix rebase conflicts 2023-06-14 14:14:24 +10:00
24f605629e cleanup: Remove OverlayScrollable component 2023-06-14 14:14:24 +10:00
2af1ec9d02 fix: Minor padding issue in unpinned drawer 2023-06-14 14:14:24 +10:00
79d53341de fix: Stretch scroll area so it retains parent width 2023-06-14 14:14:24 +10:00
e40b3506c4 fix: Options squishing on accordion collapse 2023-06-14 14:14:24 +10:00
33912382e3 feat: Introduce Mantine's ScrollArea 2023-06-14 14:14:24 +10:00
d282810e53 cleanup: Remove IAICustomSelect and port types 2023-06-14 14:14:24 +10:00
9df502fc77 fix(ui): fix mantine select props 2023-06-14 14:14:24 +10:00
705573f0a8 feat(ui): even more pedantic mantine select theming 2023-06-14 14:14:24 +10:00
1878ea94f6 feat: Port Canvas Layer Select to IAIMantineSelect 2023-06-14 14:14:24 +10:00
4ba5086b9a feat(ui): add tooltip to IAIMantineSelect 2023-06-14 14:14:24 +10:00
4a991b4daa feat(ui): more pedantic mantine select theming 2023-06-14 14:14:24 +10:00
80474d26f9 feat(ui): mantine scrollbar theming 2023-06-14 14:14:24 +10:00
9a77bd9140 feat: Port IAISelect's to IAIMantineSelect's
Ported everything except Model Manager selects and the Canvas Layer Select (this needs tooltip support)
2023-06-14 14:14:24 +10:00
14cdc800c3 feat(ui): pedantic mantine select theming 2023-06-14 14:14:24 +10:00
9cfbea4c25 feat: Match styling of Mantine Select with InvokeAI 2023-06-14 14:14:24 +10:00
5fe674e223 feat: Standardize IAIMantineSelect Component 2023-06-14 14:14:24 +10:00
32200efce8 feat: Change default font to Inter 2023-06-14 14:14:24 +10:00
68a02da990 feat: Use Mantine Select for Scheduler 2023-06-14 14:14:24 +10:00
5b20766ea3 chore: Move Mantine Theme Override to own file 2023-06-14 14:14:24 +10:00
9a914250a0 feat: Change Model Select To Mantine 2023-06-14 14:14:24 +10:00
0e3106f631 feat: Add Mantine Support 2023-06-14 14:14:24 +10:00
de3e6cdb02 Switched over to ControlNet control_mode with 4 options: balanced, more_prompt, more_control, even_more_control. Based on True/False combinations of internal booleans cfg_injection and soft_injection 2023-06-13 21:08:34 -07:00
6c5954f9d1 Add controlnet to model manager, fixes 2023-06-14 04:26:21 +03:00
740c05a0bb Save models on rescan, uncache model on edit/delete, fixes 2023-06-14 03:12:12 +03:00
26090011c4 Fix conflict resolve, add model configs to type annotation 2023-06-14 00:26:37 +03:00
0ee0c16a3b Update CONTROLNET.md 2023-06-13 16:46:58 -04:00
c9ae26a176 Merge branch 'main' into lstein/new-model-manager 2023-06-13 23:37:52 +03:00
e7db6d8120 Fix ckpt and vae conversion, migrate script, remove sd2-base 2023-06-13 18:05:12 +03:00
8495764d45 Moving from ControlNet guess_mode to separate booleans for cfg_injection and soft_injection for testing control modes 2023-06-13 00:41:36 -07:00
8b7fac75ed First pass at ControlNet "guess mode" implementation. 2023-06-13 00:41:36 -07:00
9e0e26f4c4 Moving from ControlNet guess_mode to separate booleans for cfg_injection and soft_injection for testing control modes 2023-06-12 23:57:57 -07:00
a6af7e8824 use format "diffusers" rather than format "folder" in models.yaml 2023-06-13 01:43:05 -04:00
87ba17a1f5 add migration script and update convert and face restoration paths 2023-06-13 01:27:51 -04:00
c7ea46a5da use latest version of transformers to avoid deprecation warnings 2023-06-12 16:07:39 -04:00
1439dc7712 Add SchedulerPredictionType and ModelVariantType enums 2023-06-12 16:07:04 -04:00
46cac6468e Upgrade to Diffusers 0.17.0 (#3514)
Diffusers is due for an update soon. #3512

Opening up a PR now with the required changes for when the new version
is live.

I've tested it out on Windows and nothing has broken from what I could
tell. I'd like someone to run some tests on Linux / Mac just to make
sure. Refer to the PR above on how to test it or install the release
branch.

```
pip install diffusers[torch]==0.17.0
```

Feel free to push any other changes to this PR you see fit.
2023-06-13 07:11:02 +12:00
2a814d886b Merge branch 'main' into diffusers-upgrade 2023-06-13 05:29:15 +12:00
60a2fbec41 feat(ui): improve controlnet-related config types 2023-06-13 00:04:21 +10:00
f15a328b80 fix(ui): allow controlnet with preprocessed control image 2023-06-13 00:04:21 +10:00
811d9ab55a fix(ui): disable shouldAutoConfig switch while processing 2023-06-13 00:04:21 +10:00
e00fed5c46 feat(ui): support disabling controlnet models & processors 2023-06-13 00:04:21 +10:00
a3fa38b353 fix(ui): revert IAICustomSelect usage to IAISelect
There are some bugs with it that I cannot figure out related to `floating-ui` and `downshift`'s handling of refs.

Will need to revisit this component in the future.
2023-06-13 00:04:21 +10:00
2e42a4bdd9 feat(ui): disable controlnets during processing 2023-06-13 00:04:21 +10:00
36f72b5a49 fix(ui): check for valid controlnets before adding to graph 2023-06-13 00:04:21 +10:00
af42d7d347 feat(ui): support negative controlnet weights 2023-06-13 00:04:21 +10:00
8607b1994c fix(ui): fix crash when controlnet enabled but no controlnets added 2023-06-13 00:04:21 +10:00
36eb1bd893 Fixes 2023-06-12 16:14:09 +03:00
9fa78443de Fixes, add sd variant detection 2023-06-12 05:52:30 +03:00
893f776f1d model_probe working; model_install incomplete 2023-06-11 19:51:53 -04:00
e051c450ed fix: git stash (#3528) 2023-06-12 08:55:36 +12:00
50135b726e fix: git stash 2023-06-12 08:53:09 +12:00
085ab54124 remove modified models.py and migrate code to models/base.py 2023-06-11 16:10:15 -04:00
8e1a56875e remove defunct code 2023-06-11 12:57:06 -04:00
000626ab2e move all installation code out of model_manager 2023-06-11 12:51:50 -04:00
694fd0c92f Fixes, first runable version 2023-06-11 16:42:40 +03:00
fd715026a7 First pass at ControlNet "guess mode" implementation. 2023-06-11 02:00:39 -07:00
c647056287 Feat/easy param (#3504)
* Testing change to TextToLatents to allow setting different cfg_scale values per diffusion step.

* Adding first attempt at float param easing node, using Penner easing functions.

* Core implementation of ControlNet and MultiControlNet.

* Added support for ControlNet and MultiControlNet to legacy non-nodal Txt2Img in backend/generator. Although backend/generator will likely disappear by v3.x, right now they are very useful for testing core ControlNet and MultiControlNet functionality while node codebase is rapidly evolving.

* Added example of using ControlNet with legacy Txt2Img generator

* Resolving rebase conflict

* Added first controlnet preprocessor node for canny edge detection.

* Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node

* Switching to ControlField for output from controlnet nodes.

* Resolving conflicts in rebase to origin/main

* Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke())

* changes to base class for controlnet nodes

* Added HED, LineArt, and OpenPose ControlNet nodes

* Added an additional "raw_processed_image" output port to controlnets, mainly so an ImageField could be routed to a ShowImage node

* Added more preprocessor nodes for:
      MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.

* Prep for splitting pre-processor and controlnet nodes

* Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes.

* Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.

* More rebase repair.

* Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port  ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this...

* Fixed use of ControlNet control_weight parameter

* Fixed lint-ish formatting error

* Refactored controlnet node to output ControlField that bundles control info.

* Cleaning up TextToLatent arg testing

* Cleaning up mistakes after rebase.

* Removed last bits of dtype and device hardwiring from controlnet section

* Refactored ControlNet support to consolidate multiple parameters into data struct. Also redid how multiple controlnets are handled.

* Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each controlnet (specified as fraction of total steps)

* Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input.

* Added dependency on controlnet-aux v0.0.3

* Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it.

* Switched ControlNet node modelname input from free text to default list of popular ControlNet model names.

* Fix to work with current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle.

* Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier.

* Cleaning up after ControlNet refactor in TextToLatentsInvocation

* Extended node-based ControlNet support to LatentsToLatentsInvocation.

* chore(ui): regen api client

* fix(ui): add value to conditioning field

* fix(ui): add control field type

* fix(ui): fix node ui type hints

* fix(nodes): controlnet input accepts list or single controlnet

* Moved to controlnet_aux v0.0.4, reinstated Zoe controlnet preprocessor. Also in pyproject.toml  had to specify downgrade of timm to 0.6.13 _after_ controlnet-aux installs timm >= 0.9.2, because timm >0.6.13 breaks Zoe preprocessor.

* Added Mediapipe image processor for use as ControlNet preprocessor.
Also hacked in ability to specify HF subfolder when loading ControlNet models from string.

* Fixed bug where MediapipeFaceProcessorInvocation was ignoring max_faces and min_confidence params.

* Added nodes for float params: ParamFloatInvocation and FloatCollectionOutput. Also added FloatOutput.

* Added mediapipe install requirement. Should be able to remove once controlnet_aux package adds mediapipe to its requirements.

* Added float to FIELD_TYPE_MAP in constants.ts

* Progress toward improvement in fieldTemplateBuilder.ts  getFieldType()

* Fixed controlnet preprocessors and controlnet handling in TextToLatents to work with revised Image services.

* Cleaning up from merge, re-adding cfg_scale to FIELD_TYPE_MAP

* Making sure cfg_scale of type list[float] can be used in image metadata, to support param easing for cfg_scale

* Fixed math for per-step param easing.

* Added option to show plot of param value at each step

* Just cleaning up after adding param easing plot option, removing vestigial code.

* Modified control_weight ControlNet param to be polymorphic --
can now be either a single float weight applied for all steps, or a list of floats of size total_steps that specifies the weight for each step (see the type sketch after this PR entry).

* Added a more informative error message when _validate_edge() throws an error.

* Just improving the param easing bar chart title to include the easing type.

* Added requirement for easing-functions package

* Taking out some diagnostic prints.

* Added option to use both easing function and mirror of easing function together.

* Fixed recently introduced problem (when main was pulled in), triggered by num_steps in StepParamEasingInvocation not having a default value -- just added a default.

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-06-11 16:27:44 +10:00
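The polymorphic `control_weight` mentioned in the list above reduces to a small union type; a sketch with hypothetical helper names:

```
// control_weight: one weight applied to every step, or one weight per step.
type ControlWeight = number | number[];

const resolveWeightForStep = (weight: ControlWeight, step: number, totalSteps: number): number => {
  if (typeof weight === 'number') return weight; // single weight for all steps
  if (weight.length !== totalSteps) {
    throw new Error('per-step weight list must have length total_steps');
  }
  return weight[step];
};
```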
738ba40f51 Fixes 2023-06-11 06:12:21 +03:00
3ce3a7ee72 Rewrite model configs, separate models 2023-06-11 04:49:09 +03:00
74b43c9bdf fix incorrect variable/typenames in model_cache 2023-06-10 10:41:48 -04:00
3d2ff7755e resolve conflicts 2023-06-10 10:13:54 -04:00
a87d52a389 resolve conflicts between lstein & sttalker changes 2023-06-10 09:59:19 -04:00
959e64c9b3 start removing repo_id support 2023-06-10 09:57:23 -04:00
2c056ead42 New models structure draft 2023-06-10 03:14:10 +03:00
30f20b55d5 fix logger behavior so that it is initialized after command line parsed (#3509)
In some cases the command-line was getting parsed before the logger was
initialized, causing the logger not to pick up custom logging
instructions from `--log_handlers`. This PR fixes the issue.
2023-06-09 08:24:47 -07:00
1bca32ed16 Merge branch 'main' into lstein/fix-logger-reconfiguration 2023-06-09 06:27:26 -07:00
7f91139e21 fix(ui): fix crash when using dropdown on certain device resolutions 2023-06-09 22:19:30 +10:00
c53b7c7389 ui: misc fixes (#3525)
[fix(ui): blur tab on click](93f3658a4a)

Fixes issue where after clicking a tab, using the arrow keys changes tab
instead of changing selected image

[fix(ui): fix canvas not filling screen on first load](68be95acbb)

[feat(ui): remove clear temp folder canvas button](813f79f0f9)

This button is nonfunctional.

Soon we will introduce a different way to handle clearing out
intermediate images (likely automated).
2023-06-09 23:44:21 +12:00
93f3658a4a fix(ui): blur tab on click
Fixes issue where after clicking a tab, using the arrow keys changes tab instead of changing selected image
2023-06-09 18:20:52 +10:00
68be95acbb fix(ui): fix canvas not filling screen on first load 2023-06-09 17:55:11 +10:00
813f79f0f9 feat(ui): remove clear temp folder canvas button
This button is nonfunctional.

Soon we will introduce a different way to handle clearing out intermediate images (likely automated).
2023-06-09 17:33:17 +10:00
c3ec86bc70 feat(ui): enhance IAICustomSelect (#3523)
Now accepts an array of strings or array of `IAICustomSelectOption`s.
This supports custom labels and tooltips within the select component.
2023-06-09 18:26:20 +12:00
05a19753c6 feat(ui): remove controlnet model descriptions
These are not yet exposed on the UI - somebody who understands what they do better can add them when we have a place to expose them
2023-06-09 16:20:30 +10:00
a33327c651 feat(ui): enhance IAICustomSelect
Now accepts an array of strings or array of `IAICustomSelectOption`s. This supports custom labels and tooltips within the select component.
2023-06-09 16:00:17 +10:00
6ad7cc4f2a feat(ui): decrease delay on dnd to 150ms (#3522) 2023-06-09 17:54:24 +12:00
c506355b8b feat(ui): decrease delay on dnd to 150ms 2023-06-09 15:53:17 +10:00
d54168b8fb feat(nodes): add tests for depth-first execution 2023-06-09 14:53:45 +10:00
c91b071c47 fix(nodes): use DFS with preorder traversal 2023-06-09 14:53:45 +10:00
9c57b18008 fix(nodes): update Invoker.invoke() docstring 2023-06-09 14:53:45 +10:00
69539a0472 feat(nodes): depth-first execution
There was an issue where for graphs w/ iterations, your images were output all at once, at the very end of processing. So if you canceled halfway through an execution of 10 nodes, you wouldn't get any images - even though you'd completed 5 images' worth of inference.

## Cause

Because graphs executed breadth-first (i.e. depth-by-depth), leaf nodes were necessarily processed last. For image generation graphs, your `LatentsToImage` nodes will be the leaf nodes, and will be the last depth to be executed.

For example, a `TextToLatents` graph w/ 3 iterations would execute all 3 `TextToLatents` nodes fully before moving to the next depth, where the `LatentsToImage` nodes produce output images, resulting in a node execution order like this:

1. TextToLatents
2. TextToLatents
3. TextToLatents
4. LatentsToImage
5. LatentsToImage
6. LatentsToImage

## Solution

This PR makes two changes to graph execution so that it executes as deeply as it can along each branch of the graph.

### Eager node preparation

We now prepare as many nodes as possible, instead of just a single node at a time.

We also need to change the conditions in which nodes are prepared. Previously, nodes were prepared only when all of their direct ancestors were executed.

The updated logic prepares nodes that:
- are *not* `Iterate` nodes whose inputs have *not* been executed
- do *not* have any unexecuted `Iterate` ancestor nodes

This results in graphs always being maximally prepared.

### Always execute the deepest prepared node

We now choose the next node to execute by traversing from the bottom of the graph instead of the top, choosing the first node whose inputs are all executed.

This means we always execute the deepest node possible.

## Result

Graphs now execute depth-first, so instead of an execution order like this:

1. TextToLatents
2. TextToLatents
3. TextToLatents
4. LatentsToImage
5. LatentsToImage
6. LatentsToImage

... we get an execution order like this:

1. TextToLatents
2. LatentsToImage
3. TextToLatents
4. LatentsToImage
5. TextToLatents
6. LatentsToImage

Immediately after inference, the image is decoded and sent to the gallery.

fixes #3400
2023-06-09 14:53:45 +10:00
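The "always execute the deepest prepared node" rule can be sketched as follows. The actual service is Python; this is a schematic TypeScript sketch with hypothetical names:

```
type NodeId = string;
type Edge = { from: NodeId; to: NodeId };

// Walk the prepared nodes from the bottom of the graph upward and take the
// first unexecuted node whose inputs have all been executed.
const nextNodeToExecute = (
  topoOrder: NodeId[], // prepared nodes in topological order
  edges: Edge[],
  executed: Set<NodeId>
): NodeId | undefined => {
  for (let i = topoOrder.length - 1; i >= 0; i--) {
    const candidate = topoOrder[i];
    if (executed.has(candidate)) continue;
    const inputs = edges.filter((e) => e.to === candidate).map((e) => e.from);
    if (inputs.every((id) => executed.has(id))) return candidate;
  }
  return undefined; // nothing ready: prepare more nodes or finish
};
```

With eager preparation, a `LatentsToImage` node becomes ready as soon as its `TextToLatents` ancestor finishes, so it is picked before the next iteration's `TextToLatents`, which is what produces the interleaved order shown above.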
7bce455d16 Merge branch 'main' into diffusers-upgrade 2023-06-09 16:27:52 +12:00
3f45294c61 feat(ui): restore reset button for init image (#3521) 2023-06-09 16:02:26 +12:00
fd03c7eebe feat(ui): restore reset button for init image 2023-06-09 14:00:23 +10:00
07c49a5726 feat(ui): skip resize on img2img if not needed (#3520) 2023-06-09 15:56:22 +12:00
8c688f8e29 feat(ui): skip resize on img2img if not needed 2023-06-09 13:54:23 +10:00
887576d217 add directory scanning for loras, controlnets and textual_inversions 2023-06-08 23:11:53 -04:00
6652f3405b merge with main 2023-06-08 21:08:43 -04:00
3d13167d32 Merge branch 'main' into lstein/fix-logger-reconfiguration 2023-06-08 13:41:24 -07:00
27b5e43ea4 add messages to the user to tell them to enlarge window 2023-06-08 16:37:10 -04:00
f2bb507ebb allow logger to be reconfigured after startup 2023-06-08 09:23:11 -04:00
fe8f3381fc create databases directory on startup (#3518)
This PR creates the databases directory at app startup time. It also
removes a couple of debugging statements that were inadvertently left in
the model manager.
2023-06-08 23:40:32 +12:00
2a6d11e645 create databases directory on startup 2023-06-08 07:17:54 -04:00
01f46d3c7d Merge branch 'main' into lstein/fix-logger-reconfiguration 2023-06-07 19:50:44 -07:00
5f76b62553 Update installer support for main (#3448)
#  Make InvokeAI package installable by mere mortals
    
This commit makes InvokeAI 3.0 installable via PyPI.org and/or the
installer script. The install process is now pretty much identical to
the 2.3 process, including creating launcher scripts `invoke.sh` and
`invoke.bat`.
    
Main changes:
    
1. Moved static web pages into `invokeai/frontend/web` and modified the
API to look for them there. This allows pip to copy the files into the
distribution directory so that the user no longer has to be in the repo
root to launch, and enables PyPI installations with `pip install invokeai`
    
2. Update invoke.sh and invoke.bat to launch the new web application
properly. This also changes the wording for launching the CLI from
"generate images" to "explore the InvokeAI node system," since I would
not recommend using the CLI to generate images routinely.
    
3. Fix a bug in the checkpoint converter script that was identified
during testing.
    
4. Better error reporting when checkpoint converter fails.
    
5. Rebuild front end.

# Major improvements to the model installer.

1. The text user interface for `invokeai-model-install` has been
expanded to allow the user to install controlnet, LoRA, textual
inversion, diffusers and checkpoint models. The user can install
interactively (without leaving the TUI), or in batch mode after exiting
the application.
 

![image](https://github.com/invoke-ai/InvokeAI/assets/111189/f8f7ac23-3e18-4973-b7fe-729864c703a0)

2. The `invokeai-model-install` command now lets you list, add and
delete models from the command line:

## Listing models
```
$ invokeai-model-install --list diffusers
Diffuser models:
analog-diffusion-1.0      not loaded  diffusers  An SD-1.5 model trained on diverse analog photographs (2.13 GB)
d&d-diffusion-1.0         not loaded  diffusers  Dungeons & Dragons characters (2.13 GB)
deliberate-1.0            not loaded  diffusers  Versatile model that produces detailed images up to 768px (4.27 GB)
DreamShaper               not loaded  diffusers  Imported diffusers model DreamShaper
sd-inpainting-1.5         not loaded  diffusers  RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)
sd-inpainting-2.0         not loaded  diffusers  Stable Diffusion version 2.0 inpainting model (5.21 GB)
stable-diffusion-1.5      not loaded  diffusers  Stable Diffusion version 1.5 diffusers model (4.27 GB)
stable-diffusion-2.1      not loaded  diffusers  Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)
```

```
$ invokeai-model-install --list tis
Loading Python libraries...

Installed Textual Inversion Embeddings:
   EasyNegative
   ahx-beta-453407d
```

## Installing models

(this example shows correct handling of a server side error at Civitai)
```
$ invokeai-model-install --diffusers https://civitai.com/api/download/models/46259 Linaqruf/anything-v3.0
Loading Python libraries...

[2023-06-05 22:17:23,556]::[InvokeAI]::INFO --> INSTALLING EXTERNAL MODELS
[2023-06-05 22:17:23,557]::[InvokeAI]::INFO --> Probing https://civitai.com/api/download/models/46259 for import
[2023-06-05 22:17:23,557]::[InvokeAI]::INFO --> https://civitai.com/api/download/models/46259 appears to be a URL
[2023-06-05 22:17:23,763]::[InvokeAI]::ERROR --> An error occurred during downloading /home/lstein/invokeai-test/models/ldm/stable-diffusion-v1/46259: Internal Server Error
[2023-06-05 22:17:23,763]::[InvokeAI]::ERROR --> ERROR DOWNLOADING https://civitai.com/api/download/models/46259: {"error":"Invalid database operation","cause":{"clientVersion":"4.12.0"}}
[2023-06-05 22:17:23,764]::[InvokeAI]::INFO --> Probing Linaqruf/anything-v3.0 for import
[2023-06-05 22:17:23,764]::[InvokeAI]::DEBUG --> Linaqruf/anything-v3.0 appears to be a HuggingFace diffusers repo_id
[2023-06-05 22:17:23,768]::[InvokeAI]::INFO --> Loading diffusers model from Linaqruf/anything-v3.0
[2023-06-05 22:17:23,769]::[InvokeAI]::DEBUG --> Using faster float16 precision
[2023-06-05 22:17:23,883]::[InvokeAI]::ERROR --> An unexpected error occurred while downloading the model: 404 Client Error. (Request ID: Root=1-647e9733-1b0ee3af67d6ac3456b1ebfc)

Revision Not Found for url: https://huggingface.co/Linaqruf/anything-v3.0/resolve/fp16/model_index.json.
Invalid rev id: fp16)
Downloading (…)ain/model_index.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 511/511 [00:00<00:00, 2.57MB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 472/472 [00:00<00:00, 6.13MB/s]
Downloading (…)cheduler_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 341/341 [00:00<00:00, 3.30MB/s]
Downloading (…)okenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 807/807 [00:00<00:00, 11.3MB/s]
```

## Deleting models

```
 invokeai-model-install --delete --diffusers anything-v3
Loading Python libraries...

[2023-06-05 22:19:45,927]::[InvokeAI]::INFO --> Processing requested deletions
[2023-06-05 22:19:45,927]::[InvokeAI]::INFO --> anything-v3...
[2023-06-05 22:19:45,927]::[InvokeAI]::INFO --> Deleting the cached model directory for Linaqruf/anything-v3.0
[2023-06-05 22:19:45,948]::[InvokeAI]::WARNING --> Deletion of this model is expected to free 4.3G

```
2023-06-07 19:25:07 -07:00
4bbe3b0d00 Merge branch 'main' into release/make-web-dist-startable 2023-06-07 19:21:01 -07:00
9ed86a08f1 multiple small fixes
1. Contents of autoscan directory field are restored after doing an installation.
2. Activate dialogue to choose V2 parameterization when importing from a directory.
3. Remove autoscan directory from init file when its checkbox is unselected.
4. Add widget cycling behavior to install models form.
2023-06-07 17:32:00 -04:00
68405910ba Upgrade to Diffusers 0.17.0 2023-06-08 04:42:52 +12:00
0a50e2638c fix(ui): default controlnet autoprocess to true (#3513)
I had accidentally defaulted it to false
2023-06-08 01:56:53 +12:00
fc7c5da4dd fix(ui): default controlnet autoprocess to true
I had accidentally defaulted it to false
2023-06-07 23:55:24 +10:00
a3357e073c refactor exception handling 2023-06-07 07:35:34 -04:00
d114833a12 pause after printing exception 2023-06-07 07:26:14 -04:00
96038bd075 print exception on TUI crash 2023-06-07 07:23:14 -04:00
2f383c2598 docs(nodes): update INVOCATIONS.md (#3511) 2023-06-07 20:47:57 +12:00
702a8d1f72 docs(nodes): update INVOCATIONS.md 2023-06-07 18:44:43 +10:00
0a8390356f feat(ui): enhance autoprocessing
The processor is automatically selected when model is changed.

But if the user manually changes the processor, processor settings, or disables the new `Auto configure processor` switch, auto processing is disabled.

The user can enable auto configure by turning the switch back on.

When auto configure is disabled, a small dot is overlaid on the expand button to remind the user that the system is not auto configuring the processor for them.

If auto configure is enabled, the processor settings are reset to the default for the selected model.
2023-06-07 18:25:30 +10:00
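The auto-configure behaviour described above amounts to a few small state transitions; a sketch with hypothetical names:

```
// Hypothetical lookup of a model's default processor.
declare function defaultProcessorFor(model: string): string;

type CnetState = { model: string; processor: string; shouldAutoConfig: boolean };

// Any manual processor change disables auto-configure.
const onProcessorChangedByUser = (s: CnetState, processor: string): CnetState =>
  ({ ...s, processor, shouldAutoConfig: false });

// While auto-configure is on, changing the model also swaps in its default processor.
const onModelChanged = (s: CnetState, model: string): CnetState =>
  s.shouldAutoConfig ? { ...s, model, processor: defaultProcessorFor(model) } : { ...s, model };

// Re-enabling auto-configure resets the processor to the model's default.
const onAutoConfigEnabled = (s: CnetState): CnetState =>
  ({ ...s, shouldAutoConfig: true, processor: defaultProcessorFor(s.model) });
```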
844058c0a5 feat(ui): make prompt not required
- also change the placeholder text
2023-06-07 18:25:30 +10:00
7d74cbe29c fix(ui): make progress image not draggable 2023-06-07 18:25:30 +10:00
62ac0ed2dc feat(ui): tweak cnet model change
If there is no control image, and the model does not have a default processor, set the processor to `none`.
2023-06-07 18:25:30 +10:00
ae14adec2a feat(ui): add reset button for control image 2023-06-07 18:25:30 +10:00
6c2b39d1df feat(ui): improve controlnet image style
css is terrible
2023-06-07 18:25:30 +10:00
0843028e6e fix(ui): improve dragging activation
- delay of 250ms
- prevent gallery images from accidentally activating native drag and drop
2023-06-07 18:25:30 +10:00
de0fd87035 fix(ui): when a session errors, reset controlnet processing spinner 2023-06-07 18:25:30 +10:00
8b6c0be259 feat(ui): fix IAIDndImage button styles when upload disabled 2023-06-07 18:25:30 +10:00
58fec84858 feat(ui): add upload to IAIDndImage
Add uploading to IAIDndImage
- add `postUploadAction` arg to `imageUploaded` thunk, with several current valid options (set control image, set init, set nodes image, set canvas, or toast)
- updated IAIDndImage to optionally allow click to upload
2023-06-07 18:25:30 +10:00
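The `postUploadAction` arg described above is naturally a tagged union; a sketch, with variant and field names assumed for illustration:

```
// One variant per currently valid option: set control image, set init,
// set nodes image, set canvas, or toast.
type PostUploadAction =
  | { type: 'SET_CONTROLNET_IMAGE'; controlNetId: string }
  | { type: 'SET_INITIAL_IMAGE' }
  | { type: 'SET_NODES_IMAGE'; nodeId: string; fieldName: string }
  | { type: 'SET_CANVAS_INITIAL_IMAGE' }
  | { type: 'TOAST'; title?: string };

// The upload thunk carries the action along; a fulfilled listener dispatches
// the matching follow-up, e.g.:
// dispatch(imageUploaded({ file, postUploadAction: { type: 'SET_INITIAL_IMAGE' } }));
```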
f223ad7776 fix(ui): only show loading indicator on processing control images 2023-06-07 18:25:30 +10:00
00eabf630d fix(ui): fix control image not used if processor type is none 2023-06-07 18:25:30 +10:00
6245a27650 feat(ui): auto-select controlnet processor
- when the controlnet model is changed, if there is a default processor for the model set, the processor is changed.
- once a control image is selected (and processed), changing the model does not change the processor - must be manually changed
2023-06-07 18:25:30 +10:00
fa1ac57c90 Graph overlay was expanding off the screen to the size of the prompt line (#3510)
Sure, this isn't really important at the moment.

Just limited the width and gave it a shadow.

![image](https://github.com/invoke-ai/InvokeAI/assets/115216705/96e2db0a-9edb-48b8-9040-56ce054b5ecf)
2023-06-07 18:01:35 +12:00
0f16b1c98d Remove Shadow 2023-06-07 15:51:37 +10:00
08e66c5451 Update NodeGraphOverlay.tsx
Graph overlay was expanding off the screen to the size of the prompt
2023-06-07 14:49:03 +10:00
563bf70c95 fix CI failure in configure non-interactive mode; merged with main 2023-06-06 23:24:40 -04:00
49d29420c4 Merge branch 'main' into release/make-web-dist-startable 2023-06-06 23:24:16 -04:00
ae9d0c6c1b fix logger behavior so that it is initialized after command line parsed 2023-06-06 23:19:10 -04:00
04f9757f8d prevent crash when trying to calculate size of missing safety_checker
- Also fixed up order in which logger is created in invokeai-web
  so that handlers are installed after command-line options are
  parsed (and not before!)
2023-06-06 22:57:49 -04:00
1f9e1eb964 merge with main 2023-06-06 22:18:41 -04:00
d8d11f9bbb quench fp16 rev id not found warning 2023-06-06 22:01:05 -04:00
13fa0d3bc0 make log message textbox deeper 2023-06-06 17:23:13 -04:00
5eeb4b8e06 allow user to abort conversion of V2 models from within TUI 2023-06-06 17:21:50 -04:00
f5044c290d fix crash during model conversion 2023-06-06 17:05:29 -04:00
1b43276e5d make widget selection wrap around 2023-06-06 13:53:11 -07:00
294f086857 configure/install working correctly on windows11 2023-06-06 12:51:34 -07:00
e5024bf5e9 fix conhost launch-with args 2023-06-06 15:17:15 -04:00
79198b4bba feat(ui): fix bugs with image deletion (#3506)
- `imageUsage` object was always stale due to react component lifecycle,
fixed this
- cleaned up the deletion listener and context
2023-06-07 05:33:05 +12:00
1a2f0984db Merge branch 'main' into feat/ui/fix-stale-imageUsage 2023-06-07 04:35:16 +12:00
454683e6eb feat(ui): update image urls on connect (#3507)
* feat(ui): update image urls on connect

Add `updateImageUrlsOnConnect` RTK listener:
- requests URLs for *every* image the app knows about, on connect: gallery, selectedImage, initialImage, canvas images, nodes images, controlnet images
- only fires when `shouldUpdateImagesOnConnect` config is enabled

* remove prop

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-06-06 10:23:51 -04:00
bbb2a08e8f feat(ui): fix bugs with image deletion
- `imageUsage` object was always stale due to react component lifecycle, fixed this
- cleaned up the deletion listener and context
2023-06-06 20:01:27 +10:00
bf116927e1 feat(ui): clear features if image used by them is deleted
This handles the case when an image is deleted but is still in use as, e.g., an init image on canvas, or a control image. If we just delete the image, canvas/controlnet/etc may break (the image would just fail to load).

When an image is deleted, the app checks to see if it is in use in:
- Image to Image
- ControlNet
- Unified Canvas
- Node Editor

The delete dialog will always open if the image is in use anywhere, and the user is advised that deleting the image will reset the feature(s).

Even if the user has ticked the box to not confirm on delete, the dialog will still show if the image is in use somewhere.
2023-06-06 14:35:07 +10:00
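The in-use check described above can be sketched like this; the state fragments and field names are hypothetical:

```
// Hypothetical state fragments; the real slices are richer.
type GenerationState = { initialImage?: { imageName: string } };
type ControlNetState = {
  controlNets: Record<string, { controlImage?: string; processedControlImage?: string }>;
};
type CanvasState = { layerState: { objects: Array<{ kind: string; imageName?: string }> } };

type ImageUsage = { isInitialImage: boolean; isControlNetImage: boolean; isCanvasImage: boolean };

const getImageUsage = (
  generation: GenerationState,
  controlNet: ControlNetState,
  canvas: CanvasState,
  imageName: string
): ImageUsage => ({
  isInitialImage: generation.initialImage?.imageName === imageName,
  isControlNetImage: Object.values(controlNet.controlNets).some(
    (cn) => cn.controlImage === imageName || cn.processedControlImage === imageName
  ),
  isCanvasImage: canvas.layerState.objects.some(
    (o) => o.kind === 'image' && o.imageName === imageName
  ),
});

// The delete dialog always opens when any flag is true, even if the user has
// ticked "don't confirm on delete".
```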
3d249c4fa3 feat(ui): refactor image deletion
Add `DeleteImageContext`:
- provide a single function to delete an image
- opens the modal or immediately deletes, if confirm is off
2023-06-06 14:35:07 +10:00
fa338ddb6a feat(ui): add useGetIsImageInUse
Checks if an image is currently being used eg in canvas, nodes, controlnet, init image.
2023-06-06 14:35:07 +10:00
b200451330 feat(ui): add nodesSelector 2023-06-06 14:35:07 +10:00
8283d23b74 feat(ui): remove shouldTransformUrls
This is no longer used.
2023-06-06 14:35:07 +10:00
2fc0a4d53b feat(ui): improve handling for urls/metadata received
Update images everywhere when urls or metadata is received:
- control images
- init images
- canvas
- nodes
- init image

Also renamed the variable.
2023-06-06 14:35:07 +10:00
3ff732d583 feat(ui): clear controlnet image when image deleted 2023-06-06 14:35:07 +10:00
840c632c0a feat(ui): sort images by updated_at instead of created_at
fixes issue where saved staging area images were not sorted as expected in the gallery.
2023-06-06 14:30:53 +10:00
40d6e4f287 fix(ui): fix canvas auto-save not working 2023-06-06 14:30:53 +10:00
fc5f9c30a6 fix(ui): fix metadata viewer not working for canvas images 2023-06-06 14:30:53 +10:00
229de2dbb8 feat(ui): fix canvas saving
- fix "bounding box region only" not being respected when saving
- add toasts for each action
- improve workflow `take()` predicates to use the requestId
2023-06-06 14:30:53 +10:00
cc22427f25 feat(ui): improve UI on smaller screens
- responsive changes were causing a lot of weird layout issues, had to remove the rest of them
- canvas (non-beta) toolbar now wraps
- reduces minH for prompt boxes a bit
2023-06-06 14:29:57 +10:00
90333c0074 merge with main 2023-06-05 22:03:44 -04:00
54e5301b35 Multiple fixes
1. Model installer works correctly under Windows 11 Terminal
2. Fixed crash when configure script hands control off to installer
3. Kill install subprocess on keyboard interrupt
4. Command-line functionality for --yes configuration and model installation
   restored.
5. New command-line features:
   - install/delete lists of diffusers, LoRAS, controlnets and textual inversions
     using repo ids, paths or URLs.

Help:

```
usage: invokeai-model-install [-h] [--diffusers [DIFFUSERS ...]] [--loras [LORAS ...]] [--controlnets [CONTROLNETS ...]] [--textual-inversions [TEXTUAL_INVERSIONS ...]] [--delete] [--full-precision | --no-full-precision]
                              [--yes] [--default_only] [--list-models {diffusers,loras,controlnets,tis}] [--config_file CONFIG_FILE] [--root_dir ROOT]

InvokeAI model downloader

options:
  -h, --help            show this help message and exit
  --diffusers [DIFFUSERS ...]
                        List of URLs or repo_ids of diffusers to install/delete
  --loras [LORAS ...]   List of URLs or repo_ids of LoRA/LyCORIS models to install/delete
  --controlnets [CONTROLNETS ...]
                        List of URLs or repo_ids of controlnet models to install/delete
  --textual-inversions [TEXTUAL_INVERSIONS ...]
                        List of URLs or repo_ids of textual inversion embeddings to install/delete
  --delete              Delete models listed on command line rather than installing them
  --full-precision, --no-full-precision
                        use 32-bit weights instead of faster 16-bit weights (default: False)
  --yes, -y             answer "yes" to all prompts
  --default_only        only install the default model
  --list-models {diffusers,loras,controlnets,tis}
                        list installed models
  --config_file CONFIG_FILE, -c CONFIG_FILE
                        path to configuration file to create
  --root_dir ROOT       path to root of install directory
```
2023-06-05 21:45:35 -04:00
b31fc43bfa Fix potential race condition in config system (#3466)
There was a potential gotcha in the config system that was previously
merged with main. The `InvokeAIAppConfig` object was configuring itself
from the command line and configuration file within its initialization
routine. However, this could cause it to read `argv` from the command
line at unexpected times. This PR fixes the object so that it only reads
from the init file and command line when its `parse_args()` method is
explicitly called, which should be done at startup time in any top level
script that uses it.

In addition, using the `get_invokeai_config()` function to get a global
version of the config object didn't feel pythonic to me, so I have
changed this to `InvokeAIAppConfig.get_config()` throughout.

## Updated Usage

In the main script, at startup time, do the following:

```
from invokeai.app.services.config import InvokeAIAppConfig
config = InvokeAIAppConfig.get_config()
config.parse_args()
```

In non-main scripts, it is not necessary (or recommended) to call
`parse_args()`:
```
from invokeai.app.services.config import InvokeAIAppConfig
config = InvokeAIAppConfig.get_config()
```

The configuration object properties can be overridden when
`get_config()` is called by passing initialization values in the usual
way. If a property is set this way, then it will not be changed by
subsequent calls to `parse_args()`, but can only be changed by
explicitly setting the property.

```
config = InvokeAIAppConfig.get_config(nsfw_checker=True)
config.parse_args(argv=['--no-nsfw_checker'])
config.nsfw_checker
# True
```

You may specify alternative argv lists and configuration files in
`parse_args()`:

```
config.parse_args(argv=['--no-nsfw_checker'],
                  conf=OmegaConf.load('/tmp/test.yaml'))
```

For backward compatibility, the `get_invokeai_config()` function is
still available from the module, but has been removed from the rest of
the source tree.
2023-06-05 15:26:50 -07:00
9bcf0b2251 Merge branch 'main' into lstein/config-management-fixes 2023-06-05 15:10:33 -07:00
d4bc98c383 revert to conhost method 2023-06-05 11:46:01 -07:00
bc892c535c feat(ui): fix image fit (#3501)
- Prevent init, current & control images from overflowing
2023-06-05 20:48:55 +12:00
099e1e7c08 feat(ui): fix image fit
- Prevent init, current & control images from overflowing
2023-06-05 17:16:30 +10:00
b1000e30c1 feat(ui): disable keyboard dnd
Need to fix a bug w/ collision detection before enabling it. Will pursue later.
2023-06-05 15:24:24 +10:00
7bd94eac0e feat(ui): support image dnd to canvas 2023-06-05 15:24:24 +10:00
2c77563dcc feat(ui): move DropOverlay into its own IAIDropOverlay component 2023-06-05 15:24:24 +10:00
603c9a587e open Windows Terminal maximized 2023-06-05 00:24:13 -04:00
1a5a2dfda9 increased window size 2023-06-04 23:54:52 -04:00
090b7eeaf3 workaround to get adequate window size on Windows Terminal 2023-06-04 23:44:07 -04:00
117536324c the "restore" env variable in .bat launcher confuses pydantic 2023-06-04 22:53:46 -04:00
999c092b6a fix mouse and window resizing issues 2023-06-04 22:00:11 -04:00
9e31b1f387 Merge branch 'main' into lstein/config-management-fixes 2023-06-04 18:17:43 -04:00
cb157ea530 fix crash when install-models launched from config script 2023-06-04 14:55:51 -04:00
5f6f38074d merge with main 2023-06-04 13:59:31 -04:00
25b8dd340a Prompting: enable long prompts and compel's new .and() concatenating feature (#3497)
This PR adds long prompt support and enables compel's new `.and()`
concatenation feature, which improves image quality, especially with SD2.1.

example of a long prompt:
> a moist sloppy pindlesackboy sloppy hamblin' bogomadong, Clem Fandango
is pissed-off, Wario's Woods in background, making a noise like
ga-woink-a
![000075 6dfd7adf
466129594](https://github.com/invoke-ai/InvokeAI/assets/144366/051608b6-8d52-463b-af10-04b695cda9c1)

the same prompt broken into fragments and concatenated using `.and()`
(syntax works like `.blend()`):
```
("a moist sloppy pindlesackboy sloppy hamblin' bogomadong", 
"Clem Fandango is pissed-off", 
"Wario's Woods in background", 
"making a noise like ga-woink-a").and()
```
![000076 68b1c320
466129594](https://github.com/invoke-ai/InvokeAI/assets/144366/3fee291f-5562-40f9-9c3c-a73765fc893a)


and a less silly example:

> A dream of a distant galaxy, by Caspar David Friedrich, matte
painting, trending on artstation, HQ
![000129 1b33b559
2793529321](https://github.com/invoke-ai/InvokeAI/assets/144366/d4113756-ed0d-49cd-bb2e-a2fc4a09e0af)

the same prompt broken into two fragments and concatenated:
```
("A dream of a distant galaxy, by Caspar David Friedrich, matte painting", 
"trending on artstation, HQ").and()
```
![000128 b5d5cd62
2793529321](https://github.com/invoke-ai/InvokeAI/assets/144366/c373c009-05db-4c42-8a1d-c89fbdb334ec)

As with `.blend()` you can also weight the parts, e.g. `("a man eating an
apple", "sitting on the roof of a car", "high quality, trending on
artstation, 8K UHD").and(1, 0.5, 0.5)`, which will assign weight `1` to
`a man eating an apple` and `0.5` each to `sitting on the roof of a car` and
`high quality, trending on artstation, 8K UHD`.
2023-06-05 04:53:08 +12:00
fb06f5b892 Merge branch 'main' into feat_compel_longprompts_and_concat 2023-06-05 04:34:39 +12:00
1a7fb601dc ask user for v2 variant when model manager can't infer it 2023-06-04 11:27:44 -04:00
cdcfda164d enable long prompts, upgrade compel to enable .and() (concatenating prompts) 2023-06-04 15:30:54 +02:00
966b154a1f Update web README.md (#3496) 2023-06-05 00:56:00 +12:00
95fa66661c dummy commit to make github actions run 2023-06-04 22:55:35 +10:00
6247b79111 docs(ui): update API_CLIENT 2023-06-04 22:46:53 +10:00
5831364f9c Update web README.md 2023-06-04 22:44:18 +10:00
919b81cff1 fix(ui): fix rebase issue 2023-06-04 22:34:58 +10:00
065fff7db5 fix(ui): fix wonkiness with image dnd 2023-06-04 22:34:58 +10:00
a664ee30a2 feat(ui): do not change images if the dropped image is the same image 2023-06-04 22:34:58 +10:00
03f3ad435a feat(ui): updated controlnet logic/ui 2023-06-04 22:34:58 +10:00
2270c270ef feat(ui): add tooltip to IAISwitch 2023-06-04 22:34:58 +10:00
4f7820719b feat(ui): add ellipsis direction to IAICustomSelect 2023-06-04 22:34:58 +10:00
fa285883ad feat(ui): make OverlayDragImage translucent 2023-06-04 22:34:58 +10:00
474fca8e6a feat(ui): add controlNetDenylist 2023-06-04 22:34:58 +10:00
5dc0250b00 feat(ui): ControlNet layout tweaks 2023-06-04 22:34:58 +10:00
f269377a01 feat(ui): "ProcessorOptionsContainer" -> "ProcessorWrapper", organise 2023-06-04 22:34:58 +10:00
d0406024e3 feat(ui): IAICustomSelect tweak styles 2023-06-04 22:34:58 +10:00
aa3a969bd2 feat: Update ControlNet Model List & Map 2023-06-04 22:34:58 +10:00
73a95973a8 wip: Add Wrapper Container for Preprocessor Options
For fast altering of the layout across all preprocessors.
2023-06-04 22:34:58 +10:00
bf4fe3c1ac wip: Fixing layout shifts with the ControlNet tab 2023-06-04 22:34:58 +10:00
d6c08ba469 feat(ui): add mini/advanced controlnet ui 2023-06-04 22:34:58 +10:00
69f0ba65f1 chore(ui): bump react-icons 2023-06-04 22:34:58 +10:00
828c86964d feat(ui): IAICustomSelect prevent label wrap 2023-06-04 22:34:58 +10:00
54b7ddd63f feat(ui): IAIDndImage cursor: 'grab' 2023-06-04 22:34:58 +10:00
a0dde66b5d feat(ui): more work on controlnet mini 2023-06-04 22:34:58 +10:00
b6b3b9f99c feat(ui): make scrollbar less bright 2023-06-04 22:34:58 +10:00
faa69f8a47 feat(ui): add alpha colors 2023-06-04 22:34:58 +10:00
d92c7f5483 feat(ui): organize IAIDndImage component 2023-06-04 22:34:58 +10:00
6b824eb112 feat(ui): initial mini controlnet UI, dnd improvements 2023-06-04 22:34:58 +10:00
72b4371804 feat(ui): control image auto-process 2023-06-04 22:34:58 +10:00
fa290aff8d feat(ui): add defaults for all processors 2023-06-04 22:34:58 +10:00
3d99d7ae8b feat(ui): update handling of inProgess, do not allow cnet process when processing 2023-06-04 22:34:58 +10:00
2eb367969c feat(ui): do not autoprocess control if invocation in progress 2023-06-04 22:34:58 +10:00
9cdad95f48 feat(ui): add rest of controlnet processors 2023-06-04 22:34:58 +10:00
707ed39300 chore(ui): regen api client 2023-06-04 22:34:58 +10:00
6bbb5f061a feat(nodes): update controlnet names/descriptions 2023-06-04 22:34:58 +10:00
6896e69e95 fix(ui): fix multiple controlnets 2023-06-04 22:34:58 +10:00
b17f4c1650 feat(ui): more tweaking controlnet ui 2023-06-04 22:34:58 +10:00
98493ed9e2 feat(ui): reorg parameter panel to make room for controlnet 2023-06-04 22:34:58 +10:00
94c953deab feat(ui): get processed images back into controlnet ui 2023-06-04 22:34:58 +10:00
fa4d88e163 feat(ui): improve drag and drop ux 2023-06-04 22:34:58 +10:00
b1e1e3efc7 fix(ui): fix IAISelectableImage fallback 2023-06-04 22:34:58 +10:00
3b9426eb72 feat(ui): controlnet/image dnd wip
Implement `dnd-kit` for image drag and drop
- vastly simplifies logic bc we can drag and drop non-serializable data (like an `ImageDTO`)
- also much prettier
- also will fix conflicts with file upload via OS drag and drop, bc `dnd-kit` does not use native HTML drag and drop API
- Implemented for Init image, controlnet, and node editor so far

More progress on the ControlNet UI
2023-06-04 22:34:58 +10:00
e2e07696fc feat(ui): wip controlnet ui 2023-06-04 22:34:58 +10:00
d6a959b000 feat(nodes): tidy controlnet processor nodes & improve descriptions 2023-06-04 22:34:58 +10:00
c3935d3849 feat(nodes): add separate scripts to launch cli and web (#3495) 2023-06-04 08:13:14 -04:00
383e3d77cb feat(nodes): add separate scripts to launch cli and web 2023-06-04 22:02:47 +10:00
31e97ead2a move invokeai.db to ~/invokeai/databases
- The invokeai.db database file has now been moved into
  `INVOKEAI_ROOT/databases`. Using the plural here to allow for a
  possible future with more than one database file.

- Removed a few dangling debug messages that appeared during
  testing.

- Rebuilt frontend to test web.
2023-06-03 20:25:34 -04:00
0b49995659 merge with main 2023-06-03 20:06:27 -04:00
ff204db6b2 Add logging configuration (#3460)
This PR provides a number of options for controlling how InvokeAI logs
messages, including options to log to a file, syslog and a web server.
Several logging handlers can be configured simultaneously.

## Controlling How InvokeAI Logs Status Messages

InvokeAI logs status messages using a configurable logging system. You
can log to the terminal window, to a designated file on the local
machine, to the syslog facility on a Linux or Mac, or to a properly
configured web server. You can configure several logs at the same time,
and control the level of messages logged and the logging format (to a
limited extent).

Three command-line options control logging:

### `--log_handlers <handler1> <handler2> ...`

This option activates one or more log handlers. Options are "console",
"file", "syslog" and "http". To specify more than one, separate them by
spaces:

```bash
invokeai-web --log_handlers console syslog=/dev/log file=C:\Users\fred\invokeai.log
```

The format of these options is described below.

### `--log_format {plain|color|legacy|syslog}`

This controls the format of log messages written to the console. Only
the "console" log handler is currently affected by this setting.

* "plain" provides formatted messages like this:

```bash

[2023-05-24 23:18:50,352]::[InvokeAI]::DEBUG --> this is a debug message
[2023-05-24 23:18:50,352]::[InvokeAI]::INFO --> this is an informational message
[2023-05-24 23:18:50,352]::[InvokeAI]::WARNING --> this is a warning
[2023-05-24 23:18:50,352]::[InvokeAI]::ERROR --> this is an error
[2023-05-24 23:18:50,352]::[InvokeAI]::CRITICAL --> this is a critical error
```

* "color" produces similar output, but the text will be color coded to
indicate the severity of the message.

* "legacy" produces output similar to InvokeAI versions 2.3 and earlier:

```bash
### this is a critical error
*** this is an error
** this is a warning
>> this is an informational message
   | this is a debug message
```

* "syslog" produces messages suitable for syslog entries:

```bash
InvokeAI [2691178] <CRITICAL> this is a critical error
InvokeAI [2691178] <ERROR> this is an error
InvokeAI [2691178] <WARNING> this is a warning
InvokeAI [2691178] <INFO> this is an informational message
InvokeAI [2691178] <DEBUG> this is a debug message
```

(note that the date, time and hostname will be added by the syslog
system)

### `--log_level {debug|info|warning|error|critical}`

Providing this command-line option will cause only messages at the
specified level or above to be emitted.
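
For example, to log to the console only, in plain format, and include everything down to debug output (an illustrative command using only the flags documented above):

```bash
invokeai-web --log_handlers console --log_format plain --log_level debug
```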

## Console logging

When "console" is provided to `--log_handlers`, messages will be written
to the command line window in which InvokeAI was launched. By default,
the color formatter will be used unless overridden by `--log_format`.

## File logging

When "file" is provided to `--log_handlers`, entries will be written to
the file indicated in the path argument. By default, the "plain" format
will be used:

```bash
invokeai-web --log_handlers file=/var/log/invokeai.log
```

## Syslog logging

When "syslog" is requested, entries will be sent to the syslog system.
There are a variety of ways to control where the log message is sent:

* Send to the local machine using the `/dev/log` socket:

```
invokeai-web --log_handlers syslog=/dev/log
```

* Send to the local machine using a UDP message:

```
invokeai-web --log_handlers syslog=localhost
```

* Send to the local machine using a UDP message on a nonstandard port:

```
invokeai-web --log_handlers syslog=localhost:512
```

* Send to a remote machine named "loghost" on the local LAN using
facility LOG_USER and UDP packets:

```
invokeai-web --log_handlers syslog=loghost,facility=LOG_USER,socktype=SOCK_DGRAM
```

This can be abbreviated `syslog=loghost`, as LOG_USER and SOCK_DGRAM are
defaults.

* Send to a remote machine named "loghost" using the facility LOCAL0 and
using a TCP socket:

```
invokeai-web --log_handlers syslog=loghost,facility=LOG_LOCAL0,socktype=SOCK_STREAM
```

If no arguments are specified (just a bare "syslog"), then the logging
system will look for a UNIX socket named `/dev/log`, and if not found
try to send a UDP message to `localhost`. The Macintosh OS used to
support logging to a socket named `/var/run/syslog`, but this feature
has since been disabled.

## Web logging

If you have access to a web server that is configured to log messages
when a particular URL is requested, you can log using the "http" method:

```
invokeai-web --log_handlers http=http://my.server/path/to/logger,method=POST
```

The optional [,method=] part can be used to specify whether the URL
accepts GET (default) or POST messages.

Currently password authentication and SSL are not supported.

## Using the configuration file

You can set and forget logging options by adding a "Logging" section to
`invokeai.yaml`:

```
InvokeAI:
  [... other settings...]
  Logging:
    log_handlers:
       - console
       - syslog=/dev/log
    log_level: info
    log_format: color
```
2023-06-03 20:03:40 -04:00
f74f3d6a3a many TUI improvements:
1. Separated the "starter models" and "more models" sections. This
   gives us room to list all installed diffusers models, not just
   those that are on the starter list.

2. Support mouse-based paste into the textboxes with either middle
   or right mouse buttons.

3. Support terminal-style cursor movement:
     ^A to move to beginning of line
     ^E to move to end of line
     ^K kill text to right and put in killring
     ^Y yank text back

4. Internal code cleanup.
2023-06-03 16:17:53 -04:00
713fb061e8 Merge branch 'main' into release/make-web-dist-startable 2023-06-02 23:19:33 -04:00
77b7680b32 slight refactoring of code; configure --yes should work now 2023-06-02 23:19:14 -04:00
ff63433591 Merge branch 'main' into lstein/config-management-fixes 2023-06-02 22:56:43 -04:00
31281d7181 Merge branch 'main' into lstein/logging-improvements 2023-06-02 22:56:13 -04:00
8285fbb0b1 Merge branch 'lstein/new-model-manager' of github.com:invoke-ai/InvokeAI into lstein/new-model-manager 2023-06-02 22:48:00 -04:00
951e6b746c remove model cache test; should be replaced with something else 2023-06-02 22:47:48 -04:00
44a6623094 Merge branch 'main' into lstein/new-model-manager 2023-06-02 22:40:51 -04:00
72d1e4e404 fix bug in model_manager that prevented import of inpainting models 2023-06-02 22:39:26 -04:00
91918e648b dynamic display of log messages now working 2023-06-02 22:24:46 -04:00
1390b65a9c new TUI is fully functional; needs some polishing 2023-06-02 17:20:50 -04:00
82231369d3 Make Invoke Button also the progress bar (#3492)
On some screens the progress bar at the top is hard to see; the bar should
only show when generation is in progress.


![Animation](https://github.com/invoke-ai/InvokeAI/assets/115216705/04f945d3-377b-4646-b125-1355e74b6b09)
2023-06-02 19:30:45 +12:00
7620bacc01 feat: Add temporary NodeInvokeButton 2023-06-02 17:55:15 +12:00
ea9cf04765 fix: Remove progress bg instead of altering button bg 2023-06-02 17:36:14 +12:00
47301e6f85 fix: Do the same without zIndex 2023-06-02 17:33:38 +12:00
f143fb7254 feat: Make Invoke Button also the progress bar 2023-06-02 17:24:40 +12:00
2bdb655375 Change to absolute 2023-06-02 14:59:10 +10:00
41f7758977 listing, downloading and deleting LoRAs working; TI support pending 2023-06-02 00:40:15 -04:00
8ae1eaaccc Add Progress bar under invoke Button
On some screens the progress bar at the top of the screen gets cut off
2023-06-02 14:19:02 +10:00
98773b20ac merge with main 2023-06-01 18:09:49 -04:00
d66979073b add optional config for settings modal 2023-06-02 00:36:45 +10:00
c9e621093e fix(ui): fix looping gallery images fetch
The gallery could get into a state where it thought it had just reached the end of the list and would endlessly fetch more images, even when there were no more images to fetch (weird, I know).

Add some logic to remove the `end reached` handler when there are no more images to load.
2023-06-02 00:34:03 +10:00
e06ba40795 fix(ui): do not allow dpmpp_2s to be used ever
It doesn't work for the img2img pipelines, but the conditional display that was implemented could break the scheduler selection dropdown.

Simple fix until diffusers merges the fix: never use this scheduler.
2023-06-02 00:30:01 +10:00
6571e4c2fd feat(ui): refactor parameter recall
- use zod to validate parameters before recalling
- update recall params hook to handle all validation and UI feedback
2023-06-02 00:30:01 +10:00
ff9240b51d slight code cleanup 2023-06-01 00:45:07 -04:00
18466e01fd tab selection seems very natural; not wired to backend yet 2023-06-01 00:43:28 -04:00
e9821ab711 implemented tabbed model selection; not wired to backend yet 2023-06-01 00:31:46 -04:00
d6530df635 rename invokeai.backend.config to invokeai.backend.install 2023-05-31 21:34:20 -04:00
3c40e7fc1c most (all?) references to CLI deprecated 2023-05-31 21:29:52 -04:00
b47786e846 First working TI draft 2023-05-31 02:12:27 +03:00
062b2cf46f fix(ui): fix width and height not working on txt2img tab
I missed a spot when working on the graph logic yesterday.
2023-05-30 18:41:09 -04:00
082ecf6f25 minor formatting improvements 2023-05-30 13:59:32 -04:00
1632ac6b9f add controlnet model downloading 2023-05-30 13:49:43 -04:00
69ccd3a0b5 Fixes for checkpoint models 2023-05-30 19:12:47 +03:00
877959b413 fix(ui): ensure download image opens in new tab 2023-05-30 09:22:54 -04:00
6e60f7517b feat(ui): add model description tooltips 2023-05-30 09:06:13 -04:00
296ee6b7ea feat(ui): tidy ParamScheduler component 2023-05-30 09:06:13 -04:00
7c7ffddb2b feat(ui): upgrade IAICustomSelect to optionally display tooltips for each item 2023-05-30 09:06:13 -04:00
e1ae7842ff feat(ui): add defaultModel to config 2023-05-30 09:06:13 -04:00
9687fe7bac fix(ui): set default model to first model (alpha sort) 2023-05-30 09:06:13 -04:00
a9a2bd90c2 fix(nodes): set min and max for l2l strength 2023-05-30 09:06:13 -04:00
47ca71a7eb fix(nodes): set cfg_scale min to 1 in latents 2023-05-30 09:06:13 -04:00
a9c47237b1 fix(ui): mark img2img resize node intermediate 2023-05-30 09:06:13 -04:00
33bbae2f47 fix(ui): fix missing init image when fit disabled 2023-05-30 09:06:13 -04:00
fab7a1d337 fix(ui): fix bug with staging bbox not resetting 2023-05-30 09:06:13 -04:00
cffcf80977 fix(ui): remove w/h from canvas params, add bbox w/h 2023-05-30 09:06:13 -04:00
1a3fd05b81 fix(ui): fix canvas bbox autoscale 2023-05-30 09:06:13 -04:00
c22c6ca135 fix(ui): fix img2img fit 2023-05-30 09:06:13 -04:00
3afb6a387f chore(ui): regen api 2023-05-30 09:06:13 -04:00
33e5ed7180 fix(ui): fix edge case in nodes graph building
Inputs with explicit values are validated by pydantic even if they also
have a connection (which is the actual value that is used).

Fix this by omitting explicit values for inputs that have a connection.
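
A minimal sketch of the idea, shown here in Python for brevity (the actual fix lives in the UI's TypeScript graph builder; the names are hypothetical):

```python
def build_node_payload(fields: dict, connected_inputs: set) -> dict:
    # Omit the explicit value of any input that also has a connection,
    # so validation only ever sees the connected value.
    return {name: value for name, value in fields.items()
            if name not in connected_inputs}
```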
2023-05-30 09:06:13 -04:00
2067757fab feat(ui): enable progress images by default 2023-05-30 09:06:13 -04:00
b1b94a3d56 Fixed problem with inpainting after controlnet support was added to main.
The problem was that controlnet support involved adding `**kwargs` to method calls down in the denoising loop, and `AddsMaskLatents` didn't accept a `**kwargs` argument, so it was changed to accept and pass on `**kwargs`.
2023-05-30 08:01:21 -04:00
c9ee42450e added controlnet models to frontend; backend needs to be done 2023-05-30 00:38:37 -04:00
10fe31c2a1 Merge branch 'main' into lstein/config-management-fixes 2023-05-29 21:03:03 -04:00
420a76ecdd Add lora loader node 2023-05-30 02:12:33 +03:00
79de9047b5 First working lora implementation 2023-05-30 01:11:00 +03:00
dc54cbb1fc Merge branch 'main' into release/make-web-dist-startable 2023-05-29 14:16:10 -04:00
a0b6654f6a updated postprocessing, prompts, img2img and web docs 2023-05-29 10:55:57 -04:00
070218aba7 feat(ui): add progress image toggle to current image buttons 2023-05-29 09:07:46 -04:00
f1c226b171 fix(ui): remove console.log() 2023-05-29 09:07:46 -04:00
7004430380 feat(ui): gallery filter dropdown -> Images/Assets toggle 2023-05-29 09:07:46 -04:00
1ddc620192 feat(ui): only cancel on staging commit if processing 2023-05-29 09:07:46 -04:00
a7cebbd970 feat(ui): cancel session when staging image accepted 2023-05-29 09:07:46 -04:00
d97438b0b3 fix(ui): fix typo in actionsDenylist 2023-05-29 09:07:46 -04:00
4522f3f4c9 fix(ui): fix progress images in canvas 2023-05-29 09:07:46 -04:00
6fe28980b0 feat(ui): revert in-gallery progress
Wasn't fully baked. Will revisit in the future.
2023-05-29 09:07:46 -04:00
4aec5d8ffc fix(ui): typo 2023-05-29 09:07:46 -04:00
bbb4e8f5ef feat(nodes): add resize image and scale image nodes 2023-05-29 09:07:46 -04:00
bce33ea62e fix(ui): when session is complete, null out progress image
This may cause minor gallery jumpiness at the very end of processing, but is necessary to prevent the progress image from sticking around if the last node in a session did not have an image output.
2023-05-29 09:07:46 -04:00
e4705d5ce7 fix(ui): add additional socket event layer to gate handling socket events
Some socket events should not be handled by the slice reducers. For example generation progress should not be handled for a canceled session.

Added another layer of socket actions.

Example:
- `socketGeneratorProgress` is dispatched when the actual socket event is received
- Listener middleware exclusively handles this event and determines if the application should also handle it
- If so, it dispatches `appSocketGeneratorProgress`, which the slices can handle

Needed to fix issues related to canceling invocations.
2023-05-29 09:07:46 -04:00
6764b2a854 fix(ui): fix save to gallery without bounding box 2023-05-29 09:07:46 -04:00
970340cf62 fix(ui): infill and scaling options label 2023-05-29 09:07:46 -04:00
043f9d9ba4 fix(ui): fix auto-switch to new images 2023-05-29 09:07:46 -04:00
00cb8a0c64 Merge branch 'main' into doc_updates_23 2023-05-29 08:13:12 -04:00
6f82801d07 fix(ui): fix canvas save to gallery incorrect is_intermediate flag 2023-05-28 20:19:56 -04:00
3e3dd39ae4 fix(nodes): fix images service update() for is_intermediate 2023-05-28 20:19:56 -04:00
89aa06e014 feat(ui): consolidate images slice
Now that images are in a database and we can make filtered queries, we can do away with the cumbersome `resultsSlice` and `uploadsSlice`.

- Remove `resultsSlice` and `uploadsSlice` entirely
- Add `imagesSlice` fills the same role
- Convert the application to use `imagesSlice`, reducing a lot of messy logic where we had to check which category was selected
- Add a simple filter popover to the gallery, which lets you select any number of image categories
2023-05-28 20:19:56 -04:00
6cc00ef4b7 chore(ui): regen api client 2023-05-28 20:19:56 -04:00
f31e62afad feat(nodes): make list images route use offset pagination
Because we dynamically insert images into the DB and UI's images state, `page`/`per_page` pagination makes loading the images awkward.

Using `offset`/`limit` pagination lets us query for images with an offset equal to the number of images already loaded (which match the query parameters).

The result is that we always get the correct next page of images when loading more.
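
A sketch of the client-side arithmetic this enables (a hypothetical helper, not the actual UI code):

```python
def next_page_params(num_loaded: int, limit: int = 20) -> dict:
    # Request the images that come after everything already in state,
    # no matter how many were inserted dynamically since the last fetch.
    return {"offset": num_loaded, "limit": limit}
```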
2023-05-28 20:19:56 -04:00
38fd2ad45d fix(ui): fix metadata viewer crash 2023-05-28 20:19:56 -04:00
05b99b5377 fix(ui): fix erroneously displays is_intermediate field on nodes 2023-05-28 20:19:56 -04:00
08a14ee6d5 fix(nodes): fix conflicts with controlnet 2023-05-28 20:19:56 -04:00
29fcc92da9 feat(ui): handle new image origin/category setup
- Update all thunks & network related things
- Update gallery

What I have not done yet is rename the gallery tabs and the relevant slices, but I believe the functionality is all there.

Also I fixed several bugs along the way but couldn't really commit them separately because I was refactoring. Can't remember what they were, but they were related to the gallery image switching.
2023-05-28 20:19:56 -04:00
d78e3572e3 chore(ui): regen api client 2023-05-28 20:19:56 -04:00
160267c71a feat(nodes): refactor image types
- Remove `ImageType` entirely, it is confusing
- Create `ResourceOrigin`, may be `internal` or `external`
- Revamp `ImageCategory`, may be `general`, `mask`, `control`, `user`, `other`. Expect to add more as time goes on
- Update images `list` route to accept `include_categories` OR `exclude_categories` query parameters to afford finer-grained querying. All services are updated to accommodate this change.

The new setup should account for our types of images, including the combinations we couldn't really handle until now:
- Canvas init and masks
- Canvas when saved-to-gallery or merged
2023-05-28 20:19:56 -04:00
fd47e70c92 feat(nodes): use higher precision timestamps in db 2023-05-28 20:19:56 -04:00
9317b42e5f feat(nodes, ui): wip image types 2023-05-28 20:19:56 -04:00
bdab73701f fix(ui): canvas images not added to staging 2023-05-28 20:19:56 -04:00
3ea5e78322 fix(nodes): fix list images route param descriptions 2023-05-28 20:19:56 -04:00
f609ee21a2 fix(ui): handle intermediates when fetching gallery 2023-05-28 20:19:56 -04:00
f51defeeb3 chore(ui): regen api client 2023-05-28 20:19:56 -04:00
ee0225f4ba fix(nodes): handle intermediates during images.get_many() 2023-05-28 20:19:56 -04:00
33a0af4637 feat(nodes): add nameservice
Currently only used to make names for images, but when latents, conditioning, etc. are managed in the DB, it will do the same for them.

Intended to eventually support custom naming schemes.
2023-05-28 20:19:56 -04:00
10c55310c0 index.md, features and concepts documents updated 2023-05-28 19:51:18 -04:00
d37b08a7dd Merge branch 'main' into release/make-web-dist-startable 2023-05-28 19:46:09 -04:00
9a796364da Fixed controlnet preprocessors and controlnet handling in TextToLatents to work with revised Image services. 2023-05-26 21:44:00 -04:00
1ad4eb3a7b Progress toward improvement in fieldTemplateBuilder.ts getFieldType() 2023-05-26 21:44:00 -04:00
3767a453bb Added float to FIELD_TYPE_MAP in constants.ts 2023-05-26 21:44:00 -04:00
b0892d30a4 Added mediapipe install requirement. Should be able to remove once controlnet_aux package adds mediapipe to its requirements. 2023-05-26 21:44:00 -04:00
d9b1e4a98c Added nodes for float params: ParamFloatInvocation and FloatCollectionOutput. Also added FloatOutput. 2023-05-26 21:44:00 -04:00
a4dec8c1d6 Fixed bug where MediapipFaceProcessorInvocation was ignoring max_faces and min_confidence params. 2023-05-26 21:44:00 -04:00
8960ceb98b Added Mediapipe image processor for use as ControlNet preprocessor.
Also hacked in ability to specify HF subfolder when loading ControlNet models from string.
2023-05-26 21:44:00 -04:00
be79d088c0 fix(nodes): controlnet input accepts list or single controlnet 2023-05-26 21:44:00 -04:00
009407ea3f fix(ui): fix node ui type hints 2023-05-26 21:44:00 -04:00
6999d28c7f chore(ui): regen api client 2023-05-26 21:44:00 -04:00
324e9eb74b Extended node-based ControlNet support to LatentsToLatentsInvocation. 2023-05-26 21:44:00 -04:00
56cff40362 Cleaning up after ControlNet refactor in TextToLatentsInvocation 2023-05-26 21:44:00 -04:00
2ba40c5e52 Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier. 2023-05-26 21:44:00 -04:00
3ab147204c Fix to work with current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle. 2023-05-26 21:44:00 -04:00
e4c89cba9c Switched ControlNet node modelname input from free text to default list of popular ControlNet model names. 2023-05-26 21:44:00 -04:00
322ea84c4e Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it. 2023-05-26 21:44:00 -04:00
f2b41c60ff Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input. 2023-05-26 21:44:00 -04:00
754acec92f Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each controlnet (specified as fraction of total steps)
2023-05-26 21:44:00 -04:00
11fc7e40a5 Refactored ControlNet support to consolidate multiple parameters into data struct. Also redid how multiple controlnets are handled. 2023-05-26 21:44:00 -04:00
d15bb88eb2 Removed last bits of dtype and device hardwiring from controlnet section 2023-05-26 21:44:00 -04:00
70ba36eefc Cleaning up mistakes after rebase. 2023-05-26 21:44:00 -04:00
7e70391c2b Cleaning up TextToLatent arg testing 2023-05-26 21:44:00 -04:00
e2a94be336 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
63a86eefb4 Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
b0727b9d47 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
d96e727dd5 Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
fe480886dc changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
8031d1827b Refactored controlnet node to output ControlField that bundles control info. 2023-05-26 21:44:00 -04:00
b5acdb322d Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
a4d1fe8819 Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node 2023-05-26 21:44:00 -04:00
10b7a58887 Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
901a277959 Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
aaa093bef1 Fixed use of ControlNet control_weight parameter 2023-05-26 21:44:00 -04:00
bb96543d66 Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this... 2023-05-26 21:44:00 -04:00
a2a2cfa765 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
18e6a2b410 Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
db27263bc2 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
0e027ec3ef Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
5acbbeecaa Added HED, LineArt, and OpenPose ControlNet nodes 2023-05-26 21:44:00 -04:00
6ef2168b67 changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
6d958a214c Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke()) 2023-05-26 21:44:00 -04:00
4ae4bf4ff9 Resolving conflicts in rebase to origin/main 2023-05-26 21:44:00 -04:00
fdef53b2de Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
11bd038b9d Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
768cfe3aab Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
c4277b0662 Moved to controlnet_aux v0.0.4, reinstated Zoe controlnet preprocessor. Also in pyproject.toml had to specify downgrade of timm to 0.6.13 _after_ controlnet-aux installs timm >= 0.9.2, because timm >0.6.13 breaks Zoe preprocessor. 2023-05-26 21:44:00 -04:00
020f3ccf07 fix(nodes): controlnet input accepts list or single controlnet 2023-05-26 21:44:00 -04:00
7467fa5e57 fix(ui): fix node ui type hints 2023-05-26 21:44:00 -04:00
e19ef7ed2f fix(ui): add control field type 2023-05-26 21:44:00 -04:00
71003be6b8 fix(ui): add value to conditioning field 2023-05-26 21:44:00 -04:00
c1dbafc2df chore(ui): regen api client 2023-05-26 21:44:00 -04:00
dcebd71381 Extended node-based ControlNet support to LatentsToLatentsInvocation. 2023-05-26 21:44:00 -04:00
d855a65e73 Cleaning up after ControlNet refactor in TextToLatentsInvocation 2023-05-26 21:44:00 -04:00
a9007c7e0f Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier. 2023-05-26 21:44:00 -04:00
af60304f97 Fix to work with current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle. 2023-05-26 21:44:00 -04:00
6de241eead Switched ControlNet node modelname input from free text to default list of popular ControlNet model names. 2023-05-26 21:44:00 -04:00
51032dc0b2 Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it. 2023-05-26 21:44:00 -04:00
9ec3d2bc0c Added dependency on controlnet-aux v0.0.3 2023-05-26 21:44:00 -04:00
297931f5d9 Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input. 2023-05-26 21:44:00 -04:00
f613c073c1 Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each controlnet (specified as fraction of total steps)
2023-05-26 21:44:00 -04:00
63d248622c Refactored ControlNet support to consolidate multiple parameters into data struct. Also redid how multiple controlnets are handled. 2023-05-26 21:44:00 -04:00
48485fe92f Removed last bits of dtype and device hardwiring from controlnet section 2023-05-26 21:44:00 -04:00
07726af703 Cleaning up mistakes after rebase. 2023-05-26 21:44:00 -04:00
ad1004b485 Cleaning up TextToLatent arg testing 2023-05-26 21:44:00 -04:00
0096fb2790 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
9c8c2e49d6 Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
2005a96847 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
00a8d60c1b Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
3aa182390a changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
e44f1d6d4e Refactored controlnet node to output ControlField that bundles control info. 2023-05-26 21:44:00 -04:00
dfdf8e2ead Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
3a645c4e80 Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node 2023-05-26 21:44:00 -04:00
113129daf9 Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
940e3b6635 Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
7fb29dabff Fixed lint-ish formatting error 2023-05-26 21:44:00 -04:00
714ad6dbb8 Fixed use of ControlNet control_weight parameter 2023-05-26 21:44:00 -04:00
c0863fa20f Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this... 2023-05-26 21:44:00 -04:00
78b0b37ba6 More rebase repair. 2023-05-26 21:44:00 -04:00
5d5cdc7716 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
93cd818f6a Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
598a628790 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
f3666eda63 Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
754017b59e Added an additional "raw_processed_image" output port to controlnets, mainly so could route ImageField to a ShowImage node 2023-05-26 21:44:00 -04:00
21251ce12c Added HED, LineArt, and OpenPose ControlNet nodes 2023-05-26 21:44:00 -04:00
dc12fa6cd6 changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
f2f4c37f19 Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke()) 2023-05-26 21:44:00 -04:00
0864fca641 Resolving conflicts in rebase to origin/main 2023-05-26 21:44:00 -04:00
5e4c0217c7 Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
78cd106c23 Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node 2023-05-26 21:44:00 -04:00
6ed0efa938 Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
ca0669c337 Resolving rebase conflict 2023-05-26 21:44:00 -04:00
b59a749627 Added example of using ControlNet with legacy Txt2Img generator 2023-05-26 21:44:00 -04:00
a91dee87d0 Added support for ControlNet and MultiControlNet to legacy non-nodal Txt2Img in backend/generator. Although backend/generator will likely disappear by v3.x, right now they are very useful for testing core ControlNet and MultiControlNet functionality while node codebase is rapidly evolving. 2023-05-26 21:44:00 -04:00
5ff98a4179 Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
36b2f12219 Merge branch 'main' into release/make-web-dist-startable 2023-05-26 12:56:24 -04:00
5569f205ee Update CODEOWNERS 2023-05-26 08:59:10 -04:00
a76cf8aab2 Update CODEOWNERS 2023-05-26 08:59:10 -04:00
5c0f0d1808 Merge branch 'main' into lstein/logging-improvements 2023-05-26 08:57:17 -04:00
951900a86a Merge branch 'main' into lstein/config-management-fixes 2023-05-26 08:56:41 -04:00
582f516fef Merge branch 'main' into release/make-web-dist-startable 2023-05-26 18:06:38 +10:00
a25bae2545 fix(ui): tweak log levels 2023-05-26 18:06:08 +10:00
0ea35b1e3d feat(ui): improve session canceled handling 2023-05-26 18:06:08 +10:00
c6f935bf1a feat(ui): improve gallery page handling 2023-05-26 18:06:08 +10:00
96b4d35d43 fix(ui): fix uploads not loading more images correctly after generation 2023-05-26 18:06:08 +10:00
7b0938e7e4 feat(ui): add comments for weird stuff 2023-05-26 18:06:08 +10:00
249522b568 fix(ui): fix gallery not loading more images correctly after generation 2023-05-26 18:06:08 +10:00
39088e42cc fix(ui): remove console logs 2023-05-26 18:06:08 +10:00
30e0033ebe fix(ui): fix results not added to gallery 2023-05-26 18:06:08 +10:00
b599c40099 feat(ui): improve session invoked handling 2023-05-26 18:06:08 +10:00
8f190169db feat(ui): improve session creation handling 2023-05-26 18:06:08 +10:00
1d4d705795 feat(ui): improve image urls handling 2023-05-26 18:06:08 +10:00
b3f71b3078 feat(ui): improve image metadata handling 2023-05-26 18:06:08 +10:00
6059db4f15 feat(ui): improve image delete handling 2023-05-26 18:06:08 +10:00
0d5f44b153 feat(ui): improve image upload handling 2023-05-26 18:06:08 +10:00
17164a37a8 fix(ui): fix gallery auto switch 2023-05-26 18:06:08 +10:00
f88ccabe30 fix(ui): gallery not loading on page load 2023-05-26 18:06:08 +10:00
e1c85f1234 Merge branch 'main' into release/make-web-dist-startable 2023-05-26 18:04:09 +10:00
f50293920e correct typo in tiled_vae field definition 2023-05-25 23:29:16 -04:00
1e2db3a17f hook tiled_decode up to configuration 2023-05-25 23:28:15 -04:00
57a3eb3652 feat(ui): unset progress image inside invocationComplete listener 2023-05-26 13:25:50 +10:00
82a8972bde create listener for imageMetdataReceived to swap our progressImage 2023-05-26 13:25:50 +10:00
497a885c85 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 22:49:18 -04:00
4d9f55d0f6 replace deleted get_root() 2023-05-25 22:48:50 -04:00
5f8f51436a merge with main; fix conflicts 2023-05-25 22:40:45 -04:00
0c3b4bb70d chore(ui): regen api client 2023-05-25 22:17:14 -04:00
33e13820fc feat(nodes): remove meta node field; use individual is_intermediate field instead
as suggested by @Kyle0654
2023-05-25 22:17:14 -04:00
43d991cfdb fix(ui): fix incorrect comment 2023-05-25 22:17:14 -04:00
291e9cf14b fix(nodes): add is_intermediate to all image-outputting nodes 2023-05-25 22:17:14 -04:00
a2de5c9963 feat(ui): change intermediates handling
- Update the canvas graph generation to flag its uploaded init and mask images as `intermediate`.
- During canvas setup, hit the update route to associate the uploaded images with the session id.
- Organize the socketio and RTK listener middlware better. Needed to facilitate the updated canvas logic.
- Add a new action `sessionReadyToInvoke`. The `sessionInvoked` action is *only* ever run in response to this event. This lets us do whatever complicated setup (eg canvas) and explicitly invoking. Previously, invoking was tied to the socket subscribe events.
- Some minor tidying.
2023-05-25 22:17:14 -04:00
5025f84627 chore(ui): regen api client 2023-05-25 22:17:14 -04:00
d2c8a53c55 feat(nodes): change intermediates handling
- `ImageType` is now restricted to `results` and `uploads`.
- Add a reserved `meta` field to nodes to hold the `is_intermediate` boolean. We can extend it in the future to support other node `meta`.
- Add a `is_intermediate` column to the `images` table to hold this. (When `latents`, `conditioning` etc are added to the DB, they will also have this column.)
- All nodes default to `*not* intermediate`. Nodes must explicitly be marked `intermediate` for their outputs to be `intermediate`.
- When building a graph, you can set `node.meta.is_intermediate=True` and it will be handled as an intermediate.
- Add a new `update()` method to the `ImageService`, and a route to call it. Updates have a strict model; currently only `session_id` and `image_category` may be updated (see the sketch after this list).
- Add a new `update()` method to the `ImageRecordStorageService` to update the image record using the model.
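
A sketch of the strict update model described above (a hypothetical pydantic model, not the actual class):

```python
from typing import Optional

from pydantic import BaseModel

class ImageUpdateChanges(BaseModel):
    # Only these fields may be changed; anything else is rejected.
    session_id: Optional[str] = None
    image_category: Optional[str] = None

    class Config:
        extra = "forbid"
```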
2023-05-25 22:17:14 -04:00
5659d10778 remove unused function get_root() 2023-05-25 22:06:37 -04:00
46cab81d6f fix missing web_dir 2023-05-25 22:01:48 -04:00
dd157bce85 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 21:52:05 -04:00
2f25dd7d0d Merge branch 'main' into lstein/config-management-fixes 2023-05-25 21:10:12 -04:00
e56965ad76 documentation tweaks; fixed initialization in a couple more places 2023-05-25 21:10:00 -04:00
2273b3a8c8 fix potential race condition in config system 2023-05-25 20:41:26 -04:00
05fb0ac2b2 Update latent.py 2023-05-26 10:27:33 +10:00
d4acd49ee3 Update generate.py 2023-05-26 10:27:33 +10:00
d98868e524 Update generationSlice.ts to change Default Scheduler 2023-05-26 10:27:33 +10:00
93bb27f2c7 fix gallery navigation 2023-05-26 10:01:06 +10:00
a4c44edf8d more use parameter fixes 2023-05-26 10:01:06 +10:00
1e94d7739a fix metadata references, add support for negative_conditioning syntax 2023-05-26 10:01:06 +10:00
9110838fe4 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 19:06:09 -04:00
ca7b267326 raise error if syslogging requested and syslog lib not available 2023-05-25 10:10:46 -04:00
7f5992d6a5 Merge branch 'lstein/logging-improvements' of github.com:invoke-ai/InvokeAI into lstein/logging-improvements 2023-05-25 09:39:56 -04:00
88776fb2de get invokeai_configure working again 2023-05-25 09:39:45 -04:00
34f567abd4 Merge branch 'main' into lstein/logging-improvements 2023-05-25 08:48:47 -04:00
b87f3043ae add logging configuration 2023-05-24 23:57:15 -04:00
3829ffbe66 fix(tests): add --use_memory_db flag; use it in tests 2023-05-25 12:12:31 +10:00
ad619ae880 fix(tests): log db_location 2023-05-25 12:12:31 +10:00
d22ebe08be fix(tests): log db_location 2023-05-25 12:12:31 +10:00
ee0c6ad86e fix(cli): fix invocation services for cli 2023-05-25 12:12:31 +10:00
96adb56633 fix(tests): fix missing services in tests; fix ImageField instantiation 2023-05-25 12:12:31 +10:00
3000436121 chore(nodes): remove unused imports 2023-05-25 12:12:31 +10:00
37cdd91f5d fix(nodes): use forward declarations for InvocationServices
Also use `TYPE_CHECKING` to get IDE hints.
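
A minimal sketch of the pattern (the import path is assumed for illustration):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers and IDEs; never imported at runtime,
    # which avoids the circular import.
    from invokeai.app.services.invocation_services import InvocationServices

class SomeInvocation:
    def invoke(self, services: "InvocationServices") -> None:
        ...
```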
2023-05-25 12:12:31 +10:00
cf12c7b1d9 Rename contributing.md to CONTRIBUTING.md 2023-05-24 16:33:25 -04:00
1f4a9365a0 Create contributing.md 2023-05-24 16:33:10 -04:00
bf94a48a6c Update CHANGELOG.md 2023-05-24 16:29:06 -04:00
6f3c6ddf3f Update 020_INSTALL_MANUAL.md
Corrected a markdown formatting error (missing backtick).
2023-05-24 11:33:32 -04:00
0bfbda512d build(nodes): remove references to metadata service in tests 2023-05-24 11:30:47 -04:00
295b98a13c build(nodes): remove outdated metadata test
I will add tests for the new service soon
2023-05-24 11:30:47 -04:00
ff6b345d45 fix(nodes): rebase fixes 2023-05-24 11:30:47 -04:00
1fb307abf4 feat(nodes): restore canvas functionality (non-latents) 2023-05-24 11:30:47 -04:00
29c952dcf6 feat(ui): restore canvas functionality 2023-05-24 11:30:47 -04:00
010f63a50d feat(ui): misc tidy 2023-05-24 11:30:47 -04:00
068bbe3a39 fix(ui): fix uploads tab in gallery 2023-05-24 11:30:47 -04:00
ad39680feb feat(nodes): wip inpainting nodes prep 2023-05-24 11:30:47 -04:00
1e0ae8404c feat(nodes): comment out seamless
This will be a model config feature when the model manager is ready.
2023-05-24 11:30:47 -04:00
460d555a3d feat(nodes): add image mul, channel, convert nodes
also make img node names consistent
2023-05-24 11:30:47 -04:00
66ad04fcfc feat(nodes): add mask image category 2023-05-24 11:30:47 -04:00
c7c0836721 feat(ui): migrate linear workflows to latents 2023-05-24 11:30:47 -04:00
d2c223de8f feat(nodes): move fully* to new images service
* except i haven't rebuilt inpaint in latents
2023-05-24 11:30:47 -04:00
dd16f788ed fix(nodes): fix RangeOfSizeInvocation off-by-one error 2023-05-24 11:30:47 -04:00
b25c1af018 feat(nodes): add RangeOfSizeInvocation
The `RangeInvocation` is a simple wrapper around `range()`, but you must provide `stop > start`.

`RangeOfSizeInvocation` replaces the `stop` parameter with `size`, so that you can just provide the `start` and `step` and get a range of `size` length.
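
A sketch of the size-based semantics (a hypothetical function, not the node's actual implementation):

```python
def range_of_size(start: int, size: int, step: int = 1) -> list:
    # Produce exactly `size` values beginning at `start`, spaced by `step`.
    return list(range(start, start + size * step, step))

# range_of_size(5, 3, 2) -> [5, 7, 9]
```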
2023-05-24 11:30:47 -04:00
8f393b64b8 feat(nodes): add seed validator
If `seed>SEED_MAX`, we can still continue if we parse the seed as `seed % SEED_MAX`.
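
A minimal sketch of the wrap-around idea, assuming a pydantic v1 validator and an illustrative SEED_MAX:

```python
from pydantic import BaseModel, validator

SEED_MAX = 2**32 - 1  # illustrative ceiling; the real constant may differ

class NoiseParams(BaseModel):
    seed: int = 0

    @validator("seed")
    def wrap_seed(cls, v: int) -> int:
        # Wrap out-of-range seeds into range instead of rejecting them.
        return v % (SEED_MAX + 1)
```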
2023-05-24 11:30:47 -04:00
55b3193629 fix(nodes): add RangeInvocation validator
`stop` must be greater than `start`.
2023-05-24 11:30:47 -04:00
6f78c073ed fix(ui): fix uploads & other bugs 2023-05-24 11:30:47 -04:00
c406be6f4f fix(ui): fix image deletion 2023-05-24 11:30:47 -04:00
aeaf3737aa fix(ui): fix gallery bugs 2023-05-24 11:30:47 -04:00
23d9d58c08 fix(nodes): fix bugs with serving images
When returning a `FileResponse`, we must provide a valid path, else an exception is raised outside the route handler.

Add the `validate_path` method back to the service so we can validate paths before returning the file.

I don't like this but apparently this is just how `starlette` and `fastapi` work with `FileResponse`.
2023-05-24 11:30:47 -04:00
4c331a5d7e chore(ui): regen api client 2023-05-24 11:30:47 -04:00
035425ef24 feat(nodes): address feedback
- Address database feedback:
  - Remove all the extraneous tables. Only an `images` table now:
  - `image_type` and `image_category` are unrestricted strings. When creating images, the provided values are checked to ensure they are a valid type and category.
  - Add `updated_at` and `deleted_at` columns. `deleted_at` is currently unused.
  - Use SQLite's built-in timestamp features to populate these. Add a trigger to update `updated_at` when the row is updated. There is currently no way to update a row.
  - Rename the `id` column in `images` to `image_name`
- Rename `ImageCategory.IMAGE` to `ImageCategory.GENERAL`
- Move all exceptions outside their base classes to make them more portable.
- Add `width` and `height` columns to the database. These store the actual dimensions of the image file, whereas the metadata's `width` and `height` refer to the respective generation parameters and are nullable.
- Make `deserialize_image_record` take a `dict` instead of `sqlite3.Row`
- Improve comments throughout
- Tidy up unused code/files and some minor organisation
2023-05-24 11:30:47 -04:00
021e5a2aa3 feat(nodes): improve metadata service comments 2023-05-24 11:30:47 -04:00
7a1de3887e feat(ui): wip update UI for migration 2023-05-24 11:30:47 -04:00
4a7a5234df fix(ui): fix image nodes losing image 2023-05-24 11:30:47 -04:00
6aebe1614d feat(ui): wip use new images service 2023-05-24 11:30:47 -04:00
74292eba28 chore(ui): regen api client 2023-05-24 11:30:47 -04:00
c31ff364ab fix(nodes): tidy images service 2023-05-24 11:30:47 -04:00
f310a39381 feat(nodes): finalize image routes 2023-05-24 11:30:47 -04:00
5a7e611e0a fix(nodes): fix image url 2023-05-24 11:30:47 -04:00
4e29a751d8 feat(ui): add POC image record fetching 2023-05-24 11:30:47 -04:00
3f94f81acd chore(ui): regen api client 2023-05-24 11:30:47 -04:00
5de3c41d19 feat(nodes): add metadata handling 2023-05-24 11:30:47 -04:00
f071b03ceb chore(ui): regen api client 2023-05-24 11:30:47 -04:00
b9375186a5 feat(nodes): consolidate image routers 2023-05-24 11:30:47 -04:00
11bd932cba feat(nodes): revert invocation_complete url hack 2023-05-24 11:30:47 -04:00
b77ccfaf32 chore(ui): regen api client 2023-05-24 11:30:47 -04:00
96653eebb6 build(ui): do not export schemas on api client generation 2023-05-24 11:30:47 -04:00
60d25f105f fix(nodes): restore metadata traverser 2023-05-24 11:30:47 -04:00
734b653a5f fix(nodes): add base images router 2023-05-24 11:30:47 -04:00
52c9e6ec91 feat(nodes): organise/tidy 2023-05-24 11:30:47 -04:00
c0f132e41a hack(nodes): hack to get image urls in the invocation complete event 2023-05-24 11:30:47 -04:00
cc1160a43a feat(nodes): streamline urlservice 2023-05-24 11:30:47 -04:00
adde8450bc fix(nodes): remove bad import 2023-05-24 11:30:47 -04:00
5bf9891553 feat(nodes): it works 2023-05-24 11:30:47 -04:00
22c34c343a feat(nodes): fix types for InvocationServices 2023-05-24 11:30:47 -04:00
f7804f6126 feat(nodes): add logger to images service 2023-05-24 11:30:47 -04:00
d14b02e93f feat(logger): fix logger type issues 2023-05-24 11:30:47 -04:00
1b75d899ae feat(nodes): wip image storage implementation 2023-05-24 11:30:47 -04:00
d4aa79acd7 fix(nodes): use save instead of set
`set` is a Python builtin
2023-05-24 11:30:47 -04:00
33d199c007 feat(nodes): image records router 2023-05-24 11:30:47 -04:00
9c89d3452c feat(nodes): add high-level images service
feat(nodes): add ResultsServiceABC & SqliteResultsService

**Doesn't actually work because of circular imports. Can't even test it.**

- add a base class for ResultsService and SQLite implementation
- use `graph_execution_manager` `on_changed` callback to keep `results` table in sync

fix(nodes): fix results service bugs

chore(ui): regen api

fix(ui): fix type guards

feat(nodes): add `result_type` to results table, fix types

fix(nodes): do not shadow `list` builtin

feat(nodes): add results router

It doesn't work due to circular imports still

fix(nodes): Result class should use outputs classes, not fields

feat(ui): crude results router

fix(ui): send to canvas in currentimagebuttons not working

feat(nodes): add core metadata builder

feat(nodes): add design doc

feat(nodes): wip latents db stuff

feat(nodes): images_db_service and resources router

feat(nodes): wip images db & router

feat(nodes): update image related names

feat(nodes): update urlservice

feat(nodes): add high-level images service
2023-05-24 11:30:47 -04:00
fb0b63c580 fix(nodes): fix seam painting
The problem was that the same seed was getting used for the seam painting pass, causing the fried look.

Same issue as if you do img2img on a txt2img with the same seed/prompt.

Thanks to @hipsterusername for teaming up to debug this. We got pretty deep into the weeds.
2023-05-25 00:58:03 +10:00
bb2c6e5925 Merge branch 'main' into release/make-web-dist-startable 2023-05-24 10:55:51 -04:00
928caff2a6 fix: attempt to fix actions (#3454)
I think this conditional needs to be removed.
2023-05-25 02:37:39 +12:00
670c79f2c7 fix: attempt to fix actions
I think this conditional needs to be removed.
2023-05-25 00:31:48 +10:00
d6efb98953 build: fix test-invoke-pip.yml
- Restore conditional which ensures tests are only run on `main`
- Fix `yaml` syntax error
2023-05-24 21:48:12 +10:00
19da795274 fix(ui): send to canvas in currentimagebuttons not working 2023-05-24 21:46:58 +10:00
454ba9b893 add crossOrigin = anonymous attribute to konva image 2023-05-24 10:32:41 +10:00
8e419a4f97 Revert weak references, as this can be done without them 2023-05-23 04:29:40 +03:00
2533209326 Rewrite cache to weak references 2023-05-23 03:48:22 +03:00
d2dc1ed26f make InvokeAI package installable
This commit makes InvokeAI 3.0 installable via PyPi.org and the
installer script.

Main changes.

1. Move static web pages into `invokeai/frontend/web` and modify the
API to look for them there. This allows pip to copy the files into the
distribution directory so that the user no longer has to be in the repo
root to launch.

2. Update invoke.sh and invoke.bat to launch the new web application
properly. This also changes the wording for launching the CLI from
"generate images" to "explore the InvokeAI node system," since I would
not recommend using the CLI to generate images routinely.

3. Fix a bug in the checkpoint converter script that was identified
during testing.

4. Better error reporting when checkpoint converter fails.

5. Rebuild front end.
2023-05-22 17:51:47 -04:00
d4fb16825e move static into invokeai.frontend.web directory for dist install 2023-05-22 16:48:17 -04:00
165c1adcf8 Merge branch 'main' into lstein/new-model-manager 2023-05-22 21:51:07 +03:00
650d69ef5b added optional middleware prop and new actions needed (#3437)
* added optional middleware prop and new actions needed

* accidental import

* make middleware an array

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-05-22 08:16:11 -04:00
ff0e79fa9a add id for invoke button 2023-05-19 21:44:31 +10:00
127b54f812 add some IDs 2023-05-19 21:44:31 +10:00
bdf33f13b3 fix bad merge in compel 2023-05-18 18:08:45 -04:00
27241cdde1 port more globals changes over 2023-05-18 17:17:45 -04:00
259d6ec90d fixup cachedir call 2023-05-18 14:52:16 -04:00
a77c4c87b2 fixed logic error in resolution of model path 2023-05-18 14:35:34 -04:00
d96175d127 resolve some undefined symbols in model_cache 2023-05-18 14:31:47 -04:00
7025c00581 Add configuration system, remove legacy globals, args, generate and CLI (#3340)
# Application-wide configuration service

This PR creates a new `InvokeAIAppConfig` object that reads
application-wide settings from an init file, the environment, and the
command line.

Arguments and fields are taken from the pydantic definition of the
model. Defaults can be set by creating a yaml configuration file that
has a top-level key of "InvokeAI" and subheadings for each of the
categories returned by `invokeai --help`.

The file looks like this:

[file: invokeai.yaml]
```
InvokeAI:
  Paths:
    root: /home/lstein/invokeai-main
    conf_path: configs/models.yaml
    legacy_conf_dir: configs/stable-diffusion
    outdir: outputs
    embedding_dir: embeddings
    lora_dir: loras
    autoconvert_dir: null
    gfpgan_model_dir: models/gfpgan/GFPGANv1.4.pth
  Models:
    model: stable-diffusion-1.5
    embeddings: true
  Memory/Performance:
    xformers_enabled: false
    sequential_guidance: false
    precision: float16
    max_loaded_models: 4
    always_use_cpu: false
    free_gpu_mem: false
  Features:
    nsfw_checker: true
    restore: true
    esrgan: true
    patchmatch: true
    internet_available: true
    log_tokenization: false
  Cross-Origin Resource Sharing:
    allow_origins: []
    allow_credentials: true
    allow_methods:
    - '*'
    allow_headers:
    - '*'
  Web Server:
    host: 127.0.0.1
    port: 8081

```

The default name of the configuration file is `invokeai.yaml`, located
in INVOKEAI_ROOT. You can use any OmegaConf dictionary by passing it to
the config object at initialization time:

```
 omegaconf = OmegaConf.load('/tmp/init.yaml')
 conf = InvokeAIAppConfig(conf=omegaconf)
```

By default, InvokeAIAppConfig will parse the contents of `sys.argv` at
initialization time. You may pass a list of strings in the optional
`argv` argument to use instead of the system argv:

```
conf = InvokeAIAppConfig(argv=['--xformers_enabled'])
```

It is also possible to set a value at initialization time. This value
has highest priority.
```
conf = InvokeAIAppConfig(xformers_enabled=True)
```
Any setting can be overwritten by setting an environment variable of the
form "INVOKEAI_<setting>", as in:

```
export INVOKEAI_port=8080
```

Order of precedence (from highest):
   1) initialization options
   2) command line options
   3) environment variable options
   4) config file options
   5) pydantic defaults
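
An illustrative combination of these sources, using only the options
documented above (values are arbitrary):

```
import os
from invokeai.app.services.config import InvokeAIAppConfig

os.environ['INVOKEAI_port'] = '9090'   # environment variable option
conf = InvokeAIAppConfig(
    argv=['--port', '8081'],           # command line option
    port=8080,                         # initialization option
)
print(conf.port)                       # 8080 - initialization wins
```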

Typical usage:

```
from invokeai.app.services.config import InvokeAIAppConfig

# get global configuration and print its nsfw_checker value
conf = InvokeAIAppConfig()
print(conf.nsfw_checker)
```
Finally, the configuration object is able to recreate its (modified)
yaml file, by calling its `to_yaml()` method:

```
conf = InvokeAIAppConfig(outdir='/tmp', port=8080)
print(conf.to_yaml())
```

# Legacy code removal and porting

This PR replaces Globals with the InvokeAIAppConfig system throughout,
and therefore removes the `globals.py` and `args.py` modules. It also
removes `generate` and the legacy CLI. ***The old CLI and web servers
are now gone.***

I have ported the functionality of the configuration script, the model
installer, and the merge and textual inversion scripts. The `invokeai`
command will now launch `invokeai-node-cli`, and `invokeai-web` will
launch the web server.

I have changed the continuous invocation tests to accommodate the new
command syntax in `invokeai-node-cli`. As a convenience, you
can also pass invocations to `invokeai-node-cli` (or its alias
`invokeai`) on the command line or via standard input:

```
invokeai-node-cli "t2i --positive_prompt 'banana sushi' --seed 42"
invokeai < invocation_commands.txt
```
2023-05-18 13:37:09 -04:00
b1a99d772c added method to convert vaes 2023-05-18 13:31:11 -04:00
7ea995149e fixes to env parsing, textual inversion & help text
- Make environment variable settings case InSenSiTive:
  INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
  environment variables will both set `max_loaded_models`

- Updated realesrgan to use new config system.

- Updated textual_inversion_training to use new config system.

- Discovered a race condition when InvokeAIAppConfig is created
  at module load time, which makes it impossible to customize
  or replace the help message produced with --help on the command
  line. To fix this, moved all instances of get_invokeai_config()
  from module load time to object initialization time. Makes code
  cleaner, too.

- Added `--from_file` argument to `invokeai-node-cli` and changed
  github action to match. CI tests will hopefully work now.
2023-05-18 10:48:23 -04:00
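A minimal sketch of the case-insensitive lookup described above (the helper name is hypothetical; the real implementation lives in the config system):

```python
import os
from typing import Optional

def get_env_setting(name: str) -> Optional[str]:
    # Matches INVOKEAI_MAX_LOADED_MODELS, InvokeAI_Max_Loaded_Models, etc.
    wanted = f"invokeai_{name}".lower()
    for key, value in os.environ.items():
        if key.lower() == wanted:
            return value
    return None

max_loaded = get_env_setting("max_loaded_models")
```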
fd82763412 Model manager draft 2023-05-18 03:56:52 +03:00
f9710dd6ed remove reference to legacy opt.hf_token, clean up whitespace in invokeai_configure 2023-05-17 20:39:00 -04:00
4e7dd7d3f6 ci: remove reference to Globals in a workflow 2023-05-17 20:26:26 -04:00
20ca9e1fc1 config: move 'CORS' settings to 'Web Server' in the docstring to match the actual category 2023-05-17 19:45:51 -04:00
8a8b09a953 api_app: rename web_config to app_config for consistency 2023-05-17 19:42:13 -04:00
9e4e386c9b web and formatting fixes
- remove non-existent import InvokeAIWebConfig
- fix workflow file formatting
- clean up whitespace
2023-05-17 19:12:03 -04:00
eca1e449a8 Merge branch 'lstein/global-configuration' of github.com:invoke-ai/InvokeAI into lstein/global-configuration 2023-05-17 15:23:21 -04:00
ffaadb9d05 reorder options in help text 2023-05-17 15:22:58 -04:00
8adff96e29 Merge branch 'main' into lstein/global-configuration 2023-05-17 14:37:09 -04:00
7593dc19d6 complete several steps needed to make 3.0 installable
- invokeai-configure updated to work with new config system
- migrate invokeai.init to invokeai.yaml during configure
- replace legacy invokeai with invokeai-node-cli
- add ability to run an invocation directly from invokeai-node-cli command line
- update CI tests to work with new invokeai syntax
2023-05-17 14:13:27 -04:00
b7c5a39685 make invokeai.yaml more hierarchical; fix list configuration bug 2023-05-17 12:19:19 -04:00
bd1b84f7d0 tell user to refresh page on image load error (#3425)
* refetch images list if error loading

* tell user to refresh instead of refetching

* unused import

* feat(ui): use `useAppToaster` to make toast

* fix(ui): clear selected/initial image on error

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-05-17 11:52:37 -04:00
eadfd239a8 update config script to work with new config system 2023-05-17 00:18:19 -04:00
e971a7f35c when migrating models.yaml, rename original models.yaml.orig 2023-05-16 22:37:53 -04:00
8d75e50435 partial port of invokeai-configure 2023-05-16 01:50:01 -04:00
6ab84741a0 fix(nodes): make ModelsList an enum-keyed dict
The `ModelsList` OpenAPI schema is generated as being keyed by plain strings. This means that API consumers do not know the shape of the dict. It _should_ be keyed by the `SDModelType` enum.

Unfortunately, `fastapi` does not actually handle this correctly yet; it still generates the schema with plain string keys.

Adding this anyways though in hopes that it will be resolved upstream and we can get the correct schema. Until then, I'll implement the (simple but annoying) logic on the frontend.

https://github.com/pydantic/pydantic/issues/4393
2023-05-16 15:02:58 +10:00
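A minimal sketch of the intended schema, with stand-in enum members (the real `SDModelType` values may differ):

```python
from enum import Enum
from typing import Dict

from pydantic import BaseModel

class SDModelType(str, Enum):
    # stand-in members for illustration only
    diffusers = "diffusers"
    vae = "vae"
    lora = "lora"

class ModelsList(BaseModel):
    # Keying by the enum documents the dict shape, even though fastapi
    # currently emits plain string keys in the generated OpenAPI schema.
    models: Dict[SDModelType, Dict[str, dict]]
```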
cd16857f38 fix None in model_type 2023-05-16 00:13:44 -04:00
1442f1cb8d change model filter to None in second place 2023-05-16 00:03:57 -04:00
eea0d6f7bc default to no filter in list_models() 2023-05-15 23:52:29 -04:00
1d9c115225 feat(nodes): add low and high to RandomIntInvocation 2023-05-16 13:50:52 +10:00
4fe94a9315 list_models() now returns a dict of {type,{name: info}} 2023-05-15 23:44:08 -04:00
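Illustrative shape of the returned structure (model names are examples; the type keys may be enum members rather than plain strings):

```python
models = model_manager.list_models()  # "model_manager" is hypothetical here
# {
#     "diffusers": {"stable-diffusion-1.5": {...info...}},
#     "vae":       {"sd-vae-ft-mse":        {...info...}},
# }
sd15_info = models["diffusers"]["stable-diffusion-1.5"]
```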
30af20a056 ui: cleanup (#3418)
- tidy up a lot of cruft
- `sampler` --> `scheduler`
2023-05-16 15:27:12 +12:00
cc21fb216c chore(ui): clean up GalleryPanel 2023-05-16 10:43:26 +10:00
6fe62a2705 feat(ui): sampler --> scheduler 2023-05-16 10:40:26 +10:00
da87378713 chore(ui): regen api client 2023-05-16 10:39:40 +10:00
b6f5267385 chore(ui): clean up generationSlice 2023-05-16 10:21:18 +10:00
f9e78d3c64 chore(ui): clean up gallerySlice 2023-05-16 10:16:36 +10:00
b7b5bd1b46 chore(ui): clean up uiSlice 2023-05-16 09:57:19 +10:00
9a3727d3ad chore(ui): clean up systemSlice 2023-05-16 09:48:58 +10:00
d68c14516c chore(ui): clean up persist denylists 2023-05-16 09:46:03 +10:00
9f4d39aa42 chore(ui): clean up modelSlice 2023-05-16 09:45:49 +10:00
84b801d88f ui: restore canvas and upload functionality (#3414)
- refactor image uploading, fix init image upload button 
- refactor toast and hotkey hooks into logical components
- restore canvas save/download/copy/merge functionality
- clean up unused files and packages
- fix canvas rendering issue resulting from fractional stage coords
2023-05-16 02:23:39 +12:00
2fc70c509b Merge branch 'main' into feat/ui/fix-uploading 2023-05-16 02:20:59 +12:00
34fb1c4b19 make conditioning.py work with compel 1.1.5 (#3383)
This PR fixes the ValueError issue that was preventing all prompts from
working.
2023-05-15 09:46:04 -04:00
80bdd550cf Merge branch 'main' into lstein/bugfix/compel 2023-05-15 09:25:21 -04:00
7ef0d2aa35 merge with main 2023-05-15 09:07:17 -04:00
2359b92b46 chore(ui): tidy unused component ref 2023-05-15 22:58:15 +10:00
a404fb2d32 docs(ui): update PACKAGE_SCRIPTS.md 2023-05-15 22:49:28 +10:00
513eb11616 chore(ui): clean up unused files/packages 2023-05-15 22:48:06 +10:00
d2c9140e69 feat(ui): restore save/copy/download/merge functionality 2023-05-15 22:21:03 +10:00
d95fe5925a feat(ui): restore image post-upload actions
eg set init image if on img2img when uploading
2023-05-15 18:52:48 +10:00
835922ea8f fix(ui): floor canvas coords to prevent partial pixel offset rendering issues 2023-05-15 18:50:34 +10:00
e1e5266fc3 feat(ui): refactor base image uploading logic 2023-05-15 17:45:05 +10:00
5e4457445f feat(ui): make toast/hotkey into logical components 2023-05-15 15:25:27 +10:00
0221ca8f49 fix(ui): use cloned canvas for retrieving dataURL/Blobs 2023-05-15 13:54:30 +10:00
c8f765cc06 improve debugging messages 2023-05-14 18:29:55 -04:00
cf36e4029e fix(ui): fix syntax error in the logo component flexbox 2023-05-15 08:24:33 +10:00
b9e9087dbe do not manage GPU for pipelines if sequential_offloading is True 2023-05-14 18:09:38 -04:00
63e465eb5c tweaks to get_model() behavior
1. If an external VAE is specified in config file, then
   get_model(submodel=vae) will return the external VAE, not the one
   burnt into the parent diffusers pipeline.

2. The mechanism in (1) is generalized such that you can now have
   "unet:", "text_encoder:" and similar stanzas in the config file.
   Valid formats of these subsections:

       unet:
          repo_id: foo/bar

       unet:
          path: /path/to/local/folder

       unet:
          repo_id: foo/bar
          subfolder: unet

    In the near future, these will also be used to attach external
    parts to the pipeline, generalizing VAE behavior.

3. Accommodate callers (i.e. the WebUI) that are passing the
   model key ("diffusers/stable-diffusion-1.5") to get_model()
   instead of the tuple of model_name and model_type.

4. Fixed bug in VAE model attaching code.

5. Rebuilt web front end.
2023-05-14 16:50:59 -04:00
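A minimal sketch of the key handling in item 3, assuming keys look like `diffusers/stable-diffusion-1.5` (the exact parsing is an assumption):

```python
from typing import Tuple, Union

def normalize_model_id(model_id: Union[str, Tuple[str, str]]) -> Tuple[str, str]:
    # Accept either a (model_name, model_type) tuple or a combined
    # "model_type/model_name" key of the kind the WebUI passes.
    if isinstance(model_id, str):
        model_type, model_name = model_id.split("/", 1)
        return model_name, model_type
    return model_id
```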
c8a98a9a22 Merge branch 'main' into lstein/bugfix/compel 2023-05-14 14:43:18 -04:00
38ecca9362 Logging Improvements (#3401)
This PR improves the logging module a tad bit along with the
documentation.

**New Look:**


![WindowsTerminal_XaijwCqFpo](https://github.com/invoke-ai/InvokeAI/assets/54517381/49a97411-1927-4a49-80ff-f4d9665be55f)

## Usage

**General Logger**

InvokeAI has a module level logger. You can call it this way.

In this below example, you will use the default logger `InvokeAI` and
all your messages will be logged under that name.

```python
from invokeai.backend.util.logging import logger

logger.critical("Critical Message")  # In Bold Red
logger.error("Error Message")  # In Red
logger.warning("Warning Message")  # In Yellow
logger.info("Info Message")  # In Grey
logger.debug("Debug Message")  # In Grey
```

Results:

```
[12-05-2023 20]::[InvokeAI]::CRITICAL --> Critical Message [In Bold Red]
[12-05-2023 20]::[InvokeAI]::ERROR --> Error Message [In Red]
[12-05-2023 20]::[InvokeAI]::WARNING --> Warning Message [In Yellow]
[12-05-2023 20]::[InvokeAI]::INFO --> Info Message [In Grey]
[12-05-2023 20]::[InvokeAI]::DEBUG --> Debug Message [In Grey]
```

**Custom Logger**

If you want to use a custom logger for your module, you can import it
the following way.

```python
from invokeai.backend.util.logging import logging
logger = logging.getLogger(name='Model Manager')

logger.critical("Critical Message")  # In Bold Red
logger.error("Error Message")  # In Red
logger.warning("Warning Message")  # In Yellow
logger.info("Info Message")  # In Grey
logger.debug("Debug Message")  # In Grey
```

Results:

```
[12-05-2023 20]::[Model Manager]::CRITICAL --> Critical Message [In Bold Red]
[12-05-2023 20]::[Model Manager]::ERROR --> Error Message [In Red]
[12-05-2023 20]::[Model Manager]::WARNING --> Warning Message [In Yellow]
[12-05-2023 20]::[Model Manager]::INFO --> Info Message [In Grey]
[12-05-2023 20]::[Model Manager]::DEBUG --> Debug Message [In Grey]
```

**When to use custom logger?**

It is recommended to use a custom logger if your module is not part of
base InvokeAI, for example custom extensions or nodes.
2023-05-15 02:18:20 +12:00
c4681774a5 Merge branch 'main' into logging-facelift 2023-05-15 02:08:29 +12:00
050add58d2 fix getting conditionings 2023-05-14 12:20:54 +02:00
3d60c958c7 ui: commercial fixes (#3409)
minor commercial fixes
2023-05-14 20:44:06 +12:00
f5df150097 feat(ui): add callback to signal app is ready
needed for commercial
2023-05-14 18:42:15 +10:00
dac82adb5b fix(ui): make logo component non-selectable 2023-05-14 18:41:11 +10:00
b72c9787a9 Revert "comment out customer_attention_context"
This reverts commit 8f8cd90787.

Due to NameError: name 'options' is not defined
2023-05-14 00:37:55 -04:00
426f4eaf7e adjusted regression tests to work with new SDModelTypes 2023-05-13 22:29:33 -04:00
2623941d91 Merge branch 'main' into lstein/bugfix/compel 2023-05-13 22:23:59 -04:00
baf5451fa0 Merge branch 'main' into lstein/new-model-manager 2023-05-13 22:01:34 -04:00
d3a7fea939 Revert "fix: Rework the layout of the parameters scrollbar"
This reverts commit 6f1fc397f7.
2023-05-14 11:45:08 +10:00
5a7b687c84 fix(ui): add missing packages 2023-05-14 11:45:08 +10:00
0020457fc7 fix(ui): tweak settings scheduler styling 2023-05-14 11:45:08 +10:00
658b556544 feat(ui): IAICustomSelect v2, implement for scheduler & model 2023-05-14 11:45:08 +10:00
37da0fc075 feat(ui): IAICustomSelect v1 2023-05-14 11:45:08 +10:00
6d3e8507cc fix(ui): fix "no image" fallbacks 2023-05-14 11:45:08 +10:00
0e9470503f fix: Rework the layout of the parameters scrollbar 2023-05-14 11:45:08 +10:00
d2ebc6741b feat: Add setting to hide / display schedulers 2023-05-14 11:45:08 +10:00
026d3260b4 Add Heun Karras Scheduler 2023-05-14 11:45:08 +10:00
1103ab2844 merge with main 2023-05-13 21:35:19 -04:00
11b2076b46 implement change to web_config suggested by ebr 2023-05-13 21:33:19 -04:00
b31a6ff605 fix reversed args in _model_key() call 2023-05-13 21:11:06 -04:00
1f602e6143 Fix - apply precision to text_encoder 2023-05-14 03:46:13 +03:00
039fa73269 Change SDModelType enum to string, fixes(model unload negative locks count, scheduler load error, saftensors convert, wrong logic in del_model, wrong parse metadata in web) 2023-05-14 03:06:26 +03:00
78533714e3 Merge branch 'main' into logging-facelift 2023-05-14 09:07:51 +12:00
691e1bf829 Make debug messages cyan/blue 2023-05-14 09:06:57 +12:00
2204e47596 allow submodels to be fetched independent of parent pipeline 2023-05-13 16:54:47 -04:00
d8b1f29066 proxy SDModelInfo so that it can be used directly as context 2023-05-13 16:29:18 -04:00
b23c9f1da5 get Tuple type hint syntax right 2023-05-13 14:59:21 -04:00
5e8e3cf464 correct typos in model_manager_service 2023-05-13 14:55:59 -04:00
72967bf118 convert add_model(), del_model(), list_models() etc to use bifurcated names 2023-05-13 14:44:44 -04:00
bc96727cbe Rewrite latent nodes to new model manager 2023-05-13 16:08:03 +03:00
3b2a054f7a Add model loader node; unet, clip, vae fields; change compel node to clip field 2023-05-13 04:37:20 +03:00
47a088d685 rehydrate selectedImage URL when results and uploads are fetched 2023-05-13 09:48:38 +10:00
63db3fc22f reduce queue check interval to 0.5s 2023-05-12 17:54:26 -04:00
ad0bb3f61a fix: queue error should not crash InvocationProcessor
1. If retrieving an item from the queue raises an exception, the
   InvocationProcessor thread crashes, but the API continues running in
   a non-functional state. This fixes the issue.
2. When there are no items in the queue, sleep 1 second before checking
   again.
3. Also ensures the thread isn't crashed if an exception is raised from
   the invoker, and emits the error event.

Intentionally using base Exceptions because for now we don't know which
specific exception to expect.

Fixes (sort of)? #3222
2023-05-12 17:54:26 -04:00
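A minimal sketch of the resilient loop described above (names such as `invoker` and its methods are hypothetical):

```python
import queue
import time
import traceback

def process_loop(work_queue: "queue.Queue", invoker) -> None:
    while True:
        try:
            item = work_queue.get(block=False)
        except queue.Empty:
            time.sleep(1)  # point 2: no items, sleep before checking again
            continue
        except Exception:
            traceback.print_exc()  # point 1: log and keep the thread alive
            continue
        try:
            invoker.invoke(item)  # hypothetical call
        except Exception as exc:
            invoker.emit_error(item, exc)  # point 3: emit the error event
```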
131145eab1 A big refactor of the model manager (IMHO) 2023-05-12 23:13:34 +03:00
4492044d29 Redo compel node to separate model loading 2023-05-12 23:09:33 +03:00
5431dd5f50 Fix event args 2023-05-12 23:08:03 +03:00
79fecba274 Fix model manager initialization in web ui 2023-05-12 23:05:08 +03:00
8f8cd90787 comment out customer_attention_context 2023-05-12 13:59:00 -04:00
d796ea7bec feat: Logging Improvements 2023-05-13 02:13:49 +12:00
e5b7dd63e9 fix(nodes): temporarily disable librarygraphs
- Do not retrieve graph from DB until we resolve the issue of changing node schemas causing application to fail to start up due to invalid graphs
2023-05-12 22:33:49 +10:00
af060188bd Merge branch 'main' into lstein/bugfix/compel 2023-05-12 08:22:18 -04:00
4270e7ae25 Feat/ui/improve-language (#3399) 2023-05-12 23:32:50 +12:00
60a565d7de feat(ui): use chakra menu for theme changer 2023-05-12 20:04:29 +10:00
78cf70eaad fix(ui): tweak lang picker style 2023-05-12 20:04:10 +10:00
eebaa50710 fix(ui): fix language picker tooltip 2023-05-12 19:52:21 +10:00
7d582553f2 feat(ui): use chakra menu for language picker 2023-05-12 19:50:34 +10:00
4d6eea7e81 feat(ui): store language in redux 2023-05-12 19:35:03 +10:00
f44593331d ui: misc fixes (#3398)
- do not show canvas intermediates in gallery
- do not show progress image in uploads gallery category
- use custom dark mode `localStorage` key (prevents collision with
commercial)
- use variable font (reduce bundle size by factor of 10)
- change how custom headers are used
- use style injection for building package
- fix tab icon sizes
2023-05-12 21:00:47 +12:00
3d9ecbf3c7 fix(ui): add missing package 2023-05-12 18:55:59 +10:00
032aa1d59c fix(ui): excise most zIndexes
our stacking contexts are accurate, `zIndex` isn't needed
2023-05-12 18:50:54 +10:00
35e0863bdb fix(ui): fix tab icon sizes 2023-05-12 17:56:18 +10:00
14070d674e build(ui): add style injection plugin
When building for the package, CSS is all in JS files. When used as a package, it is then injected into the page. A bit of a hack to fix missing CSS in the commercial product.
2023-05-12 17:56:18 +10:00
108ce06c62 feat(ui): change custom header to be a prop instead of children 2023-05-12 17:56:18 +10:00
da364f3444 feat(ui): use variable font
reduces package build's CSS by an order of magnitude
2023-05-12 17:56:18 +10:00
df5ba75c14 feat(ui): use custom dark mode localStorage key 2023-05-12 17:56:18 +10:00
e4fb9cb33f chore(ui): regen api client 2023-05-12 17:56:18 +10:00
65b527eb20 fix(ui): do not show progress images in uploads gallery category 2023-05-12 17:56:18 +10:00
7dc9d18052 fix(ui): do not show intermediates uploads in gallery 2023-05-12 17:56:18 +10:00
2ef79b8bf3 fix bug in persistent model scheme 2023-05-12 00:14:56 -04:00
5013a4b9f3 feat(ui): expand config options (#3393)
now may disable individual SD features eg Noise, Variation, etc - stuff
which is not ready for consumption in commercial.
2023-05-12 16:10:17 +12:00
f929359322 Merge branch 'main' into feat/ui/expand-config 2023-05-12 16:06:31 +12:00
6522c71971 feat(nodes): add RandomIntInvocation (#3390)
just outputs a single random int
2023-05-12 16:06:06 +12:00
9c1e65f3a3 Merge branch 'main' into feat/nodes/add-randomintinvocation 2023-05-12 15:56:41 +12:00
ebec200ba6 Remove unused import 2023-05-12 13:56:02 +10:00
e559730b6e feat(nodes): add w/h to latents outputs (#3389)
This reduces the number of nodes needed when working with latents (ie
fewer plain integer value nodes)

Also correct a few mistakes in the fields
2023-05-12 15:40:46 +12:00
11ecf438f5 latents.py converted to use model manager service; events emitted 2023-05-11 23:33:24 -04:00
0acb8ed85d Merge branch 'main' into feat/nodes/add-w-h-latentsoutput 2023-05-12 15:23:29 +12:00
8c1c9cd702 Merge branch 'main' into feat/nodes/add-randomintinvocation 2023-05-12 15:21:49 +12:00
0ece4686aa fix(nodes): remove Optionals on ImageOutputs (#3392) 2023-05-12 15:21:42 +12:00
af95cef7f9 Merge branch 'main' into fix/nodes/fix-imageoutput-optionals 2023-05-12 15:08:19 +12:00
1eca7a918a feat(ui): make core parameters layout consistent (#3394) 2023-05-12 15:08:07 +12:00
9e6b958023 Merge branch 'main' into feat/ui/consistent-param-layout 2023-05-12 15:06:16 +12:00
f7b99d93ae docs(ui): update ui readme (#3396) 2023-05-12 15:05:55 +12:00
85d03dcd90 Merge branch 'main' into docs/ui/update-ui-readme 2023-05-12 15:04:12 +12:00
032555bcfe fix(model manager): fix string formatting error on model checksum timer (#3397)
The error occurs when loading a model for the first time (or after
removing its checksum file, probably).
2023-05-12 15:04:01 +12:00
4caa1f19b2 fix(model manager): fix string formatting error on model checksum timer 2023-05-11 19:06:02 -07:00
df5b968954 model manager now running as a service 2023-05-11 21:24:29 -04:00
95d4bd3012 Merge branch 'lstein/bugfix/compel' of github.com:invoke-ai/InvokeAI into lstein/bugfix/compel 2023-05-11 21:13:29 -04:00
037078c8ad make InvokeAIDiffuserComponent.custom_attention_control a classmethod 2023-05-11 21:13:18 -04:00
6de2f66b50 docs(ui): update ui readme 2023-05-12 11:11:59 +10:00
cd7b248eda Add UniPC / Euler Karras / DPMPP_2 Karras / DEIS / DDPM Schedulers (#3388)
**Features:**

- Add UniPC Scheduler
- Add Euler Karras Scheduler
- Add DPMPP_2 Karras Scheduler
- Add DEIS Scheduler
- Add DDPM Scheduler

**Other:**

- Renamed schedulers to their accurate names: _a = Ancestral, _k =
Karras
- Fix scheduler not defaulting correctly to DDIM.
- Code split SCHEDULER_MAP so its consistently loaded from the same
place.

**Known Bugs:**

- dpmpp_2s not working in img2img for denoising values < 0.8. This
seems to be an upstream bug, so I've disabled it in img2img and canvas
until it is fixed upstream:
https://github.com/huggingface/diffusers/issues/1866
2023-05-12 09:06:22 +12:00
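A minimal sketch of what a code-split SCHEDULER_MAP can look like, using real diffusers classes (the exact entries and option names in InvokeAI may differ):

```python
from diffusers import (
    DDIMScheduler,
    DEISMultistepScheduler,
    EulerDiscreteScheduler,
    UniPCMultistepScheduler,
)

# name -> (scheduler class, extra config); "_k" selects Karras sigmas
SCHEDULER_MAP = {
    "ddim": (DDIMScheduler, {}),
    "deis": (DEISMultistepScheduler, {}),
    "euler": (EulerDiscreteScheduler, {}),
    "euler_k": (EulerDiscreteScheduler, {"use_karras_sigmas": True}),
    "unipc": (UniPCMultistepScheduler, {}),
}

def make_scheduler(name: str, config: dict):
    cls, extra = SCHEDULER_MAP.get(name, SCHEDULER_MAP["ddim"])  # default to DDIM
    return cls.from_config({**config, **extra})
```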
6d8c077f4e Merge branch 'main' into unipc-sched 2023-05-12 05:59:13 +12:00
97127e560e Disable dpmpp_2s in img2img & unifiedCanvas
... until upstream bug is fixed.
2023-05-12 04:51:58 +12:00
27dc07d95a Set zero eta by default(fix ddim scheduler error) 2023-05-11 18:49:27 +03:00
f7dc171c4f Rename default schedulers across the app 2023-05-12 03:44:20 +12:00
4b957edfec Add DDPM Scheduler 2023-05-12 03:18:34 +12:00
46ca7718d9 Add DEIS Scheduler 2023-05-12 03:10:30 +12:00
b928d7a6e6 Change scheduler names to be accurate
_a = Ancestral
_k = Karras
2023-05-12 02:59:43 +12:00
8a836247c8 Add DPMPP Single, Euler Karras and DPMPP2 Multi Karras Schedulers 2023-05-12 02:23:33 +12:00
95c3644564 fix it again 2023-05-12 00:10:39 +10:00
799cd07174 feat(ui): make core parameters layout consistent 2023-05-11 22:45:53 +10:00
9af385468d feat(ui): expand config options
now may disable individual SD features eg Noise, Variation, etc - stuff which is not ready for consumption in commercial.
2023-05-11 22:42:13 +10:00
3487388788 Merge branch 'unipc-sched' of https://github.com/blessedcoolant/InvokeAI into unipc-sched 2023-05-12 00:40:24 +12:00
9a383e456d Codesplit SCHEDULER_MAP for reusage 2023-05-12 00:40:03 +12:00
805f9f8f4a Merge branch 'main' into unipc-sched 2023-05-12 00:24:55 +12:00
52aa0c9bbd ui: miscellaneous fixes (#3386) 2023-05-12 00:21:29 +12:00
7f5f4689cc fix(ui): clear progress image on cancel 2023-05-11 22:20:37 +10:00
a3f81f4b98 fix(ui): fix results not displaying
- fix for commercial product
2023-05-11 22:20:37 +10:00
15c59e606f feat(ui): add spinner to gallery progress images
- otherwise you may think you can click it but you cannot
2023-05-11 22:20:37 +10:00
40d4cabecd feat(ui): improve image overlay 2023-05-11 22:20:37 +10:00
3493c8119b feat(ui): improve image preview css and fallback 2023-05-11 22:20:30 +10:00
c1e7460d39 Merge branch 'main' into unipc-sched 2023-05-12 00:11:09 +12:00
3ffff023b2 Add missing key to scheduler_map
It was breaking because the sampler was not being reset, so each entry needs a key. Will simplify this later.
2023-05-12 00:08:50 +12:00
f9384be59b fix(ui): fix init image causing overflow 2023-05-11 20:55:30 +10:00
6cf308004a fix(nodes): remove Optionals on ImageOutputs 2023-05-11 20:54:57 +10:00
d1029138d2 Default to DDIM if scheduler is missing 2023-05-11 22:54:35 +12:00
06b5800d28 Add UniPC Scheduler 2023-05-11 22:43:18 +12:00
483f2ccb56 feat(nodes): add RandomIntInvocation
just outputs a single random int
2023-05-11 20:33:32 +10:00
93ced0bec6 feat(nodes): add w/h to latents outputs
This reduces the number of nodes needed when working with latents (ie fewer plain integer value nodes)

Also correct a few mistakes in the fields
2023-05-11 20:32:55 +10:00
4333852c37 fix(nodes): fix missing context arg in LatentsToLatents 2023-05-11 19:28:42 +10:00
3baa230077 Merge branch 'main' into lstein/bugfix/compel 2023-05-11 00:50:45 -04:00
9e594f9018 pad conditioning tensors to same length
fixes crash when prompt length is greater than 75 tokens
2023-05-11 00:34:15 -04:00
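A minimal sketch of the padding idea, using zeros for illustration (the actual fix may pad with the embedding of an empty or pad token instead):

```python
import torch
import torch.nn.functional as F

def pad_conditioning(c: torch.Tensor, uc: torch.Tensor):
    # Conditioning tensors are [batch, tokens, dim]; pad the shorter one
    # along the token axis so both end up the same length.
    n_tokens = max(c.shape[1], uc.shape[1])

    def pad(t: torch.Tensor) -> torch.Tensor:
        # F.pad pads trailing dims first: (dim_l, dim_r, token_l, token_r)
        return F.pad(t, (0, 0, 0, n_tokens - t.shape[1]))

    return pad(c), pad(uc)
```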
8ad8c5c67a resolve conflicts with main 2023-05-11 00:19:20 -04:00
590942edd7 Merge branch 'main' into lstein/new-model-manager 2023-05-11 00:16:03 -04:00
4627910c5d added a wrapper model_manager_service and model events 2023-05-11 00:09:19 -04:00
b0c41b4828 filter our websocket errors (#3382)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-05-11 01:58:40 +00:00
e0d6946b6b fix(nodes): fix metadata test
- `progress_images` is no longer a parameter
- `seamless` needs to be reworked as a model config, removed as a param
2023-05-11 11:55:51 +10:00
bf7ea8309f fix(ui): change tab to img2img when selected initial image 2023-05-11 11:55:51 +10:00
54b65f725f fix(ui): rescale canvas on gallery resize 2023-05-11 11:55:51 +10:00
8ef49c2640 fix(ui): fix canvas img2img if no init image selected 2023-05-11 11:55:51 +10:00
f488b1a7f2 fix(nodes): fix usage of Optional 2023-05-11 11:55:51 +10:00
d2edb7c402 build(ui): add yalc to gitignore 2023-05-11 11:55:51 +10:00
f0a3f07b45 feat(ui): antialias progress images 2023-05-11 11:55:51 +10:00
b42b630583 fix(ui): h/w disabled bug 2023-05-11 11:55:51 +10:00
31a78d571b feat(ui): canvas antialiasing 2023-05-11 11:55:51 +10:00
fdc2232ea0 feat(ui): progress images in gallery and viewer 2023-05-11 11:55:51 +10:00
e94d0b2d40 fix(ui): fix janky gallery image delete 2023-05-11 11:55:51 +10:00
75ccbaee9c fix(ui): disable invoke button as soon as pressed 2023-05-11 11:55:51 +10:00
2848c8397c fix(ui): fix missing images on reload issue
- Mainly an issue for commercial due to incomplete metadata handling
2023-05-11 11:55:51 +10:00
fe8b5193de feat(ui): half-baked use all parameters
until we have a better system for metadata, this will remain half-baked
2023-05-11 11:55:51 +10:00
3d1470399c fix(ui): fix metadataviewer styling 2023-05-11 11:55:51 +10:00
fcf9c63049 fix(ui): fix copying image link 2023-05-11 11:55:51 +10:00
7bfb5640ad cleanup(ui): Remove unused vars + minor bug fixes 2023-05-11 11:55:51 +10:00
15e57e3a3d fix(ui): duplicate gallery in nodes editor 2023-05-11 11:55:51 +10:00
279468c0e8 feat(ui): restore tab names 2023-05-11 11:55:51 +10:00
c565812723 feat(ui): organize parameters panels 2023-05-11 11:55:51 +10:00
ec6c8e2a38 feat(ui): wip layout 2023-05-11 11:55:51 +10:00
77f2690711 fix(ui): remove duplicate gallery 2023-05-11 11:55:51 +10:00
c4b3a24ed7 feat(ui): revert tabs to txt2img/img2img 2023-05-11 11:55:51 +10:00
33c69359c2 feat(ui): add IAICollapse for parameters 2023-05-11 11:55:51 +10:00
864f4bb4af feat(ui): wip img2img layouting 2023-05-11 11:55:51 +10:00
5365f42a04 feat(ui): wip layouting 2023-05-11 11:55:51 +10:00
3dc60254b9 feat(ui): support collect nodes 2023-05-11 11:55:51 +10:00
027a8562d7 fix(ui): default node model selection 2023-05-11 11:55:51 +10:00
34f3a0f0e3 feat(nodes): improve default model choosing output 2023-05-11 11:55:51 +10:00
d0bac1675e fix(nodes): fix ImageOutput Config 2023-05-11 11:55:51 +10:00
4e56c962f4 fix(nodes): fix infill docstrings 2023-05-11 11:55:51 +10:00
4ef0e43759 fix(nodes): remove dataURL invocation 2023-05-11 11:55:51 +10:00
6945d10297 chore(ui): regen api client 2023-05-11 11:55:51 +10:00
4d6cef7ac8 fix(ui): fix types bug 2023-05-11 11:55:51 +10:00
a7786d5ff2 fix(nodes): restore seamless to TextToLatents 2023-05-11 11:55:51 +10:00
6c1de975d9 feat(nodes): add infill nodes 2023-05-11 11:55:51 +10:00
a1079e455a feat(nodes): cleanup unused params, seed generation 2023-05-11 11:55:51 +10:00
5457c7f069 fix(ui): use lodash-es instead of lodash 2023-05-11 11:55:51 +10:00
b8c1a3f96c chore(ui): remove unused babelrc & npm script 2023-05-11 11:55:51 +10:00
cee8e85f76 chore(ui): bump redux-remember 2023-05-11 11:55:51 +10:00
09f166577e feat(ui): migrate to redux-remember 2023-05-11 11:55:51 +10:00
bcc21531fb feat(ui): update for InfillInvocation 2023-05-11 11:55:51 +10:00
da4eacdffe feat(nodes): add InfillInvocation 2023-05-11 11:55:51 +10:00
6102e560ba feat(nodes): add LatentsToImage node (VAE encode) 2023-05-11 11:55:51 +10:00
ff3aa57117 feat(ui): fix endless gallery scroll for single col layout 2023-05-11 11:55:51 +10:00
49db6f4fac fix(nodes): fix trivial typing issues 2023-05-11 11:55:51 +10:00
20f6a597ab fix(nodes): add MetadataColorField 2023-05-11 11:55:51 +10:00
04c453721c feat(ui): tweak gallery loading indicator 2023-05-11 11:55:51 +10:00
350ffecc1f feat(ui): endless gallery scroll 2023-05-11 11:55:51 +10:00
b0557aa16b fix(ui): fix currentimagepreview not working for uploads 2023-05-11 11:55:51 +10:00
1c9429a6ea feat(ui): wip canvas 2023-05-11 11:55:51 +10:00
206e6b1730 feat(nodes): wip inpaint node 2023-05-11 11:55:51 +10:00
357cee2849 fix(nodes): fix cfg scale min value 2023-05-11 11:55:51 +10:00
0b49997bb6 feat(nodes): allow uploaded images to be any ImageType (eg intermediates) 2023-05-11 11:55:51 +10:00
5e09dd380d Revert "feat(nodes): free gpu mem after invocation"
This reverts commit 99cb33f477306d5dcc455efe04053ce41b8d85bd.
2023-05-11 11:55:51 +10:00
c7303adb0d feat(ui): fix generation mode logic 2023-05-11 11:55:51 +10:00
ed1f096a6f feat(ui): wip canvas migration 4 2023-05-11 11:55:51 +10:00
6ab5d28cf3 feat(ui): wip canvas migration, createListenerMiddleware 2023-05-11 11:55:51 +10:00
a75148cb16 feat(nodes): free gpu mem after invocation 2023-05-11 11:55:51 +10:00
f7bbc4004a feat(ui): wip canvas nodes migration 3 2023-05-11 11:55:51 +10:00
cee21ca082 feat(ui): wip canvas nodes migration 2 2023-05-11 11:55:51 +10:00
08ec12b391 feat(ui): wip canvas nodes migration 2023-05-11 11:55:51 +10:00
ff5e2a9a8c chore(ui): regen api client 2023-05-11 11:55:51 +10:00
e0b9b5cc6c feat(nodes): add dataURL to image node 2023-05-11 11:55:51 +10:00
aca4770481 fixed compel.py as requested 2023-05-10 21:40:44 -04:00
5d5157fc65 make conditioning.py work with compel 1.1.5 2023-05-10 18:08:33 -04:00
fb6ef61a4d change path for locale (#3381)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-05-10 10:30:17 -04:00
ee24ad7b13 fix(nodes): fix broken docs routes 2023-05-10 08:28:17 -04:00
f8e90ba3f0 feat(nodes): add ui build static route 2023-05-10 08:28:17 -04:00
ad0b70ca23 fix(nodes): fix #3306 (#3377)
Check if the cache has the object before deleting it.
2023-05-10 17:39:45 +12:00
7dfa135b2c fix(nodes): fix #3306
Check if the cache has the object before deleting it.
2023-05-10 15:29:10 +10:00
beeaa05658 Update dependencies to get deterministic image generation behavior (main branch) (#3354)
This PR updates to `xformers ~= 0.0.19` and `torch ~= 2.0.0`, which
together seem to solve the non-deterministic image generation issue that
was previously seen with earlier versions of `xformers`.
2023-05-10 00:10:51 -04:00
fa6a580452 merge with main 2023-05-10 00:03:32 -04:00
6b6d654f60 Merge branch 'main' into enhance/update-dependencies 2023-05-09 23:56:46 -04:00
99c692f397 check that model name matches format 2023-05-09 23:46:59 -04:00
3d85e769ce clean up ckpt handling
- remove legacy ckpt loading code from model_cache
- added placeholders for lora and textual inversion model loading
2023-05-09 22:44:58 -04:00
9cb962cad7 ckpt model conversion now done in ModelCache 2023-05-08 23:39:44 -04:00
853c83d0c2 surface detail field for 403 errors 2023-05-09 12:40:19 +10:00
a108155544 added StALKeR779's great model size calculating routine 2023-05-08 21:47:03 -04:00
1809990ed4 if backend returns an error, show it in toast 2023-05-09 11:09:36 +10:00
79d49853d2 use websocket transport first for socket.io 2023-05-09 11:01:02 +10:00
1f608d3743 add v2.3 branch to push trigger (#3363)
Update the push trigger with the branch which should deploy the docs,
also bring over the updates to the workflow from the v2.3 branch and:

- remove main and development branch from trigger
  - they would fail without the updated toml
- cache pip environment
- update install method (`pip install ".[docs]"`)
2023-05-08 16:26:06 -04:00
df024dd982 bring changes from v2.3 branch over
- remove main and development branch from trigger
  - they would fail without the updated toml
- cache pip environment
- update install method
2023-05-08 21:50:00 +02:00
45da85765c add v2.3 branch to push trigger 2023-05-08 21:10:20 +02:00
c15b49c805 implement StALKeR7779 requested API for fetching submodels 2023-05-07 23:18:17 -04:00
fd63e36822 optimize subfolder so that it returns submodel if parent is in RAM 2023-05-07 21:39:11 -04:00
4649920074 adjust t2i to work with new model structure 2023-05-07 19:06:49 -04:00
667171ed90 cap model cache size using bytes, not # models 2023-05-07 18:07:28 -04:00
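A minimal sketch of a byte-capped LRU cache of the kind described (the interface is an assumption):

```python
from collections import OrderedDict
from typing import Any

class ByteCappedModelCache:
    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self._items: "OrderedDict[str, tuple]" = OrderedDict()
        self._total_bytes = 0

    def put(self, key: str, model: Any, size_bytes: int) -> None:
        if key in self._items:
            self._total_bytes -= self._items.pop(key)[1]
        self._items[key] = (model, size_bytes)
        self._total_bytes += size_bytes
        # Evict least-recently-used entries, but keep the one just added.
        while self._total_bytes > self.max_bytes and len(self._items) > 1:
            _, (_, evicted_size) = self._items.popitem(last=False)
            self._total_bytes -= evicted_size

    def get(self, key: str) -> Any:
        model, size = self._items.pop(key)  # KeyError if absent
        self._items[key] = (model, size)    # re-insert as most recent
        return model
```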
bd0ad59c27 bump compel version 2023-05-07 15:22:46 -04:00
cce40acba5 Merge branch 'enhance/update-dependencies' of github.com:invoke-ai/InvokeAI into enhance/update-dependencies 2023-05-07 15:22:31 -04:00
bc9491ab69 bump compel version 2023-05-07 15:21:24 -04:00
f28632980d Merge branch 'main' into lstein/global-configuration 2023-05-07 07:52:46 -04:00
b909bac0dc Merge branch 'main' into enhance/update-dependencies 2023-05-07 21:44:43 +12:00
8618e41b32 Deploy documentation from v2.3 branch rather than main (#3356)
This PR instructs github to deploy documentation pages from the v2.3
branch.
2023-05-07 21:43:44 +12:00
4687f94141 Merge branch 'main' into actions/mkdocs-deploy 2023-05-07 21:43:18 +12:00
440912dcff feat(ui): make base log level debug 2023-05-07 15:36:37 +10:00
8b87a26e7e feat(ui): support collect nodes 2023-05-07 15:36:37 +10:00
44ae93df3e Deploy documentation from v2.3 branch rather than main 2023-05-06 23:56:04 -04:00
42d938fda5 remove debugging statement 2023-05-06 23:54:11 -04:00
8f80ba9520 update dependencies to get deterministic image generation 2023-05-06 23:09:24 -04:00
25ce47c44f remove reference to globals in compel.py 2023-05-06 22:49:35 -04:00
647ffb2a0f defined abstract baseclass for model manager service 2023-05-06 22:41:19 -04:00
afd2e32092 Merge branch 'main' into lstein/global-configuration 2023-05-06 21:20:25 -04:00
05a27bda5e generalize model loading support, include loras/embeds 2023-05-06 15:58:44 -04:00
2b213da967 add -y to the automated install instructions (#3349)
Hi there, love the project! I noticed a small typo when going over the
install process.

When copying the automated install instructions from the docs into a
terminal, the line to install the Python packages failed as it was
missing the `-y` flag.
2023-05-06 13:34:37 -04:00
e91e1eb9aa Merge branch 'main' into patch-1 2023-05-06 13:34:12 -04:00
b24129fb3e Fix logger namespace clash in web server (#3344)
This PR fixes a bug that appeared in the legacy web server after the
logging PR was merged.

closes #3343
2023-05-06 08:35:13 -04:00
350b1421bb Merge branch 'main' into lstein/bugfix/logger-namespace 2023-05-06 08:14:44 -04:00
a8cfa3565c Merge branch 'lstein/new-model-manager' of github.com:invoke-ai/InvokeAI into lstein/new-model-manager 2023-05-06 08:14:15 -04:00
e0214a32bc mostly ported to new manager API; needs testing 2023-05-06 00:44:12 -04:00
f01c79a94f add -y to the automated install instructions
When copying the automated install instructions from the docs into a terminal, the line to install the Python packages failed as it was missing the `-y` flag.
2023-05-05 21:28:00 -04:00
463f6352ce Add compel node and conditioning field type (#3265)
Done as described in the title, but I still need to test (and understand) how the
CLI works, as it previously used a single prompt and now there are positive and negative prompts.
2023-05-06 13:05:04 +12:00
af8c7c7d29 model manager rewritten to use model_cache; API changed! 2023-05-05 19:32:28 -04:00
a80fe05e23 Rename compel node 2023-05-05 21:30:16 +03:00
58d7833c5c Review changes 2023-05-05 21:09:29 +03:00
5012f61599 Separate conditionings back to positive and negative 2023-05-05 15:47:51 +03:00
a4e36bc02a when model is forcibly moved into RAM update loaded_models set 2023-05-04 23:28:03 -04:00
2e9bec15e7 Merge branch 'main' into lstein/new-model-manager 2023-05-04 23:19:38 -04:00
68bc0112fa implement lazy GPU offloading and ref counting 2023-05-04 23:15:32 -04:00
85c33823c3 Merge branch 'main' into feat/compel_node 2023-05-05 14:41:45 +12:00
c83a112669 Fix inpaint node (#3284)
Seems like this is the only change needed for the existing inpaint code
to work as a node. Kyle said on Discord that inpaint shouldn't be a
node, so feel free to just reject this if this code is going to be gone
soon.
2023-05-05 14:41:13 +12:00
e04ada1319 Merge branch 'main' into patch-1 2023-05-05 10:38:45 +10:00
d866dcb3d2 close #3343 2023-05-04 20:30:59 -04:00
81ec476f3a Revert seed field addition 2023-05-04 21:50:40 +03:00
1e6adf0a06 Fix default graph and test 2023-05-04 21:14:31 +03:00
7d221e2518 Combine conditioning to one field(better fits for multiple type conditioning like perp-neg) 2023-05-04 20:14:22 +03:00
742ed19d66 add missing config module 2023-05-04 01:20:30 -04:00
29c2ada23c add test for the configuration module 2023-05-04 00:45:52 -04:00
e4196bbe5b adjust non-app modules to use new config system 2023-05-04 00:43:51 -04:00
15ffb53e59 remove globals, args, generate and the legacy CLI 2023-05-03 23:36:51 -04:00
90054ddf0d use InvokeAISettings for app-wide configuration 2023-05-03 22:30:30 -04:00
a273bdbdc1 Merge branch 'main' into lstein/new-model-manager 2023-05-03 18:09:29 -04:00
56d3cbead0 Merge branch 'main' into feat/compel_node 2023-05-04 00:28:33 +03:00
5e8c97f1ba [Enhancement] Regularize logging messages (#3176)
# Intro

This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions:

```
 ### A critical error
 *** A non-fatal error
 ** A warning
  >> Informational message
        | Debugging message
```

Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
modules' use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add InvokeAI's
informational message decorations, while the InvokeAI logger's
`error("foo")` will add the decorations.
    
# Usage:

This is a thin wrapper around the standard Python logging module. It can
be used in several ways:


## Module-level logging style
 
This style logs everything through a single default logging object and
is identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-thrus to
logging:
    
```
      import invokeai.backend.util.logging as logger
    
      logger.debug('this is a debugging message')
      logger.info('this is a informational message')
      logger.log(logging.CRITICAL, 'get out of dodge')

      logger.disable(level=logging.INFO)
      logger.basicConfig(filename='/var/log/invokeai.log')
      logger.error('this will be logged to console and to invokeai.log')
```    

Internally these functions all go through a custom logging object named
"invokeai". You can access it to perform additional customization in
either of these ways:

```
logger = logger.getLogger()
logger = logger.getLogger('invokeai')
```
    
## Object-oriented style

For more control, the logging module's object-oriented logging style is
also supported. The API is identical to the vanilla logging usage. In
fact, the only thing that has changed is that the getLogger() method
adds a custom formatter to the log messages.
    
```
     import logging
     from invokeai.backend.util.logging import InvokeAILogger
    
     logger = InvokeAILogger.getLogger(__name__)
     fh = logging.FileHandler('/var/invokeai.log')
     logger.addHandler(fh)
     logger.critical('this will be logged to both the console and the log file')
```

## Within the nodes API

From within the nodes API, the logger module is stored in the `logger`
slot of InvocationServices during dependency initialization. For
example, in a router, the idiom is:

```
from ..dependencies import ApiDependencies
logger = ApiDependencies.invoker.services.logger
logger.warning('uh oh')
```

Currently, to change the logger used by the API, one must change the
logging module passed to `ApiDependencies.initialize()` in `api_app.py`.
However, this will eventually be replaced with a method to select the
preferred logging module using the configuration file (dependent on
merging of PR #3221)
2023-05-03 15:00:05 -04:00
4687ad4ed6 Merge branch 'main' into enhance/invokeai-logs 2023-05-03 13:36:06 -04:00
8a0ec0fa0f Merge branch 'main' into lstein/new-model-manager 2023-05-03 13:30:50 -04:00
e1fed52c66 work on model cache and its regression test finished 2023-05-03 12:38:18 -04:00
994b247f8e feat(ui): do not persist gallery images
- I've sorted out the issues that make *not* persisting troublesome; these will be rolled out with canvas
- Also realized that persisting gallery images very quickly fills up localStorage, so we can't really do it anyways
2023-05-03 23:41:48 +10:00
bb959448c1 implement hashing for local & remote models 2023-05-02 16:52:27 -04:00
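A minimal sketch of one fast way to fingerprint a local model folder (the actual hashing scheme, and the handling of remote models, is not shown and may differ):

```python
import hashlib
from pathlib import Path

def local_model_hash(model_dir: Path) -> str:
    # Hash file names and sizes rather than full contents, so large
    # checkpoints can be fingerprinted quickly; hashing full contents is
    # the stricter (and much slower) alternative.
    h = hashlib.sha256()
    for f in sorted(model_dir.rglob("*")):
        if f.is_file():
            h.update(f.name.encode())
            h.update(str(f.stat().st_size).encode())
    return h.hexdigest()
```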
0419f50ab0 chore(ui): bump react-virtuoso
- Resolves an issue with gallery not rendering all items
2023-05-02 20:15:29 +10:00
f9f40adcdc fix(nodes): fix t2i graph
Removed width and height edges.
2023-05-02 13:11:28 +10:00
2e2abf6ea6 caching of subparts working 2023-05-01 22:57:30 -04:00
3264d30b44 feat(nodes): allow multiples of 8 for dimensions 2023-05-02 12:01:52 +10:00
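A minimal sketch of declaring such a constraint with pydantic (field names are illustrative):

```python
from pydantic import BaseModel, Field

class LatentsSize(BaseModel):
    # Latent-space operations need pixel dimensions divisible by 8.
    width: int = Field(default=512, ge=64, multiple_of=8)
    height: int = Field(default=512, ge=64, multiple_of=8)
```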
4d885653e9 feat(ui): tidy 2023-05-02 11:27:08 +10:00
475b6bef53 feat(ui): use windowing for gallery
vastly improves the gallery performance when many images are loaded.

- `react-virtuoso` to do the virtualized list
- `overlayscrollbars` for a scrollbar
2023-05-02 11:27:08 +10:00
d39de0ad38 fix(nodes): fix duplicate Invoker start/stop events 2023-05-01 18:24:37 -04:00
d14a7d756e nodes-api: enforce single thread for the processor
On hyperthreaded CPUs we get two threads operating on the queue by
default on each core. This causes two threads to process queue items.
This results in pytorch errors and sometimes generates garbage.

Locking this to single thread makes sense because we are bound by the
number of GPUs in the system, not by CPU cores. And to parallelize
across GPUs we should just start multiple processors (and use async
instead of threading)

Fixes #3289
2023-05-01 18:24:37 -04:00
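A minimal sketch of pinning queue processing to a single consumer thread (names are hypothetical):

```python
import queue
import threading

work_queue: "queue.Queue" = queue.Queue()

def worker() -> None:
    while True:
        item = work_queue.get()
        run_invocation(item)  # hypothetical; one GPU-bound job at a time

# Exactly one consumer thread, regardless of how many CPU threads exist.
threading.Thread(target=worker, name="invoker", daemon=True).start()
```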
b050c1bb8f use logger in ApiDependencies 2023-05-01 16:27:44 -04:00
276dfc591b feat(ui): disable w/h when img2img & not fit 2023-05-01 17:28:22 +10:00
b49d76ebee feat(nodes): fix image to image fit param
it was ignored previously.
2023-05-01 17:28:22 +10:00
a6be44789b fix(ui): progress image rerender, checkbox 2023-05-01 11:16:49 +10:00
a4313c26cb fix: Do not hide Preview button & color code it 2023-05-01 11:16:49 +10:00
d4b250d509 feat(ui): Add auto show progress previews setting 2023-05-01 11:16:49 +10:00
29743a9e02 fix(ui): next/prev image buttons 2023-05-01 11:16:49 +10:00
fecb77e344 feat(ui): dndkit --> rnd for draggable 2023-05-01 11:16:49 +10:00
779671753d feat(ui): tweak floating preview 2023-05-01 11:16:49 +10:00
d5e152b35e fix(ui): ignore events after canceling session 2023-05-01 11:16:49 +10:00
270657a62c feat(ui): gallery & progress image refactor 2023-05-01 11:16:49 +10:00
3601b9c860 feat(ui): revamp status indicator 2023-05-01 11:16:49 +10:00
c8fe12cd91 feat(ui): init image tweaks 2023-05-01 11:16:49 +10:00
deae5fbaec fix(ui): socket event types 2023-05-01 11:16:49 +10:00
5b558af2b3 fix(ui): fix metadata viewer scroll 2023-05-01 11:16:49 +10:00
4150d5306f chore(ui): regen api client 2023-05-01 11:16:49 +10:00
8c2e4700f9 feat(ui): persist gallery state 2023-05-01 11:16:49 +10:00
adaecada20 fix(ui): fix current image seed button 2023-05-01 11:16:49 +10:00
258895bcc9 feat(ui): being dismantling old sio stuff, fix recall seed/prompt/init
- still need to fix up metadataviewer's recall features
2023-05-01 11:16:49 +10:00
2eb7c25bae feat(ui): clean up and simplify socketio middleware 2023-05-01 11:16:49 +10:00
2e4e9434c1 fix(ui): fix initial image for uploads 2023-05-01 11:16:49 +10:00
0cad204e74 feat(ui): add error handling for linear graph generation 2023-05-01 11:16:49 +10:00
0bc2edc044 Merge branch 'main' into enhance/invokeai-logs 2023-04-29 11:00:18 -04:00
16488e7db8 fix tests 2023-04-29 10:59:50 -04:00
974841926d logger is a interchangeable service 2023-04-29 10:48:50 -04:00
8db20e0d95 rename log to logger throughout 2023-04-29 09:43:40 -04:00
d00d29d6b5 feat(ui): update settings modal 2023-04-29 18:28:19 +10:00
dc976cd665 feat(ui): add switch for logging 2023-04-29 18:28:19 +10:00
6d6b986a66 feat(ui): remove Console and redux logging state 2023-04-29 18:28:19 +10:00
bffdede0fa feat(ui): improve log messages 2023-04-29 18:28:19 +10:00
a4c258e9ec feat(ui): add roarr logger 2023-04-29 18:28:19 +10:00
8d837558ac fix(ui): fix spelling of systemPersistDenylist.ts 2023-04-29 18:28:19 +10:00
e673ed08ec fix(ui): restore missing chakra-cli package
(amending to try and get the workflow to run)
2023-04-29 12:21:11 +10:00
f0e07bff5a fix bad logging path in config script 2023-04-28 15:39:00 -04:00
3ec06a1fc3 Merge branch 'main' into enhance/invokeai-logs 2023-04-28 10:10:33 -04:00
6b79e2b407 Merge branch 'main' into enhance/invokeai-logs
- resolve conflicts
- remove unused code identified by pyflakes
2023-04-28 10:09:46 -04:00
0eed9dbc44 fix(ui): fix packaging import issue (#3294)
I accidentally merged a broken #3292 (merge conflicts incorrectly
resolved). Fixing it
2023-04-29 00:39:56 +12:00
53c7832fd1 fix(ui): fix packaging import issue 2023-04-28 22:37:51 +10:00
ca1cc0e2c2 feat(ui): rerender mitigation sweep 2023-04-28 22:00:18 +10:00
5d8728c7ef feat(ui): persist socket session ids and re-sub on connect 2023-04-28 22:00:18 +10:00
a8cec4c7e6 fix(ui): improve schema parsing error handling 2023-04-28 22:00:18 +10:00
2b5ccdc55f build(ui): treeshake lodash via lodash-es 2023-04-28 21:56:43 +10:00
d92d5b5258 build(ui): fix types exports 2023-04-28 21:56:43 +10:00
a591184d2a build(ui): remove unneeded types file 2023-04-28 21:56:43 +10:00
ee881e4c78 build(ui): add react/react-dom peer deps 2023-04-28 21:56:43 +10:00
61fbb24e36 feat(ui): set up for packaging 2023-04-28 21:56:43 +10:00
d582949488 feat(ui): rename main app components 2023-04-28 21:56:43 +10:00
de574eb4d9 chore(ui): upgrade all packages 2023-04-28 21:56:43 +10:00
bfd90968f1 chore(ui): tidy npm structure 2023-04-28 21:56:43 +10:00
956ad6bcf5 add redesigned model cache for diffusers & transformers 2023-04-28 00:41:52 -04:00
4a924c9b54 feat(nodes): hardcode resize latents downsampling 2023-04-28 09:52:09 +10:00
0453d60c64 fix(nodes): fix slatents and rlatents bugs 2023-04-28 09:52:09 +10:00
c4f4f8b1b8 fix(nodes): remove unused width and height from t2l 2023-04-28 09:52:09 +10:00
3e80eaa342 feat(nodes): add resize and scale latents nodes
- this resize/scale latents is what is needed for hires fix
- also remove unused `seed` from t2l
2023-04-28 09:52:09 +10:00
00a0cb3403 fix(ui): update exported types 2023-04-28 09:20:09 +10:00
ea93cad5ff fix(ui): update to match change in route params 2023-04-28 09:19:03 +10:00
4453a0d20d feat(ui): remove toasts for network bc we have status to tell us 2023-04-28 09:18:19 +10:00
1e837e3c9d fix(ui): add formatted neg prompt for linear nodes (#3282)
* fix(ui): add formatted neg prompt for linear nodes

* remove conditional

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-04-27 15:05:35 -04:00
0f95f7cea3 Fix inpaint node
Seems like this is the only change needed for the existing inpaint node to work.
2023-04-27 11:03:07 -07:00
0b0068ab86 Merge branch 'main' into feat/compel_node 2023-04-27 14:53:10 +03:00
31c7fa833e feat(ui): simplify image display 2023-04-27 14:10:44 +10:00
db16ca0079 fix(ui): Current Image Buttons position 2023-04-27 14:10:44 +10:00
a824f47bc6 fix(nodes): use absolute path when deleting 2023-04-27 14:10:44 +10:00
99392debe8 feat(ui): refactor DeleteImageModal
- refactor the component
- use translations
- add config for systems where deleted images are not sent to bin (only changes the messaging)
2023-04-27 14:10:44 +10:00
0cc739afc8 feat(nodes): use send2trash to delete images, fix thumbnail_path 2023-04-27 14:10:44 +10:00
0ab62b0343 feat(ui): "blacklist" -> "denylist" 2023-04-27 14:10:44 +10:00
75d25dd5cc feat(ui): restore image deletion functionality 2023-04-27 14:10:44 +10:00
2e54da13d8 chore(ui): regen api client 2023-04-27 14:10:44 +10:00
f34f416bf5 fix(ui): handle floats in NumberInputFieldComponent 2023-04-27 14:10:44 +10:00
021c63891d fix(ui): fix config types and merging 2023-04-27 14:10:44 +10:00
a968862e6b feat(ui): Move img2img badge info to top right 2023-04-27 14:10:44 +10:00
a08189d457 ui: Match styling of img2img to the rest of the accordions 2023-04-27 14:10:44 +10:00
0a936696c3 feat(ui): add config slice, configuration default values 2023-04-27 14:10:44 +10:00
55e33eaf4c docs: add note on README about migration (#3277) 2023-04-27 13:17:43 +12:00
3da5fb223f docs: add note on README about migration 2023-04-27 11:05:32 +10:00
a3c5a664e5 fix(ui): update UI to handle uploads with alternate URLs (#3274) 2023-04-26 07:14:08 -07:00
b638fb2f30 fix(ui): use name in response instead of parsing out of URL to handle alternative URLs 2023-04-26 09:48:16 -04:00
c1b10b2222 feat(ui): open in new tab @ hoverable image 2023-04-26 12:40:10 +10:00
bee29714d9 fix(ui): fix templates not refreshing correctly 2023-04-26 12:40:10 +10:00
d40d5276dd feat(ui): wip img2img ui 2023-04-26 12:40:10 +10:00
568f0aad71 feat(ui): wip img2img ui 2023-04-26 12:40:10 +10:00
38474fa9d4 feat(ui): add lil spinner to loading 2023-04-26 12:17:01 +10:00
f7f974a28b fix(ui): fix inverted conditional 2023-04-26 12:17:01 +10:00
3c150b384c fix(ui): fix export of ApplicationFeature type 2023-04-26 12:17:01 +10:00
65816049ba feat(ui): add secret loading screen override button 2023-04-26 12:17:01 +10:00
c1c881ded5 feat(ui): support disabledFeatures, add nicer loading
- `disabledParametersPanels` -> `disabledFeatures`
- handle disabling `faceRestore`, `upscaling`, `lightbox`, `modelManager` and OSS header links/buttons
- wait until models are loaded to hide loading screen
- also wait until schema is parsed if `nodes` is an enabled tab
2023-04-26 12:17:01 +10:00
82c4dd8b86 fix(api): return same URL on location header 2023-04-26 06:29:30 +10:00
711d09a107 feat(nodes): add get_uri method to image storage
- gets the external URI of an image
2023-04-26 06:29:30 +10:00
74013b6611 fix(nodes): address feedback 2023-04-26 06:29:30 +10:00
790f399986 feat(nodes): tidy images routes 2023-04-26 06:29:30 +10:00
73cdd36594 feat(nodes): raise HTTPExceptions instead of returning Reponses 2023-04-26 06:29:30 +10:00
50ac3eb28d feat(nodes): add delete_image & delete_images routes 2023-04-26 06:29:30 +10:00
d753cff91a Undo debug message 2023-04-25 13:18:50 +03:00
89f1909e4b Update default graph 2023-04-25 13:11:50 +03:00
37916a22ad Use textual inversion manager from pipeline, remove extra conditioning info for uc 2023-04-25 12:53:13 +03:00
76e5d0595d fix(ui): fix no progress images when gallery is empty (#3268)
When gallery was empty (and there is therefore no selected image), no
progress images were displayed.

- fix by correcting the logic in CurrentImageDisplay
- also fix app crash introduced by fixing the first bug
2023-04-25 17:48:24 +12:00
f03cb8f134 fix(ui): fix no progress images when gallery is empty 2023-04-25 15:00:54 +10:00
c2a0e8afc3 [Bugfix] prevent cli crash (#3132)
Prevent legacy CLI crash caused by removal of convert option
    
- Compensatory change to the CLI that prevents it from crashing when it
tries to import a model.
- Bug introduced when the "convert" option was removed from the model
manager.
2023-04-25 03:55:33 +01:00
31a904b903 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:28:45 +01:00
c174cab3ee [Bugfix] fixes and code cleanup to update and installation routines (#3101)
- Fix the update script to work again and fixes the ambiguity between
when a user wants to update to a tag vs updating to a branch, by making
these two operations explicitly separate.
- Remove dangling functions and arguments related to legacy checkpoint
conversion. These are no longer needed now that all legacy models are
either converted at import time, or on-the-fly in RAM.
2023-04-25 03:28:23 +01:00
fe12938c23 update to diffusers 0.15 and fix code for name changes (#3201)
- This is a port of #3184 to the main branch
2023-04-25 03:23:24 +01:00
4fa5c963a1 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:10:51 +01:00
48ce256ba2 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-25 02:49:59 +01:00
8cb2fa8600 Restore log_tokenization check 2023-04-25 04:29:17 +03:00
8f460b92f1 Make latent generation nodes use conditions instead of prompt 2023-04-25 04:21:03 +03:00
d99a08a441 Add compel node and conditioning field type 2023-04-25 03:48:44 +03:00
7555b1f876 Event service will now sleep for 100ms between polls instead of 1ms, reducing CPU usage significantly (#3256)
I noticed that the current invokeai-new.py was using almost all of a CPU
core. After a bit of profiling I noticed that there were many thousands
of calls to epoll() which suggested to me that something wasn't sleeping
properly in asyncio's loop.

A bit of further investigation with Python profiling revealed that the
__dispatch_from_queue() method in FastAPIEventService
(app/api/events.py:33) was also being called thousands of times.

I believe the asyncio.sleep(0.001) in that method is too aggressive (it
means that the queue will be polled every 1ms) and that 0.1 (100ms) is
still entirely reasonable.
2023-04-24 19:35:27 +12:00
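A minimal sketch of the dispatch loop with the relaxed polling interval (the method and queue handling are assumptions based on the description above):

```python
import asyncio

async def dispatch_from_queue(event_queue: asyncio.Queue) -> None:
    while True:
        # Drain any pending events, then sleep 100ms instead of 1ms so the
        # loop doesn't spin through thousands of epoll() wakeups per second.
        while not event_queue.empty():
            event = event_queue.get_nowait()
            await handle_event(event)  # handle_event is hypothetical
        await asyncio.sleep(0.1)
```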
a537231f19 Merge branch 'main' into reduce-event-polling 2023-04-24 19:14:10 +12:00
8044d1b840 translationBot(ui): update translation (Turkish)
Currently translated at 11.3% (58 of 512 strings)

translationBot(ui): added translation (Turkish)

Co-authored-by: ismail ihsan bülbül <e-ben@msn.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
2b58ce4ae4 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 75.0% (380 of 506 strings)

Co-authored-by: Patrick Tien <ivetien@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
ef605cd76c translationBot(ui): update translation (German)
Currently translated at 81.8% (414 of 506 strings)

Co-authored-by: Fabian Bahl <fabian98@bahl-netz.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
a84b5b168f translationBot(ui): update translation (Swedish)
Currently translated at 34.7% (176 of 506 strings)

translationBot(ui): added translation (Swedish)

Co-authored-by: figgefigge <qvintuz@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/sv/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
16f6ee04d0 translationBot(ui): update translation (German)
Currently translated at 81.8% (414 of 506 strings)

translationBot(ui): update translation (German)

Currently translated at 80.8% (409 of 506 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
44be057aa3 translationBot(ui): update translation (Ukrainian)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (English)

Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Ukrainian)

Currently translated at 100.0% (506 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (506 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/uk/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
422f6967b2 translationBot(ui): update translation (Ukrainian)
Currently translated at 75.8% (384 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 85.5% (433 of 506 strings)

Co-authored-by: mitien <mitien@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/uk/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
4528cc8ba6 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (511 of 511 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
87e91ebc1d translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (511 of 511 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
fd00d111ea translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
b8dc9000bd translationBot(ui): update translation (German)
Currently translated at 73.4% (370 of 504 strings)

Co-authored-by: Jaulustus <jaulustus@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
58c1066765 translationBot(ui): update translation (Finnish)
Currently translated at 18.2% (92 of 504 strings)

translationBot(ui): added translation (Finnish)

Co-authored-by: Juuso V <juuso.vantola@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fi/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
37096a697b translationBot(ui): added translation (Mongolian)
Co-authored-by: Bouncyknighter <gebifirm@gmail.com>
2023-04-24 16:05:16 +10:00
17d0920186 translationBot(ui): update translation (Japanese)
Currently translated at 73.0% (368 of 504 strings)

Co-authored-by: 唐澤 克幸 <4ranci0ne@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
1e05538364 translationBot(ui): added translation (Vietnamese)
Co-authored-by: techybrain-dev <techybrain.dev@gmail.com>
2023-04-24 16:05:16 +10:00
cf28617cd6 Event service will now sleep for 100ms between polls instead of 1ms, reducing CPU usage significantly 2023-04-23 21:27:02 +01:00
d0d8640711 feat(ui): add reload schema button (#3252) 2023-04-23 19:51:37 +12:00
e6158d1874 feat(ui): add reload schema button 2023-04-23 17:49:02 +10:00
2e9d1ea8a3 feat(ui): add support for shouldFetchImages if UI needs to re-fetch an image URL (#3250)
* if `shouldFetchImages` is passed in, UI will make an additional
request to get valid image URL when an invocation is complete
* this is necessary in order to have optional authorization for images
2023-04-23 16:00:13 +10:00
59b0153236 add to types 2023-04-23 15:59:55 +10:00
9f8ff912c4 feat(ui): add support for shouldFetchImages if UI needs to re-fetch an image URL 2023-04-23 15:59:55 +10:00
f0e4a2124a [Nodes UI] More Work (#3248)
- Style the Minimap
- Made the Node UI Legend Responsive
- Set Min Width for nodes on Spawn so resize doesn't snap.
- Initial Implementation of Node Search
- Added FuseJS to handle the node filtering
2023-04-23 17:51:40 +12:00
11ab5c7d56 fix(ui): Fix up arrow not working on unfiltered list 2023-04-23 15:18:35 +12:00
3f334d9e5e feat(ui): Add fusejs to NodeSearch 2023-04-23 15:14:44 +12:00
ff891b1ff2 feat(ui): Basic Node Search Component
Very buggy
2023-04-23 13:35:02 +12:00
2914ee10b0 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-22 20:21:59 +01:00
e29c2fb782 Merge branch 'more-nodes-work' of https://github.com/blessedcoolant/InvokeAI into more-nodes-work 2023-04-23 02:53:25 +12:00
b763f1809e feat(ui): Stylize Node Minimap 2023-04-23 02:52:32 +12:00
d26b44104a fix(ui): minor tidy 2023-04-23 00:45:03 +10:00
b73fd2a6d2 fix(ui): Set Min Width for Nodes 2023-04-23 00:55:43 +12:00
f258aba6d1 chore(ui): Make the Node UI Legend Responsive 2023-04-23 00:55:22 +12:00
2e70848aa0 Responsive Mobile Layout (#3207)
The first draft for a Responsive Mobile Layout for InvokeAI. Some basic
documentation to help contributors. // Notes from: @blessedcoolant

---

The whole rework needs to be done using the `mobile first` concept, where
the base design caters to mobile and responsive changes are added as we
grow to larger screens.

**Added**

- Basic breakpoints have been added to the `theme.ts` file that indicate
at which values Chakra makes the responsive changes.
- A basic `useResolution` hook has been added that either returns
`mobile`, `tablet` or `desktop` based on the breakpoint. We can
customize this hook further to do more complex checks for us if need be.

**Syntax**

- Any Chakra component is directly capable of taking different values
for the different breakpoints set in our `theme.ts` file. These can be
passed in a few ways with the most descriptive being an object. For
example:

`flexDir={{ base: 'column', xl: 'row' }}` - This sets the flex direction
to column from `0em` and up, but switches it to row automatically once we
hit `xl` and above resolutions, which in our case is `80em` (1280px). This
same format is applicable to any element in Chakra.

`flexDir={['column', null, null, 'row', null]}` - The values can also be
passed as an array to the property, with each entry corresponding to one
of our breakpoints; setting `null` just bypasses that breakpoint. This is
a good shorthand, but I think we should stick to the object syntax above
for readability.

**Note**: I've modified a few elements here and there to give an idea on
how the responsive syntax works for reference.

---

**Problems to be solved** @SammCheese 

- Some issues you might run into are with the Resizable components.
We've decided not to use resizable components at smaller resolutions,
where they don't make sense, so you'll need to render these conditionally.
- Some components that need custom layouts for different screens might
be better ported over to `Grid`, using `gridTemplateAreas` to swap out
the layout. I've demonstrated an example of this in a commit I've made.
I'll let you be the judge of where we might need this.
- The header will probably need to be converted to a burger menu of some
sort, with model switching handled correctly UX-wise. We'll discuss this
on discord.

---

Anyone willing to contribute to this PR can feel free to join the
discussion on discord.

https://discord.com/channels/1020123559063990373/1020839344170348605/threads/1097323866780606615
2023-04-22 22:34:30 +10:00
e973aeef0d Merge branch 'main' into responsive-ui 2023-04-22 14:31:19 +02:00
50e1ac731d fix(ui): make input/outputs renderfn callback 2023-04-22 22:25:17 +10:00
43addc1548 fix(ui): memoize everything nodes 2023-04-22 22:25:17 +10:00
4901911c1a fix(ui): improve nodes performance 2023-04-22 22:25:17 +10:00
44a653925a feat(ui): node styling, controls
- custom node controls
- fix some types
- fix badge colors via colorScheme
- style nodes
2023-04-22 22:25:17 +10:00
94a07a8da7 feat(ui): Make Nodes always spawn in center of work area 2023-04-22 22:25:17 +10:00
ad41afe65e feat(ui): Make Nodes Resizable 2023-04-22 22:25:17 +10:00
77fa7519c4 chore(ui): Cleanup Invocation Component 2023-04-22 22:25:17 +10:00
6e29148d4d delete ImageToImageContent.tsx 2023-04-22 08:43:14 +02:00
3044f3bfe5 fix(ui): adapt NodeEditor for smaller screens 2023-04-22 08:33:05 +02:00
67a8627cf6 add dev:host script 2023-04-22 08:30:09 +02:00
3fb433cb91 Merge branch 'main' of https://github.com/invoke-ai/InvokeAI into responsive-ui 2023-04-22 08:27:00 +02:00
5f498e10bd Partial migration of UI to nodes API (#3195)
* feat(ui): add axios client generator and simple example

* fix(ui): update client & nodes test code w/ new Edge type

* chore(ui): organize generated files

* chore(ui): update .eslintignore, .prettierignore

* chore(ui): update openapi.json

* feat(backend): fixes for nodes/generator

* feat(ui): generate object args for api client

* feat(ui): more nodes api prototyping

* feat(ui): nodes cancel

* chore(ui): regenerate api client

* fix(ui): disable OG web server socket connection

* fix(ui): fix scrollbar styles typing and prop

just noticed the typo, and made the types stronger.

* feat(ui): add socketio types

* feat(ui): wip nodes

- extract api client method arg types instead of manually declaring them
- update example to display images
- general tidy up

* start building out node translations from frontend state and add notes about missing features

* use reference to sampler_name

* use reference to sampler_name

* add optional apiUrl prop

* feat(ui): start hooking up dynamic txt2img node generation, create middleware for session invocation

* feat(ui): write separate nodes socket layer, txt2img generating and rendering w single node

* feat(ui): img2img implementation

* feat(ui): get intermediate images working but types are stubbed out

* chore(ui): add support for package mode

* feat(ui): add nodes mode script

* feat(ui): handle random seeds

* fix(ui): fix middleware types

* feat(ui): add rtk action type guard

* feat(ui): disable NodeAPITest

This was polluting the network/socket logs.

* feat(ui): fix parameters panel border color

This commit should be elsewhere but I don't want to break my flow

* feat(ui): make thunk types more consistent

* feat(ui): add type guards for outputs

* feat(ui): load images on socket connect

Rudimentary

* chore(ui): bump redux-toolkit

* docs(ui): update readme

* chore(ui): regenerate api client

* chore(ui): add typescript as dev dependency

I am having trouble with TS versions after vscode updated and now uses TS 5. `madge` has installed 3.9.10 and for whatever reason my vscode wants to use that. Manually specifying 4.9.5 and then setting vscode to use that as the workspace TS fixes the issue.

* feat(ui): begin migrating gallery to nodes

Along the way, migrate to use RTK `createEntityAdapter` for gallery images, and separate `results` and `uploads` into separate slices. Much cleaner this way.

* feat(ui): clean up & comment results slice

* fix(ui): separate thunk for initial gallery load so it properly gets index 0

* feat(ui): POST upload working

* fix(ui): restore removed type

* feat(ui): patch api generation for headers access

* chore(ui): regenerate api

* feat(ui): wip gallery migration

* feat(ui): wip gallery migration

* chore(ui): regenerate api

* feat(ui): wip refactor socket events

* feat(ui): disable panels based on app props

* feat(ui): invert logic to be disabled

* disable panels when app mounts

* feat(ui): add support to disableTabs

* docs(ui): organise and update docs

* lang(ui): add toast strings

* feat(ui): wip events, comments, and general refactoring

* feat(ui): add optional token for auth

* feat(ui): export StatusIndicator and ModelSelect for header use

* feat(ui) working on making socket URL dynamic

* feat(ui): dynamic middleware loading

* feat(ui): prep for socket jwt

* feat(ui): migrate cancelation

also updated action names to be event-like instead of declaration-like

sorry, i was scattered and this commit has a lot of unrelated stuff in it.

* fix(ui): fix img2img type

* chore(ui): regenerate api client

* feat(ui): improve InvocationCompleteEvent types

* feat(ui): increase StatusIndicator font size

* fix(ui): fix middleware order for multi-node graphs

* feat(ui): add exampleGraphs object w/ iterations example

* feat(ui): generate iterations graph

* feat(ui): update ModelSelect for nodes API

* feat(ui): add hi-res functionality for txt2img generations

* feat(ui): "subscribe" to particular nodes

feels like a dirty hack but oh well it works

* feat(ui): first steps to node editor ui

* fix(ui): disable event subscription

it is not fully baked just yet

* feat(ui): wip node editor

* feat(ui): remove extraneous field types

* feat(ui): nodes before deleting stuff

* feat(ui): cleanup nodes ui stuff

* feat(ui): hook up nodes to redux

* fix(ui): fix handle

* fix(ui): add basic node edges & connection validation

* feat(ui): add connection validation styling

* feat(ui): increase edge width

* feat(ui): it blends

* feat(ui): wip model handling and graph topology validation

* feat(ui): validation connections w/ graphlib

* docs(ui): update nodes doc

* feat(ui): wip node editor

* chore(ui): rebuild api, update types

* add redux-dynamic-middlewares as a dependency

* feat(ui): add url host transformation

* feat(ui): handle already-connected fields

* feat(ui): rewrite SqliteItemStore in sqlalchemy

* fix(ui): fix sqlalchemy dynamic model instantiation

* feat(ui, nodes): metadata wip

* feat(ui, nodes): models

* feat(ui, nodes): more metadata wip

* feat(ui): wip range/iterate

* fix(nodes): fix sqlite typing

* feat(ui): export new type for invoke component

* tests(nodes): fix test instantiation of ImageField

* feat(nodes): fix LoadImageInvocation

* feat(nodes): add `title` ui hint

* feat(nodes): make ImageField attrs optional

* feat(ui): wip nodes etc

* feat(nodes): roll back sqlalchemy

* fix(nodes): partially address feedback

* fix(backend): roll back changes to pngwriter

* feat(nodes): wip address metadata feedback

* feat(nodes): add seeded rng to RandomRange

* feat(nodes): address feedback

* feat(nodes): move GET images error handling to DiskImageStorage

* feat(nodes): move GET images error handling to DiskImageStorage

* fix(nodes): fix image output schema customization

* feat(ui): img2img/txt2img -> linear

- remove txt2img and img2img tabs
- add linear tab
- add initial image selection to linear parameters accordion

* feat(ui): tidy graph builders

* feat(ui): tidy misc

* feat(ui): improve invocation union types

* feat(ui): wip metadata viewer recall

* feat(ui): move fonts to normal deps

* feat(nodes): fix broken upload

* feat(nodes): add metadata module + tests, thumbnails

- `MetadataModule` is stateless and needed in places where the `InvocationContext` is not available, so have not made it a `service`
- Handles loading/parsing/building metadata, and creating png info objects
- added tests for MetadataModule
- Lifted thumbnail stuff to util

* fix(nodes): revert change to RandomRangeInvocation

* feat(nodes): address feedback

- make metadata a service
- rip out pydantic validation, implement metadata parsing as simple functions
- update tests
- address other minor feedback items

* fix(nodes): fix other tests

* fix(nodes): add metadata service to cli

* fix(nodes): fix latents/image field parsing

* feat(nodes): customise LatentsField schema

* feat(nodes): move metadata parsing to frontend

* fix(nodes): fix metadata test

---------

Co-authored-by: maryhipp <maryhipp@gmail.com>
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-04-22 13:10:20 +10:00
fdad62e88b chore: add ".version" and ".last_model" to gitignore (#3208)
Mistakenly closed the previous PR.
2023-04-20 18:26:27 +01:00
955c81acef Merge branch 'main' into patch-1 2023-04-20 18:26:06 +01:00
e1058f3416 update CODEOWNERS for changed team composition (#3234)
Remove @mauwii and @keturn until they are able to reengage with the
development effort. @GreggHelt2 is designated co-codeowner for the
backend.
2023-04-20 17:19:10 +01:00
edf16a253d Merge branch 'main' into patch-1 2023-04-20 14:16:10 +02:00
46f5ef4100 Merge branch 'main' into dev/codeowner-fix-main 2023-04-19 22:40:56 +01:00
b843255236 update CODEOWNERS for changed team composition 2023-04-19 17:37:48 -04:00
3a968e5072 Update NSFW.md
The outdated doc said to change the '.invokeai' file, but it's now named 'invokeai.init', as far as I know.
2023-04-18 21:18:32 -04:00
b164330e3c replaced remaining print statements with log.*() 2023-04-18 20:49:00 -04:00
69433c9f68 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-18 19:21:53 -04:00
bd8ffd36bf bump to diffusers 0.15.1, remove dangling module 2023-04-18 19:20:38 -04:00
fd80e84ea6 Merge branch 'main' into patch-1 2023-04-18 19:14:28 -04:00
4824237a98 Added CPU instruction for README (#3225)
Since the change itself is quite straightforward, I'll just describe
the context. I tried using the automatic installer on my laptop, and it
kept erroring out on line 140-something of installer.py: "ERROR: Can not
perform a '--user' install. User site-packages are not visible in this
virtualenv."
I got tired of fighting with pip, so I moved on to the command-line
install. It worked immediately, but at the time the readme lacked
instructions for CPU, so rather than finding a helpful hyperlink there, it
took a few minutes to grab the link from installer.py - thus this PR.
2023-04-18 19:07:37 -04:00
2c9a05eb59 Added CPU instruction for README 2023-04-18 18:46:55 +03:00
ecb5bdaf7e [bug] #3218 HuggingFace API off when --no-internet set (#3219)
#3218 

Huggingface API will not be queried if --no-internet flag is set
2023-04-18 14:34:34 +12:00
2feeb1f44c fix(ui): more responsive layout work 2023-04-18 04:29:31 +12:00
554f353773 fix(ui): Fix Width and Height showing 0 as input 2023-04-18 04:28:58 +12:00
f6cdff2c5b [bug] #3218 HuggingFace API off when --no-internet set
https://github.com/invoke-ai/InvokeAI/issues/3218

Huggingface API will not be queried if --no-internet flag is set
2023-04-17 16:53:31 +02:00
aee27e94c9 fix(ui): Fix site header on really small screens 2023-04-18 01:25:53 +12:00
695893e1ac fix(ui): Improve parameters panel and preview display 2023-04-18 01:09:48 +12:00
b800a8eb2e feat(ui): responsive wip
- Fixed a bunch of padding and margin issues across the app
- Fixed the Invoke logo compressing
- Disabled the visibility of the options panel pin button in tablet and mobile views
- Refined the header menu options in mobile and tablet views
- Refined other site header elements in mobile and tablet views
- Aligned Tab Icons to center in mobile and tablet views
2023-04-18 00:50:09 +12:00
9749ef34b5 layout improvements 2023-04-17 13:30:33 +02:00
9a43362127 Revert "Merge branch 'responsive-ui' of https://github.com/SammCheese/InvokeAI into pr/3207"
This reverts commit 866024ea6c, reversing
changes made to 601cc1f92c.
2023-04-17 13:51:08 +12:00
866024ea6c Merge branch 'responsive-ui' of https://github.com/SammCheese/InvokeAI into pr/3207 2023-04-17 13:50:44 +12:00
601cc1f92c help(ui): Basic responsive updates to demonstrate
Made some basic responsive changes to demonstrate how to go about making changes.

There are a bunch of problems not addressed yet. Like dealing with the resizeable component and etc.
2023-04-17 13:50:13 +12:00
d6a9a4464d feat(ui): Add Basic useResolution Component
This component just classifies `base` and `sm` as mobile, `md` and `lg` as tablet and `xl` and `2xl` as desktop.

This is a basic hook for quicker work with resolutions. Can be modified and adjusted to our needs. All resolution related work can go into this hook.
2023-04-17 13:48:42 +12:00
dac271725a feat(ui): Add Basic Breakpoints 2023-04-17 13:26:10 +12:00
e1fbecfcf7 fix(ui): Syntax issue with the HidePreview icon 2023-04-17 12:42:06 +12:00
63d10027a4 nodes: invocation queue item - make more pydantic 2023-04-16 09:39:33 -04:00
ef0773b8a3 nodes: set default for InvocationQueueItem.invoke_all 2023-04-16 09:39:33 -04:00
3daaddf15b nodes: remove duplicate LatentsToLatentsInvocation 2023-04-16 09:39:33 -04:00
570c3fe690 nodes: ensure Graph and GraphExecutionState ids are cast to str on instantiation 2023-04-16 09:39:33 -04:00
cbd1a7263a nodes: fix typing of GraphExecutionState.id 2023-04-16 09:39:33 -04:00
7fc5fbd4ce nodes: convert InvocationQueueItem to Pydantic class 2023-04-16 09:39:33 -04:00
6f6de402ad make InvocationQueueItem serializable 2023-04-16 09:39:33 -04:00
2ec4f5af10 remove unused import to pass lint & revert package.json 2023-04-15 21:53:33 +02:00
281662a6e1 chore: add ".version" and ".last_model" to gitignore
Mistakenly closed the previous PR
2023-04-15 21:46:47 +02:00
2edd032ec7 draft mobile layout 2023-04-15 21:34:03 +02:00
50eb02f68b chore(ui): build 2023-04-15 20:45:17 +10:00
d73f3adc43 moving shouldHidePreview from gallery to ui slice. 2023-04-15 20:45:17 +10:00
116107f464 chore(ui): build 2023-04-15 20:45:17 +10:00
da44bb1707 rename setter 2023-04-15 20:45:17 +10:00
f43aed677e chore(ui): build 2023-04-15 20:45:17 +10:00
0d051aaae2 rename hidden variable to something more descriptive 2023-04-15 20:45:17 +10:00
e4e48ff995 I forgot to push the locale 2023-04-15 20:45:17 +10:00
442a6bffa4 feat: add "Hide Preview" Button 2023-04-15 20:45:17 +10:00
aab262d991 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-14 20:12:38 -04:00
47b9910b48 update to diffusers 0.15 and fix code for name changes
- This is a port of #3184 to the main branch
2023-04-14 15:35:03 -04:00
0b0e6fe448 convert remainder of print() to log.info() 2023-04-14 15:15:14 -04:00
23d65e7162 [nodes] Add subgraph library, subgraph usage in CLI, and fix subgraph execution (#3180)
* Add latent to latent (img2img equivalent)
Fix a CLI bug with multiple links per node

* Using "latents" instead of "latent"

* [nodes] In-progress implementation of graph library

* Add linking to CLI for graph nodes (still broken)

* Fix subgraph execution, fix subgraph linking in CLI

* Fix LatentsToLatents
2023-04-14 06:41:06 +00:00
024fd54d0b Fixed a Typo. (#3190) 2023-04-14 14:33:31 +12:00
c44c19e911 Fixed a Typo. 2023-04-13 17:42:34 +02:00
c132dbdefa change "ialog" to "log" 2023-04-11 18:48:20 -04:00
f3081e7013 add module-level getLogger() method 2023-04-11 12:23:13 -04:00
f904f14f9e add missing module-level methods 2023-04-11 11:10:43 -04:00
8917a6d99b add logging support
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.

Examples:

   ### A critical error     (logging.CRITICAL)
   *** A non-fatal error    (logging.ERROR)
   ** A warning             (logging.WARNING)
   >> Informational message (logging.INFO)
      | Debugging message   (logging.DEBUG)

This style logs everything through a single logging object and is
identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-thrus
to logging:

  import logging
  import invokeai.backend.util.logging as ialog

  ialog.debug('this is a debugging message')
  ialog.info('this is an informational message')
  ialog.log(logging.CRITICAL, 'get out of dodge')
  ialog.disable(level=logging.INFO)
  ialog.basicConfig(filename='/var/log/invokeai.log')

Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
module's use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add the additional
message decorations.

For more control, the logging module's object-oriented logging style
is also supported. The API is identical to the vanilla logging
usage. In fact, the only thing that has changed is that the
getLogger() method adds a custom formatter to the log messages.

 import logging
 from invokeai.backend.util.logging import InvokeAILogger

 logger = InvokeAILogger.getLogger(__name__)
 fh = logging.FileHandler('/var/invokeai.log')
 logger.addHandler(fh)
 logger.critical('this will be logged to both the console and the log file')
2023-04-11 10:46:38 -04:00
5a4765046e add logging support
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.

Examples:

   ### A critical error     (logging.CRITICAL)
   *** A non-fatal error    (logging.ERROR)
   ** A warning             (logging.WARNING)
   >> Informational message (logging.INFO)
      | Debugging message   (logging.DEBUG)
2023-04-11 09:33:28 -04:00
d923d1d66b fix(nodes): fix naming of CvInvocationConfig 2023-04-11 12:13:53 +10:00
1f2c1e14db fix(nodes): move InvocationConfig to baseinvocation.py 2023-04-11 12:13:53 +10:00
07e3a0ec15 feat(nodes): add invocation schema customisation, add model selection
- add invocation schema customisation

done via fastapi's `Config` class and `schema_extra`. when using `Config`, inherit from `InvocationConfig` to get type hints.

where it makes sense - like for all math invocations - define a `MathInvocationConfig` class and have all invocations inherit from it.

this customisation can provide any arbitrary additional data to the UI. currently it provides tags and field type hints.

this is necessary for `model` type fields, which are actually string fields. without something like this, we can't reliably differentiate  `model` fields from normal `string` fields.

can also be used for future field types.

all invocations now have tags, and all `model` fields have ui type hints.

- fix model handling for invocations

added a helper to fall back to the default model if an invalid model name is chosen. model names in graphs now work.

- fix latents progress callback

noticed this wasn't correct while working on everything else.
2023-04-11 12:13:53 +10:00
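A tiny sketch of the `schema_extra` mechanism this relies on (pydantic v1 style; the injected key names are assumed, not the actual ones):

```python
from pydantic import BaseModel

class ImageField(BaseModel):
    image_name: str

    class Config:
        # pydantic v1: merge extra data into the generated JSON schema,
        # e.g. a UI type hint so the frontend can tell this apart from
        # a plain string field (key names are illustrative)
        schema_extra = {"ui": {"type_hint": "image"}}
```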
427db7c7e2 feat(nodes): fix typo in PasteImageInvocation 2023-04-10 21:33:08 +10:00
dad3a7f263 fix(nodes): sampler_name --> scheduler
the name of this was changed at some point. nodes still used the old name, so scheduler selection did nothing. simple fix.
2023-04-10 19:54:09 +10:00
5bd0bb637f fix(nodes): add missing type to ImageField 2023-04-10 19:33:15 +10:00
f05095770c Increase chunk size when computing diffusers SHAs (#3159)
When running this app for the first time in a WSL2 environment, which is
notoriously slow when it comes to IO, computing the SHAs of the models
takes an eternity.

Computing shas for sd2.1
```
| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 510.87s)
```

I increased the chunk size to 16MB to reduce the number of round trips
for loading the data. New results:

```
| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 59.89s)
```

Higher values don't seem to make an impact.
2023-04-09 22:29:43 -04:00
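A minimal sketch of the chunked hashing described above; `hash_model_files` and the directory traversal are assumptions, only the 16MB chunk size comes from the PR:

```python
import hashlib
from pathlib import Path

CHUNK_SIZE = 16 * 1024 * 1024  # 16MB: fewer read round trips on slow IO such as WSL2

def hash_model_files(model_dir: str) -> str:
    sha = hashlib.sha256()
    for path in sorted(Path(model_dir).rglob("*")):
        if path.is_file():
            with path.open("rb") as f:
                while chunk := f.read(CHUNK_SIZE):
                    sha.update(chunk)
    return sha.hexdigest()
```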
de189f2db6 Increase chunk size when computing SHAs 2023-04-09 21:53:59 +02:00
cee159dfa3 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-09 12:08:09 -04:00
4463124bdd feat(nodes): mark ImageField properties required, add docs 2023-04-09 22:53:17 +10:00
34402cc46a feat(nodes): add list_images endpoint
- add `list_images` endpoint at `GET api/v1/images`
- extend `ImageStorageBase` with `list()` method, implemented it for `DiskImageStorage`
- add `ImageReponse` class for image responses, which includes urls, metadata
- add `ImageMetadata` class (basically a stub at the moment)
- uploaded images now named `"{uuid}_{timestamp}.png"`
- add `models` modules. besides separating concerns more clearly, this helps to mitigate circular dependencies
- improve thumbnail handling
2023-04-09 13:48:44 +10:00
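A rough sketch of what such a paginated endpoint could look like; the route path matches the description, while the in-memory storage and the response shape are stand-ins:

```python
from fastapi import FastAPI, Query

app = FastAPI()

# In-memory stand-in for DiskImageStorage; the real store reads the disk.
IMAGES = [{"image_name": f"{i:04d}_19700101000000.png"} for i in range(100)]

@app.get("/api/v1/images")
async def list_images(page: int = Query(0, ge=0), per_page: int = Query(10, ge=1)):
    start = page * per_page
    return {
        "items": IMAGES[start : start + per_page],
        "page": page,
        "pages": (len(IMAGES) + per_page - 1) // per_page,
    }
```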
54d9833db0 Else. 2023-04-08 12:08:51 -04:00
5fe8cb56fc Correct response note 2023-04-08 12:08:51 -04:00
7919d81fb1 Update to address feedback 2023-04-08 12:08:51 -04:00
9d80b28a4f Begin Convert Work 2023-04-08 12:08:51 -04:00
1fcd91bcc5 Add/Update and Delete Models 2023-04-08 12:08:51 -04:00
e456e2e63a fix typo (#3147)
fix typo.

reference:
21f79e5919/invokeai/configs/INITIAL_MODELS.yaml (L21-L25)
2023-04-08 20:25:31 +12:00
ee41b99049 Update 050_INSTALLING_MODELS.md
fix typo
2023-04-08 17:02:47 +09:00
111d674e71 fix(nodes): use correct torch device in NoiseInvocation 2023-04-08 12:32:03 +10:00
8f048cfbd9 Add python-multipart, which is needed by nodes (#3141)
I'm not quite sure why this isn't being installed by fastapi's
dependencies, but running without it installed yields:

```
root@gnubert:/srv/ssdtank/docker/invokeai/git/InvokeAI# docker run --gpus all -p 9989:9090 -v /srv/ssdtank/docker/invokeai/data:/data -v /srv/ssdtank/docker/invokeai/git/InvokeAI/static/dream_web/:/static/dream_web --rm -ti -u root --entrypoint /bin/bash ghcr.io/cmsj/invokeai-nodes@sha256:426ebc414936cb67e02f5f64d963196500a77b2f485df8122a2d462797293938
root@7a77b56a5771:/usr/src# /invoke-new.py --web
Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

pip install python-multipart

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /invoke-new.py:22 in <module>                                                                    │
│                                                                                                  │
│   19                                                                                             │
│   20                                                                                             │
│   21 if __name__ == '__main__':                                                                  │
│ ❱ 22 │   main()                                                                                  │
│   23                                                                                             │
│                                                                                                  │
│ /invoke-new.py:13 in main                                                                        │
│                                                                                                  │
│   10 │   os.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))                │
│   11 │                                                                                           │
│   12 │   if '--web' in sys.argv:                                                                 │
│ ❱ 13 │   │   from invokeai.app.api_app import invoke_api                                         │
│   14 │   │   invoke_api()                                                                        │
│   15 │   else:                                                                                   │
│   16 │   │   # TODO: Parse some top-level args here.                                             │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/invokeai/app/api_app.py:17 in <module>            │
│                                                                                                  │
│    14                                                                                            │
│    15 from ..backend import Args                                                                 │
│    16 from .api.dependencies import ApiDependencies                                              │
│ ❱  17 from .api.routers import images, sessions, models                                          │
│    18 from .api.sockets import SocketIO                                                          │
│    19 from .invocations import *                                                                 │
│    20 from .invocations.baseinvocation import BaseInvocation                                     │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/invokeai/app/api/routers/images.py:45 in <module> │
│                                                                                                  │
│   42 │   │   404: {"description": "Session not found"},                                          │
│   43 │   },                                                                                      │
│   44 )                                                                                           │
│ ❱ 45 async def upload_image(file: UploadFile, request: Request):                                 │
│   46 │   if not file.content_type.startswith("image"):                                           │
│   47 │   │   return Response(status_code=415)                                                    │
│   48                                                                                             │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/routing.py:630 in decorator               │
│                                                                                                  │
│    627 │   │   ),                                                                                │
│    628 │   ) -> Callable[[DecoratedCallable], DecoratedCallable]:                                │
│    629 │   │   def decorator(func: DecoratedCallable) -> DecoratedCallable:                      │
│ ❱  630 │   │   │   self.add_api_route(                                                           │
│    631 │   │   │   │   path,                                                                     │
│    632 │   │   │   │   func,                                                                     │
│    633 │   │   │   │   response_model=response_model,                                            │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/routing.py:569 in add_api_route           │
│                                                                                                  │
│    566 │   │   current_generate_unique_id = get_value_or_default(                                │
│    567 │   │   │   generate_unique_id_function, self.generate_unique_id_function                 │
│    568 │   │   )                                                                                 │
│ ❱  569 │   │   route = route_class(                                                              │
│    570 │   │   │   self.prefix + path,                                                           │
│    571 │   │   │   endpoint=endpoint,                                                            │
│    572 │   │   │   response_model=response_model,                                                │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/routing.py:444 in __init__                │
│                                                                                                  │
│    441 │   │   │   │   0,                                                                        │
│    442 │   │   │   │   get_parameterless_sub_dependant(depends=depends, path=self.path_format),  │
│    443 │   │   │   )                                                                             │
│ ❱  444 │   │   self.body_field = get_body_field(dependant=self.dependant, name=self.unique_id)   │
│    445 │   │   self.app = request_response(self.get_route_handler())                             │
│    446 │                                                                                         │
│    447 │   def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:    │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/dependencies/utils.py:756 in              │
│ get_body_field                                                                                   │
│                                                                                                  │
│   753 │   │   alias="body",                                                                      │
│   754 │   │   field_info=BodyFieldInfo(**BodyFieldInfo_kwargs),                                  │
│   755 │   )                                                                                      │
│ ❱ 756 │   check_file_field(final_field)                                                          │
│   757 │   return final_field                                                                     │
│   758                                                                                            │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/dependencies/utils.py:111 in              │
│ check_file_field                                                                                 │
│                                                                                                  │
│   108 │   │   │   │   raise RuntimeError(multipart_incorrect_install_error) from None            │
│   109 │   │   except ImportError:                                                                │
│   110 │   │   │   logger.error(multipart_not_installed_error)                                    │
│ ❱ 111 │   │   │   raise RuntimeError(multipart_not_installed_error) from None                    │
│   112                                                                                            │
│   113                                                                                            │
│   114 def get_param_sub_dependant(                                                               │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

pip install python-multipart
```
2023-04-07 19:17:37 -04:00
cd1b350dae Merge branch 'main' into bugfix/release-updater 2023-04-07 18:56:21 -04:00
8334757af9 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-07 18:55:54 -04:00
7103ac6a32 Add python-multipart, which is needed by nodes 2023-04-07 19:43:42 +01:00
f6b131e706 remove vestiges of non-functional autoimport code for legacy checkpoints (#3076)
- the functionality to automatically import and run legacy checkpoint
files in a designated folder has been removed from the backend but there
are vestiges of the code remaining in the frontend that are causing
crashes.
- This fixes the problem.

- Closes #3075
2023-04-08 02:21:23 +12:00
d1b2b99226 Merge branch 'main' into bugfix/remove-autoimport-dead-code 2023-04-07 09:59:58 -04:00
e356f2511b chore: configure stale bot 2023-04-07 20:45:08 +10:00
e5f8b22a43 add a new method to model_manager that retrieves individual pipeline components (#3120)
This PR introduces a new set of ModelManager methods that enable you to
retrieve the individual parts of a stable diffusion pipeline model,
including the vae, text_encoder, unet, tokenizer, etc.

To use:

```
from invokeai.backend import ModelManager

manager = ModelManager('/path/to/models.yaml')

# get the VAE
vae = manager.get_model_vae('stable-diffusion-1.5')

# get the unet
unet = manager.get_model_unet('stable-diffusion-1.5')

# get the tokenizer
tokenizer = manager.get_model_tokenizer('stable-diffusion-1.5')

# etc etc
feature_extractor = manager.get_model_feature_extractor('stable-diffusion-1.5')
scheduler = manager.get_model_scheduler('stable-diffusion-1.5')
text_encoder = manager.get_model_text_encoder('stable-diffusion-1.5')

# if no model provided, then defaults to the one currently in GPU, if any
vae = manager.get_model_vae()
```
2023-04-07 01:39:57 -04:00
45b84fb4bb Merge branch 'main' into bugfix/remove-autoimport-dead-code 2023-04-07 17:07:25 +12:00
f022c89249 Merge branch 'main' into feat/return-submodels 2023-04-06 22:03:31 -04:00
ab05144716 Change where !replay looks for its infile (#3129)
!fetch puts its output file into the output directory; it may be
beneficial to have !replay look in the output directory as well.
2023-04-06 22:02:06 -04:00
aeb4914e67 Merge branch 'main' into replay-file_path 2023-04-06 21:45:23 -04:00
76bcd4d44f Fix typo (#3133)
'hotdot' to 'hotdog'; the world's least important PR :)
2023-04-07 12:38:05 +12:00
50f5e1bc83 Fix typo
'hotdot' to 'hotdog'; the world's least important PR :)
2023-04-06 16:47:57 -07:00
4c339dd4b0 refactor get_submodels() into individual methods 2023-04-06 17:08:23 -04:00
bc2b9500e3 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-06 15:38:46 -04:00
32857d81c5 prevent legacy CLI crash caused by removal of convert option
- Compensatory change to the CLI that prevents it from crashing
  when it tries to import a model.
- Bug introduced when the "convert" option was removed from the model
  manager.
2023-04-06 15:36:05 -04:00
7268131f57 change where !replay looks for its infile
!fetch puts its output file into the output directory; it may be beneficial to have !replay look in the output directory as well.
2023-04-06 08:14:11 -04:00
85b020f76c [nodes] Add latent nodes, storage, and fix iteration bugs (#3091)
* Add latents nodes.
* Fix iteration expansion.
* Add collection generator nodes, math nodes.
* Add noise node.
* Add some graph debug commands to the CLI.
* Fix negative id linking in CLI.
* Fix a CLI bug with multiple links per node.
2023-04-06 04:06:05 +00:00
a7833cc9a9 [api] Add models router and list model API. 2023-04-05 23:59:07 -04:00
28f75d80d5 Merge branch 'main' into bugfix/release-updater 2023-04-05 18:25:33 -04:00
919294e977 fix build-container.yml (#3117)
Add permission go write packages to GITHUB_TOKEN
2023-04-06 00:25:00 +02:00
b917ffa4d7 Merge branch 'main' into bugfix/release-updater 2023-04-05 17:37:27 -04:00
d44151d6ff add a new method to model_manager that retrieves individual pipeline parts
- New method is ModelManager.get_sub_model(model_name:str,model_part:SDModelComponent)

To use:

```
from invokeai.backend import ModelManager, SDModelComponent as sdmc
manager = ModelManager('/path/to/models.yaml')
vae = manager.get_sub_model('stable-diffusion-1.5', sdmc.vae)
```
2023-04-05 17:25:42 -04:00
7640acfb1f update build-container.yml
- add packages write permission
2023-04-05 15:44:26 +02:00
aed9ecef2a feat(nodes): add thumbnail generation to DiskImageStorage 2023-04-05 08:22:23 +10:00
18cddd7972 Right link on pytorch installer for linux rocm (#3084)
Right link on pytorch installer for linux rocm
2023-04-04 17:40:42 -04:00
e6b25f4ae3 Merge branch 'main' into patch-1 2023-04-04 17:40:12 -04:00
d1c0050e65 fix(nodes): fix typo in list_sessions handler (#3109)
The typo happened not to affect functionality; when `query==""`, it
`search()`ed but found everything due to the empty query, then paginated
the results, so it worked the same as `list()`.

Still worth fixing.
2023-04-03 21:24:48 -04:00
ecdfa136a0 fix(nodes): fix typo in list_sessions handler 2023-04-04 00:34:32 +10:00
5cd513ee63 [deps] bump compel version to fix crash on invalid (auto111) syntax (#3107)
Currently, if users input e.g. `happy (camper:0.3)`, it gets parsed
incorrectly, which causes crashes if it's in the negative prompt. Bumping
to compel 1.0.5 fixes the parser to avoid this (note the weight is
parsed as plain text; it's not converted to proper invoke syntax).
2023-04-04 02:30:17 +12:00
ab45086546 Merge branch 'main' into deps_bump_compel 2023-04-04 02:05:40 +12:00
77ba7359f4 fix(nodes): commit changes to db 2023-04-03 19:09:49 +10:00
8cbe2e14d9 bump compel version to fix crash on invalid (auto111) syntax 2023-04-03 10:37:01 +02:00
f682fb8040 fix invokeai-update script
- This commit fixes the update script to work again, as well as fixing
  the ambiguity between updating to a tag and updating to a branch.
2023-04-02 11:08:12 -04:00
ee86eedf01 Right link on pytorch installer for linux rocm
Right link on pytorch installer for linux rocm
2023-03-31 17:22:00 -03:00
1f89cf3343 remove vestiges of non-functional autoimport code for legacy checkpoints
- Closes #3075
2023-03-31 04:27:03 -04:00
c4e6511a59 Add support for yet another TI embedding format (main version) (#3050)
- This PR adds support for embedding files that contain a single key
"emb_params". The only example I know of this format is the
"EasyNegative" embedding on HuggingFace, but there are certainly others.

- This PR also adds support for loading embedding files that have been
saved in safetensors format.

- It also cleans up the code so that the logic of probing for and
selecting the right format parser is clear.

- This is the same as #3045, which is on the 2.3 branch.
2023-03-31 03:57:57 -04:00
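A hedged sketch of probing the embedding layouts mentioned above; the key names follow the description, everything else is assumed:

```python
import torch

def extract_embedding_tensors(path: str) -> dict:
    """Normalise the known embedding layouts to a {token: tensor} dict."""
    if path.endswith(".safetensors"):
        from safetensors.torch import load_file
        data = load_file(path)
    else:
        data = torch.load(path, map_location="cpu")

    if "string_to_param" in data:  # classic textual-inversion layout
        return dict(data["string_to_param"])
    if "emb_params" in data:       # single-key layout, e.g. "EasyNegative"
        return {"*": data["emb_params"]}
    # fall back to treating the file as a plain {token: tensor} mapping
    return {k: v for k, v in data.items() if isinstance(v, torch.Tensor)}
```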
44843be4c8 Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-30 23:16:52 -04:00
054e963bef add basic autocomplete functionality to node cli (#3035)
- Commands, invocations and their parameters will now autocomplete using
introspection.
- Two types of parameter *arguments* will also autocomplete:
  - --sampler_name  will autocomplete the scheduler name
  - --model will autocomplete the model name
- There don't seem to be commands for reading/writing image files yet,
so path autocompletion is not implemented
2023-03-30 08:25:36 -04:00
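A small sketch of readline-based completion in the spirit of this feature; the command and scheduler lists are placeholders for what the real code gathers via introspection:

```python
import readline

# Placeholder lists; the real CLI builds these by introspecting
# commands, invocations, and their parameters.
COMMANDS = ["!fetch", "!replay", "--sampler_name", "--model", "--steps"]
SCHEDULERS = ["ddim", "k_euler", "k_lms"]

def complete(text: str, state: int):
    if text.startswith("--sampler_name"):
        options = [f"--sampler_name={s}" for s in SCHEDULERS]
    else:
        options = COMMANDS
    matches = [o for o in options if o.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(complete)
readline.parse_and_bind("tab: complete")
```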
afb66a7884 Merge branch 'main' into feat/node-cli-autocompleter 2023-03-30 07:51:51 -04:00
b9df9e26f2 Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-30 07:51:23 -04:00
25ae36ceb5 I18n build mode (#3051)
Add build mode option to bundle english translation with UI
2023-03-29 22:26:45 -04:00
3ae8daedaa Merge branch 'main' into i18n-build-mode 2023-03-29 22:26:17 -04:00
e11c1d66ab handle multiple tokens and embeddings in single file 2023-03-29 22:05:06 -04:00
b913e1e11e improve importation and conversion of legacy checkpoint files (#3053)
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

## Model configuration file selection

To improve the user experience, the model manager's `heuristic_import()`
method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file provided, the method looks for a .yaml file in the
same directory as the model which bears the same basename. e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
The yaml file is then used as the configuration file for importation and
conversion.

3. If no such file is found, then the method opens up the checkpoint and
probes it to determine whether it is V1, V1-inpaint or V2. If it is a V1
format, then the appropriate v1-inference.yaml config file is used.
Unfortunately there are two V2 variants that cannot be distinguished by
introspection.

4. If the probe algorithm is unable to determine the model type, then
its last-ditch effort is to execute an optional callback function that
can be provided by the caller. This callback, named
`config_file_callback` receives the path to the legacy checkpoint and
returns the path to the config file to use. The CLI uses this to put up a
multiple-choice prompt to the user. The WebUI **could** use this to
prompt the user to choose from a radio-button selection.

5. If the config file cannot be determined, then the import is
abandoned.

## Custom VAE Selection

The user can attach a custom VAE to the imported and converted model by
copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file can
be deleted after conversion.

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.

Note that this is the same fix that was applied to the 2.3 branch in
#3043 . This applies to `main`.
2023-03-29 17:22:15 -04:00
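A condensed sketch of the five-step lookup order above; `probe_model_type` and the config table are stubs, not the actual model-manager internals:

```python
from pathlib import Path
from typing import Callable, Optional

V1_CONFIG = Path("configs/stable-diffusion/v1-inference.yaml")  # assumed location

def probe_model_type(checkpoint: Path) -> Optional[str]:
    return None  # stub; the real probe inspects the checkpoint's tensors

def find_legacy_config(
    checkpoint: Path,
    config_file: Optional[Path] = None,
    config_file_callback: Optional[Callable[[Path], Optional[Path]]] = None,
) -> Optional[Path]:
    if config_file:                            # 1. explicit config path wins
        return config_file
    sidecar = checkpoint.with_suffix(".yaml")  # 2. same-basename .yaml sidecar
    if sidecar.exists():
        return sidecar
    if probe_model_type(checkpoint) == "v1":   # 3. probe (the two V2 variants are ambiguous)
        return V1_CONFIG
    if config_file_callback:                   # 4. last-ditch: ask the caller
        return config_file_callback(checkpoint)
    return None                                # 5. abandon the import
```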
3c4b6d5735 Merge branch 'main' into enhance/heuristic-import-improvements 2023-03-29 16:54:43 -04:00
e6123eac19 Merge branch 'main' into i18n-build-mode 2023-03-29 05:33:14 -07:00
30ca25897e Fix bugs in online ckpt conversion of 2.0 models (#3057)
## Enable the on-the-fly conversion of models based on SD 2.0/2.1 into
diffusers

This commit fixes bugs related to the on-the-fly conversion and loading
of legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
on-the-fly using --ckpt_convert, generation would crash with a precision
incompatibility error. This problem has been found and fixed.
2023-03-28 23:34:53 -04:00
abaee6b9ed Merge branch 'main' into feat/node-cli-autocompleter 2023-03-28 23:32:10 -04:00
4d7c9e1ab7 Merge branch 'main' into bugfix/convert-2.0-models 2023-03-28 23:01:36 -04:00
cc5687f26c [nodes] downgrade fastapi+uvicorn to fix openapi schema 2023-03-28 22:53:20 -04:00
cdb3616dca Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-28 21:03:06 -04:00
78e76f26f9 Merge branch 'main' into i18n-build-mode 2023-03-28 11:04:32 -04:00
9a7580dedd fix bugs in online ckpt conversion of 2.0 models
This commit fixes bugs related to the on-the-fly conversion and loading of
legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
  on-the-fly using --ckpt_convert, generation would crash with a
  precision incompatibility error.
2023-03-28 00:17:20 -04:00
dc2da8cff4 Doc: updating ROCm version in documentation (#3041)
The Pytorch ROCm version in the documentation is outdated (`rocm5.2`),
which leads to errors during the installation of InvokeAI.

This PR updates the documentation with the latest Pytorch ROCm `5.4.2`
version.
2023-03-27 22:37:43 -04:00
019a9f0329 address change requests in PR
1. Prompt has changed to "invoke> ".
2. Function to initialize the autocompleter has been renamed "set_autocompleter()"
2023-03-27 12:20:24 -04:00
fe5d9ad171 improve importation and conversion of legacy checkpoint files
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

To improve the user experience, the model manager's
`heuristic_import()` method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file provided, the method looks for a .yaml file in the
same directory as the model which bears the same basename. e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
   The yaml file is then used as the configuration file for
   importation and conversion.

3. If no such file is found, then the method opens up the checkpoint
   and probes it to determine whether it is V1, V1-inpaint or V2.
   If it is a V1 format, then the appropriate v1-inference.yaml config
   file is used. Unfortunately there are two V2 variants that cannot be
   distinguished by introspection.

4. If the probe algorithm is unable to determine the model type, then its
   last-ditch effort is to execute an optional callback function that can
   be provided by the caller. This callback, named `config_file_callback`
   receives the path to the legacy checkpoint and returns the path to the
   config file to use. The CLI uses this to put up a multiple-choice prompt to
   the user. The WebUI **could** use this to prompt the user to choose
   from a radio-button selection.

5. If the config file cannot be determined, then the import is abandoned.

The user can attach a custom VAE to the imported and converted model
by copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file
can be deleted after conversion.

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.
2023-03-27 11:27:45 -04:00
dbc0093b31 Merge remote-tracking branch 'origin' into i18n-build-mode 2023-03-27 10:57:41 -04:00
92e512b8b6 add package mode option for i18next 2023-03-27 10:49:52 -04:00
abe4dc8ac1 Add support for yet another textual inversion embedding format
- This PR adds support for embedding files that contain a single key
  "emb_params". The only example I know of this format is the
  "EasyNegative" embedding on HuggingFace, but there are certainly
  others.

- This PR also adds support for loading embedding files that have been
  saved in safetensors format.

- It also cleans up the code so that the logic of probing for and
  selecting the right format parser is clear.
2023-03-27 09:39:03 -04:00
dc14701d20 Merge branch 'main' into feat/node-cli-autocompleter 2023-03-26 23:46:10 -04:00
737e0f3085 doc: fixing error in rocm version 2023-03-26 12:40:20 +02:00
81b7ea4362 doc: updating ROCm version for pip install 2023-03-26 12:32:12 +02:00
09dfde0ba1 fix(ui): fix viewer tooltip localisation strings (#3037)
fixes #2923
2023-03-26 20:35:52 +13:00
3ba7e966b5 Merge branch 'main' into fix/ui/viewer-localisation 2023-03-26 20:35:12 +13:00
a1cd4834d1 nodes: add cancelation, updated progress callback, typing fixes (#3036)
keeping `main` up to date with my api nodes branch:
- bd7e515290: [nodes] Add cancelation to
the API @Kyle0654
- 5fe38f7: fix(backend): simple typing fixes
  - just picking some low-hanging fruit to improve IDE hinting
- c34ac91: fix(nodes): fix cancel; fix callback for img2img, inpaint
- makes nodes cancel immediate, use fix progress images on nodes, fix
callbacks for img2img/inpaint
- 4221cf7: fix(nodes): fix schema generation for output classes
- did this previously for some other class; needed to not have node
outputs be optional
2023-03-26 20:34:27 +13:00
a724038dc6 fix(ui): fix viewer tooltip localisation strings
fixes #2923
2023-03-26 17:43:00 +11:00
4221cf7731 fix(nodes): fix schema generation for output classes
All output classes need to have their properties flagged as `required` for the schema generation to work as needed.
2023-03-26 17:20:10 +11:00
c34ac91ff0 fix(nodes): fix cancel; fix callback for img2img, inpaint 2023-03-26 17:07:40 +11:00
5fe38f7c88 fix(backend): simple typing fixes 2023-03-26 17:07:03 +11:00
bd7e515290 [nodes] Add cancelation to the API 2023-03-26 15:47:32 +11:00
076fac07eb feat[web]: use the predicted denoised image for previews (#2915)
Some schedulers report not only the noisy latents at the current
timestep, but also their estimate so far of what the de-noised latents
will be.

It makes for a more legible preview than the noisy latents do.

I think this is a huge improvement, but there are a few considerations:
- Need to not spook @JPPhoto by changing how previews look.
- Some schedulers (most notably **DPM Solver++**) don't provide this
data, and it falls back to the current behavior there. That's not
terrible, but seeing such a big difference in how _previews_ look from
one scheduler to the next might mislead people into thinking there's a
bigger difference in their overall effectiveness than there really is.

My fear of configuration-option-overwhelm leaves me inclined to _not_
add a configuration option for this, but we could.
2023-03-26 00:29:00 -04:00
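The mechanism described above, sketched against the diffusers scheduler API (the surrounding loop and `step_callback` are illustrative; `pred_original_sample` is the field some scheduler outputs populate):

```python
# inside the denoising loop
step_output = scheduler.step(noise_pred, t, latents)
latents = step_output.prev_sample

# some schedulers also estimate the fully de-noised latents at this step;
# others (e.g. DPM Solver++) don't, so fall back to the noisy latents
preview = getattr(step_output, "pred_original_sample", None)
if preview is None:
    preview = latents
step_callback(preview)
```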
9348161600 add basic autocomplete functionality to node cli
- Commands, invocations and their parameters will now autocomplete
  using introspection.
- Two types of parameter *arguments* will also autocomplete:
  - --sampler_name  will autocomplete the scheduler name
  - --model will autocomplete the model name
- There don't seem to be commands for reading/writing image files yet, so
  path autocompletion is not implemented
2023-03-26 00:24:27 -04:00
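The standard-library shape of this kind of completion looks roughly like the following (a sketch; the real completer builds its candidate list by introspection rather than from a fixed list):

```python
import readline

COMPLETIONS = ["help", "exit", "--sampler_name", "--model"]  # illustrative only

def completer(text: str, state: int):
    matches = [c for c in COMPLETIONS if c.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(completer)
readline.parse_and_bind("tab: complete")
```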
dac3c158a5 Merge branch 'main' into feat/preview_predicted_x0
- resolve conflicts with generate.py invocation
- remove unused symbols that pyflakes complains about
- add **untested** code for passing intermediate latent image to the
  step callback in the format expected.
2023-03-25 16:07:18 -04:00
17d8bbf330 ask for escalated privileges in push workflows 2023-03-25 15:22:25 -04:00
9344687a56 installer: fix indentation in invoke.sh template (tabs -> spaces) 2023-03-25 13:57:09 -04:00
cf534d735c duplicate of PR #3016, but based on main 2023-03-25 13:57:09 -04:00
501924bc60 do not reexport PipelineIntermediateState 2023-03-25 13:57:09 -04:00
d117251747 make step_callback work again in generate() call
This PR fixes #2951 and restores the step_callback argument in the
refactored generate() method. Note that this issue states that
"something is still wrong because steps and step are zero." However,
I think this is confusion over the call signature of the callback, which
since the diffusers merge has been `callback(state:PipelineIntermediateState)`

This is the test script that I used to determine that `step` is being passed
correctly:

```
from pathlib import Path
from invokeai.backend import ModelManager, PipelineIntermediateState
from invokeai.backend.globals import global_config_dir
from invokeai.backend.generator import Txt2Img

def my_callback(state:PipelineIntermediateState, total_steps:int):
    print(f'callback(step={state.step}/{total_steps})')

def main():
    manager = ModelManager(Path(global_config_dir()) / "models.yaml")
    model = manager.get_model('stable-diffusion-1.5')
    print('=== TXT2IMG TEST ===')
    steps=30
    output = next(Txt2Img(model).generate(prompt='banana sushi',
                                          iterations=None,
                                          steps=steps,
                                          step_callback=lambda x: my_callback(x,steps)
                                          )
                  )
    print(f'image={output.image}, seed={output.seed}, steps={output.params.steps}')

if __name__=='__main__':
    main()
```
2023-03-25 13:57:09 -04:00
6ea61a8486 fix issue with embeddings being loaded twice (#3029)
This bug was causing a bunch of annoying warnings about not overwriting
previously loaded tokens.

- as noted by JPPhoto
2023-03-26 04:45:20 +13:00
e4d903af20 Merge branch 'main' into bugfix/load-embeddings-once 2023-03-26 04:19:43 +13:00
2d9797da35 (fix)[docs] Fixed snippet/code formatting (#2918)
It was pasted as plain text, now it's a code fence.
2023-03-25 10:49:13 -04:00
07ea806553 Merge branch 'main' into patch-1 2023-03-25 10:48:25 -04:00
5ac0316c62 fix issue with embeddings being loaded twice
- as noted by JPPhoto
2023-03-25 10:45:03 -04:00
9536ba22af Convert custom VAEs during legacy checkpoint loading (#3010)
- When a legacy checkpoint model is loaded via --convert_ckpt and its
models.yaml stanza refers to a custom VAE path (using the 'vae:' key),
the custom VAE will be converted and used within the diffusers model.
Otherwise the VAE contained within the legacy model will be used.
    
- Note that the checkpoint import functions in the CLI or Web UIs
continue to default to the standard stabilityai/sd-vae-ft-mse VAE. This
can be fixed after the fact by editing VAE key using either the CLI or
Web UI.
   
- Fixes issue #2917
2023-03-25 00:37:12 -04:00
5503749085 Merge branch 'main' into feat/use-custom-vaes 2023-03-25 17:09:38 +13:00
9bfe2fa371 add github API token to mkdocs workflow (#3023)
The mkdocs-workflow has been failing over the past week due to
permission denied errors. I *think* this is the result of not passing
the GitHub API token to the workflow, and this is a speculative fix for
the issue.
2023-03-24 17:59:53 -04:00
d8ce6e4426 Merge branch 'bugfix/mkdocs-workflow' of github.com:invoke-ai/InvokeAI into bugfix/mkdocs-workflow 2023-03-24 17:58:16 -04:00
43d2d6d98c add blessedcoolant as backup to mauwii codeowner 2023-03-24 17:58:03 -04:00
64c233efd4 Merge branch 'main' into bugfix/mkdocs-workflow 2023-03-24 17:47:14 -04:00
2245a4e117 doc(readme): fix incorrect install command (#3024)
Hi, I am trying to install InvokeAI on my Linux machine; the command in
README.md does not install the correct dependency.
2023-03-24 17:46:58 -04:00
9ceec40b76 Merge branch 'main' into feat/use-custom-vaes 2023-03-24 17:45:02 -04:00
0f13b90059 doc(readme): fix incorrect install command 2023-03-24 23:21:51 +08:00
d91fc16ae4 add github API token to mkdocs workflow 2023-03-24 09:17:30 -04:00
bc01a96f9d re-implement model scanning when loading legacy checkpoint files (#3012)
- This PR turns on pickle scanning before a legacy checkpoint file is
loaded from disk within the checkpoint_to_diffusers module.

- Also miscellaneous diagnostic message cleanup.

- See also #3011 for a similar patch to the 2.3 branch.
2023-03-24 08:57:07 -04:00
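For background, pickle scanning generally means walking the pickle opcode stream and flagging opcodes that can import or call arbitrary objects. A minimal standard-library sketch (not the scanner InvokeAI uses; note that torch checkpoints wrap their pickle inside a zip archive, so a real scanner unpacks that first):

```python
import pickletools

# opcodes that can pull in or invoke arbitrary Python objects
SUSPECT = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle(path: str) -> list[str]:
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name in SUSPECT:
                findings.append(f"{opcode.name} {arg!r}")
    return findings
```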
85b2822f5e Merge branch 'main' into security/scan-ckpt-files-main 2023-03-24 08:39:59 -04:00
c33d8694bb build: do not run python tests on ui build (#2987)
`invokeai/frontend/web/dist/**` should not be triggering the full test
suite.
2023-03-25 00:54:40 +13:00
685bd027f0 Merge branch 'main' into build/no-test-on-ui-build 2023-03-25 00:51:26 +13:00
f592d620d5 ui: translations update from weblate (#3021)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-24 19:25:17 +11:00
Tom
2b127b73ac translationBot(ui): update translation (French)
Currently translated at 82.7% (417 of 504 strings)

Co-authored-by: Tom <tom.fouthier@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
8855902cfe translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (504 of 504 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (501 of 501 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
9d8ddc6a08 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
4ca5189e73 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (504 of 504 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (501 of 501 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (500 of 500 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
873597cb84 Allow loading all types of dreambooth models - Fix issue #2932 (#2933)
Allows loading models with EMA weights stored under `model_ema.diffusion_model.xxxx`
or `model_ema.xxxx` keys.

Fixes #2932
2023-03-23 23:40:04 -04:00
44d742f232 Merge branch 'main' into security/scan-ckpt-files-main 2023-03-23 23:33:49 -04:00
6e7dbf99f3 Merge branch 'main' into bugfix/dreambooth_ema 2023-03-23 23:24:15 -04:00
1ba1076888 Tidy up Tests and Provide Documentation (#2869)
A bit of basic housekeeping and documentation explaining how to get a
local development environment running (including the tests).
2023-03-23 23:23:20 -04:00
cafa108f69 Merge branch 'main' into tests 2023-03-23 23:22:27 -04:00
deeff36e16 Merge branch 'main' into security/scan-ckpt-files-main 2023-03-23 23:20:52 -04:00
d770b14358 [deps] upgrade compel for better .swap defaults and a bugfix (#3014) 2023-03-23 19:01:12 -04:00
20414ba4ad Merge branch 'main' into deps_upgrade_compel 2023-03-23 18:38:46 -04:00
92721a1d45 do not reexport PipelineIntermediateState 2023-03-24 09:32:47 +11:00
f329fddab9 make step_callback work again in generate() call
This PR fixes #2951 and restores the step_callback argument in the
refactored generate() method. Note that this issue states that
"something is still wrong because steps and step are zero." However,
I think this is confusion over the call signature of the callback, which
since the diffusers merge has been `callback(state:PipelineIntermediateState)`

This is the test script that I used to determine that `step` is being passed
correctly:

```
from pathlib import Path
from invokeai.backend import ModelManager, PipelineIntermediateState
from invokeai.backend.globals import global_config_dir
from invokeai.backend.generator import Txt2Img

def my_callback(state:PipelineIntermediateState, total_steps:int):
    print(f'callback(step={state.step}/{total_steps})')

def main():
    manager = ModelManager(Path(global_config_dir()) / "models.yaml")
    model = manager.get_model('stable-diffusion-1.5')
    print('=== TXT2IMG TEST ===')
    steps=30
    output = next(Txt2Img(model).generate(prompt='banana sushi',
                                          iterations=None,
                                          steps=steps,
                                          step_callback=lambda x: my_callback(x,steps)
                                          )
                  )
    print(f'image={output.image}, seed={output.seed}, steps={output.params.steps}')

if __name__=='__main__':
    main()
```
2023-03-24 09:32:47 +11:00
f2efde27f6 load embeddings after a ckpt legacy model is converted to diffusers (#3013)
This PR corrects a bug in which embeddings were not being applied when a
non-diffusers model was loaded.

- Fixes #2954
- Also improves diagnostic reporting during embedding loading.
2023-03-23 18:10:19 -04:00
02c58f22be upgrade compel for better .swap defaults and a bugfix 2023-03-23 22:34:54 +01:00
f751dcd245 load embeddings after a ckpt legacy model is converted to diffusers
- Fixes #2954
- Also improves diagnostic reporting during embedding loading.
2023-03-23 15:21:58 -04:00
a97107bd90 handle VAEs that do not have a "state_dict" key 2023-03-23 15:11:29 -04:00
b2ce45a417 re-implement model scanning when loading legacy checkpoint files
- This PR turns on pickle scanning before a legacy checkpoint file
  is loaded from disk within the checkpoint_to_diffusers module.

- Also miscellaneous diagnostic message cleanup.
2023-03-23 15:03:30 -04:00
4e0b5d85ba convert custom VAEs into diffusers
- When a legacy checkpoint model is loaded via --convert_ckpt and its
  models.yaml stanza refers to a custom VAE path (using the 'vae:'
  key), the custom VAE will be converted and used within the diffusers
  model. Otherwise the VAE contained within the legacy model will be
  used.

- Note that the heuristic_import() method, which imports arbitrary
  legacy files on disk and URLs, will continue to default to the
  the standard stabilityai/sd-vae-ft-mse VAE. This can be fixed after
  the fact by editing the models.yaml stanza using the Web or CLI
  UIs.

- Fixes issue #2917
2023-03-23 13:14:19 -04:00
a958ae5e29 Merge branch 'main' into feat/use-custom-vaes 2023-03-23 10:32:56 -04:00
4d50fbf8dc Merge branch 'main' into build/no-test-on-ui-build 2023-03-23 01:08:24 +13:00
485f6e5954 Export more for header (#2996)
* export more items needed for dynamic header
* remove build mode that is no longer needed
2023-03-23 01:07:16 +13:00
1f6ce838ba Merge branch 'main' into export-more-for-header 2023-03-22 07:49:15 -04:00
0dc5773849 [nodes] Update fastapi packages to latest (except FastAPI, which has an annotation bug in the newest version) (#3004) 2023-03-22 19:12:45 +13:00
bc347f749c [nodes] Update fastapi packages to latest (except FastAPI, which has an annotation bug in the newest version) 2023-03-21 19:45:17 -07:00
1b215059e7 Merge branch 'main' into export-more-for-header 2023-03-21 16:29:53 -04:00
db079a2733 remove unneeded build:package code 2023-03-21 10:29:27 -04:00
26f71d3536 change back 2023-03-21 10:28:29 -04:00
eb7ae2588c unused var 2023-03-21 10:21:58 -04:00
278c14ba2e try jsx.element 2023-03-21 10:18:38 -04:00
74e83dda54 update type 2023-03-21 10:10:48 -04:00
28c1fca477 Merge branch 'main' into build/no-test-on-ui-build 2023-03-20 02:21:40 +13:00
1f0324102a chore(ui): build 2023-03-19 23:16:29 +11:00
a782ad092d feat(ui): localise iaialertdialog defaults 2023-03-19 23:16:29 +11:00
eae4eb419a fix(ui): popovers trigger on click (accessibility) 2023-03-19 23:16:29 +11:00
fb7f38f46e fix(ui): make alertdialogs centered 2023-03-19 23:16:29 +11:00
93d0cae455 fix(ui): fix alertdialogs closing immediately 2023-03-19 23:16:29 +11:00
35f6b5d562 fix(ui): make invoketabs not lazy 2023-03-19 23:16:29 +11:00
2aefa06ef1 fix(ui): Clean up manual add forms 2023-03-19 23:16:29 +11:00
5906888477 feat(ui): add current image loading fallback 2023-03-19 23:16:29 +11:00
f22c7d0da6 feat(ui): add more w/h options 2023-03-19 23:16:29 +11:00
93b38707b2 feat(ui): tidy up model manager styling
fixes #2970
2023-03-19 23:16:29 +11:00
6ecf53078f fix(ui): Misalignment of model search entries 2023-03-19 23:16:29 +11:00
9c93b7cb59 build: do not run python tests on ui build
`invokeai/frontend/web/dist/**` should not be triggering the full test suite.
2023-03-19 23:01:30 +11:00
7789e8319c Fix some text and a link (#2910)
- Fix link to `070_INSTALL_XFORMERS.md`.
- Fix some spelling.
2023-03-19 05:55:18 +13:00
7d7a28beb3 Merge branch 'main' into main-text-fixup-PR 2023-03-18 09:54:41 -07:00
27a113d872 nodes: api fixes (#2959)
- 86932469e76f1315ee18bfa2fc52b588241dace1 add image_to_dataURL util
- 0c2611059711b45bb6142d30b1d1343ac24268f3 make fast latents method
static
- this method doesn't really need `self` and should be able to be called
without instantiating `Generator`
- 2360bfb6558ea511e9c9576f3d4b5535870d84b4 fix schema gen for
GraphExecutionState
- `GraphExecutionState` uses `default_factory` in its fields; the result
is that the OpenAPI schema marks those fields as optional, which propagates
to the generated API client, which means we need a lot of unnecessary
type guards to use this data type. The [simple
fix](https://github.com/pydantic/pydantic/discussions/4577) is to add
config to explicitly say all class properties are required. It looks like
this will be resolved in a future pydantic release
- 3cd7319cfdb0f07c6bb12d62d7d02efe1ab12675 fix step callback and fast
latent generation on nodes. have this working in UI. depends on the
small change in #2957
2023-03-16 20:24:28 +11:00
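The "simple fix" mentioned above, sketched with pydantic v1's `Config.schema_extra` hook (class and field names illustrative):

```python
from pydantic import BaseModel, Field

class ExampleState(BaseModel):
    # default_factory fields get marked optional in the generated OpenAPI schema...
    executed: list[str] = Field(default_factory=list)

    class Config:
        # ...so explicitly declare every property as required
        @staticmethod
        def schema_extra(schema: dict, model) -> None:
            schema["required"] = list(model.__fields__.keys())
```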
67f8f222d9 fix(nodes): fix step_callback + fast latents generation
this depends on the small change in #2957
2023-03-16 20:03:08 +11:00
5347c12fed fix(nodes): fix schema gen for GraphExecutionState 2023-03-16 20:03:08 +11:00
b194180f76 feat(backend): make fast latents method static 2023-03-16 20:03:08 +11:00
fb30b7d17a feat(backend): add image_to_dataURL util 2023-03-16 20:03:08 +11:00
c341dcaa3d update compel to fix black screens and use new downweighting algorithm (#2961)
Update `compel` to 1.0.0.

This fixes #2832.

It also changes the way downweighting is applied. In particular,
downweighting should now be much better and more controllable.

From the [compel
changelog](https://github.com/damian0815/compel#changelog):

> Downweighting now works by applying an attention mask to remove the
downweighted tokens, rather than literally removing them from the
sequence. This behaviour is the default, but the old behaviour can be
re-enabled by passing `downweight_mode=DownweightMode.REMOVE` on init of
the `Compel` instance.
>
> Formerly, downweighting a token worked by both multiplying the
weighting of the token's embedding, and doing an inverse-weighted blend
with a copy of the token sequence that had the downweighted tokens
removed. The intuition is that as weight approaches zero, the tokens
being downweighted should be actually removed from the sequence.
However, removing the tokens resulted in the positioning of all
downstream tokens becoming messed up. The blend ended up blending a lot
more than just the tokens in question.
> 
> As of v1.0.0, taking advice from @keturn and @bonlime
(https://github.com/damian0815/compel/issues/7) the procedure is by
default different. Downweighting still involves a blend but what is
blended is a version of the token sequence with the downweighted tokens
masked out, rather than removed. This correctly preserves positioning
embeddings of the other tokens.
2023-03-16 17:49:47 +13:00
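In terms of the compel API quoted above, opting back into the old behaviour looks roughly like this (a sketch; `pipe` is assumed to be a loaded diffusers pipeline):

```python
from compel import Compel, DownweightMode

# new default: downweighted tokens are masked out of the attention
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)
conditioning = compel.build_conditioning_tensor("a (cat)0.3 on a sofa")

# pre-1.0 behaviour: downweighted tokens literally removed from the sequence
compel_legacy = Compel(
    tokenizer=pipe.tokenizer,
    text_encoder=pipe.text_encoder,
    downweight_mode=DownweightMode.REMOVE,
)
```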
b695a2574b bump compel version 2023-03-16 01:55:39 +01:00
aa68a326c8 update compel 2023-03-15 23:05:55 +01:00
c2922d5991 add settingsmodal 2023-03-15 16:12:51 -04:00
85888030c3 more things needed for header 2023-03-15 14:38:22 -04:00
7cf59c1e60 Merge branch 'main' into main-text-fixup-PR 2023-03-16 04:43:22 +13:00
9738b0ff69 [nodes] Add Edge data type (#2958)
Adds an `Edge` data type, replacing the current tuple used for edges.
2023-03-15 18:41:56 +11:00
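For context, an `Edge` type of this kind would look something like the following (a sketch of the shape, not necessarily the exact definition):

```python
from pydantic import BaseModel

class EdgeConnection(BaseModel):
    node_id: str  # id of the node on this end of the edge
    field: str    # name of the input/output field being connected

class Edge(BaseModel):
    source: EdgeConnection
    destination: EdgeConnection
```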
3021c78390 [nodes] Add Edge data type 2023-03-14 23:09:30 -07:00
6eeaf8d9fb Allow for dynamic header (#2955)
* Update root component to allow optional children that will render as
dynamic header of UI
* Export additional components (logo & themeChanger) for use in said
dynamic header (more to come here)
2023-03-15 07:41:24 +13:00
fa9afec0c2 fix npm deps 2023-03-14 14:15:03 -04:00
d6862bf8c1 fix npm deps 2023-03-14 14:14:16 -04:00
de01c38bbe fresh build 2023-03-14 14:11:42 -04:00
7e811908e0 remove 2023-03-14 14:09:16 -04:00
5f59f24f92 cleanup 2023-03-14 14:08:42 -04:00
e414fcf3fb bump version 2023-03-14 13:26:49 -04:00
079ad8f35a fix props 2023-03-14 13:22:57 -04:00
a4d7e0c78e export other components 2023-03-14 12:37:28 -04:00
e9c2f173c5 fix(inpaint): Seam painting being broken (#2952)
After #2942, seed needs to be passed down from inpaint to seam_paint.
Not doing so breaks inpainting and outpainting. This PR fixes it.
2023-03-15 00:38:26 +13:00
44f489d581 Merge branch 'main' into fix-seampaint 2023-03-14 06:19:25 -05:00
cb48bbd806 Removed file-extension-based arbitrary code execution attack vector (#2946)
# The Problem
Pickle files (.pkl, .ckpt, etc) are extremely unsafe as they can be
trivially crafted to execute arbitrary code when parsed using
`torch.load`
Right now the conventional wisdom among ML researchers and users is to
simply `not run untrusted pickle files ever` and instead only use
Safetensor files, which cannot be injected with arbitrary code. This is
very good advice.

Unfortunately, **I have discovered a vulnerability inside of InvokeAI
that allows an attacker to disguise a pickle file as a safetensor and
have the payload execute within InvokeAI.**

# How It Works
Within `model_manager.py` and `convert_ckpt_to_diffusers.py` there are
if-statements that decide which `load` method to use based on the file
extension of the model file. The logic (written in a slightly more
readable format than it exists in the codebase) is as follows:
```
if Path(file).suffix == '.safetensors':
    safetensor_load(file)
else:
    unsafe_pickle_load(file)
```

A malicious actor would only need to create an infected .ckpt file, and
then rename the extension to something that does not pass the `==
'.safetensors'` check, but still appears to a user to be a safetensors
file.
For example, this might be something like `.Safetensors`,
`.SAFETENSORS`, `SafeTensors`, etc.

InvokeAI will happily import the file in the Model Manager and execute
the payload.

# Proof of Concept
1. Create a malicious pickle file.
(https://gist.github.com/CodeZombie/27baa20710d976f45fb93928cbcfe368)
2. Rename the `.ckpt` extension to some variation of `.Safetensors`,
ensuring there is a capital letter anywhere in the extension (eg.
`malicious_pickle.SAFETENSORS`)
3. Import the 'model' like you would normally with any other safetensors
file with the Model Manager.
4. Upon trying to select the model in the web UI, it will be loaded (or an
attempt will be made to convert it to diffusers) with `torch.load`, and the
payload will execute.


![image](https://user-images.githubusercontent.com/466103/224835490-4cf97ff3-41b3-4a31-85df-922cc99042d2.png)


# The Fix
This pull request changes the logic InvokeAI uses to decide which model
loader to use so that the safe behavior is the default. Instead of
loading as a pickle if the extension is not exactly `.safetensors`, it
will now **always** load as a safetensors file unless the extension is
**exactly** `.ckpt`.
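In sketch form, using the same illustrative names as the snippet above, the reversed logic is:

```python
# safe by default: only the exact '.ckpt' extension opts into pickle loading
if Path(file).suffix == '.ckpt':
    unsafe_pickle_load(file)
else:
    safetensor_load(file)
```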

# Notes:
I think support for pickle files should be totally dropped ASAP as a
matter of security, but I understand that there are reasons this would
be difficult.

In the meantime, I think `RestrictedUnpickler` or something similar
should be implemented as a replacement for `torch.load`, as this
significantly reduces the amount of Python methods that an attacker has
to work with when crafting malicious payloads
inside a pickle file. 
Automatic1111 already uses this with some success.
(https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/safe.py)
2023-03-15 00:09:17 +13:00
0a761d7c43 fix(inpaint): Seam painting being broken 2023-03-15 00:00:08 +13:00
a0f47aa72e Merge branch 'main' into main 2023-03-14 11:41:29 +01:00
f9abc6fc85 fix --png_compression command line argument (#2950)
- The value of png_compression was always 6, despite the value provided
to the --png_compression argument. This fixes the bug.
- It also fixes an inconsistency between the maximum range of
png_compression and the help text.

- Closes #2945
2023-03-14 18:20:17 +13:00
d840c597b5 fix --png_compression command line argument
- The value of png_compression was always 6, despite the value provided to the
  --png_compression argument. This fixes the bug.
- It also fixes an inconsistency between the maximum range of png_compression
  and the help text.

- Closes #2945
2023-03-14 00:24:05 -04:00
3ca654d256 speculative fix for alternative vaes 2023-03-13 23:27:29 -04:00
e0e01f6c50 Reduced Pickle ACE attack surface
Prior to this commit, all models would be loaded with the extremely unsafe `torch.load` method, except those with the exact extension `.safetensors`. Even a change in casing (e.g. `saFetensors`, `Safetensors`, etc.) would cause the file to be loaded with `torch.load` instead of the much safer `safetensors.torch.load_file`.
If a malicious actor renamed an infected `.ckpt` to something like `.SafeTensors` or `.SAFETENSORS`, an unsuspecting user would think they were loading a safe `.safetensors` file, but would in fact be parsing an unsafe pickle file and executing an attacker's payload. This commit fixes this vulnerability by reversing the loading-method decision logic to only use the unsafe `torch.load` when the file extension is exactly `.ckpt`.
2023-03-13 16:16:30 -04:00
d9dab1b6c7 Update BUG_REPORT.yml 2023-03-13 11:17:38 -04:00
3b2ef6e1a8 Update BUG_REPORT.yml 2023-03-13 11:14:53 -04:00
c125a3871a Update BUG_REPORT.yml 2023-03-13 11:14:04 -04:00
0996bd5acf Merge branch 'main' into patch-1 2023-03-14 03:18:58 +13:00
ea77d557da add additional build mode (#2904)
* `yarn build:package` will build the default component
* moved some devDependencies that are needed for the postinstall script
into dependencies
2023-03-14 03:15:51 +13:00
1b01161ea4 Merge branch 'main' into pr/2904 2023-03-14 03:14:35 +13:00
2230cb9562 chore(UI, accessibility): Icons. Header links & radio button (#2935)
# Overview
- Links should be parent of icon
- _Added style to link still so they still line up with sibling
components_
- Radio icon buttons
2023-03-14 03:13:19 +13:00
9e0c7c46a2 Merge branch 'main' into add-a-build-config 2023-03-13 09:58:17 -04:00
be305588d3 merged and rebuilt 2023-03-13 09:55:56 -04:00
9f994df814 Merge branch 'main' into chore/UI_more-accessibility-items 2023-03-14 02:49:47 +13:00
3062580006 Fix bug #2931 (#2942)
#2931 was caused by new code that held onto the PRNG in `get_make_image`
and used it in `make_image` for img2img and inpainting. This
functionality has been moved elsewhere so that we can generate multiple
images again.
2023-03-14 02:48:07 +13:00
596ba754b1 Removed seed from get_make_image. 2023-03-13 08:15:46 -05:00
b980e563b9 Fix bug #2931 2023-03-13 08:11:09 -05:00
7fe2606cb3 [nodes] Fixes calls into image to image and inpaint from nodes (#2940) 2023-03-13 19:05:32 +13:00
0c3b1fe3c4 [nodes] Fixes calls into image to image and inpaint from nodes 2023-03-12 22:12:42 -07:00
c9ee2e351c yarn build 2023-03-12 23:29:29 -05:00
e3aef20f42 chore(UI, accessibility): more items
- radio icon buttons
- links should be parent of icon
styled links to still line up with sibling components
2023-03-12 23:27:47 -05:00
60614badaf [nodes-api] Fix API generation to correctly reference outputs (#2939)
Correctly reference output types in node schemas
2023-03-13 17:02:55 +13:00
288cee9611 Merge remote-tracking branch 'origin/main' into feat/preview_predicted_x0
# Conflicts:
#	invokeai/app/invocations/generate.py
2023-03-12 20:56:02 -07:00
24aca37538 Just set output value in node schemas. Don't use additionalProperties, which would impact the schema. 2023-03-12 20:40:29 -07:00
b853ceea65 [nodes-api] Fix API generation to correctly reference outputs 2023-03-12 20:03:26 -07:00
3ee2798ede [fix] Get the model again if current model is empty 2023-03-12 22:26:11 -04:00
5c5106c14a Add keys when non EMA 2023-03-12 16:22:22 -05:00
c367b21c71 Fix issue #2932 2023-03-12 15:40:33 -05:00
2eef6df66a [ui]: add resizable pinnable drawer component (#2874)
wip

this is based off the branch in #2873
2023-03-12 22:46:48 +13:00
300aa8d86c chore(ui): build 2023-03-12 20:13:58 +11:00
727f1638d7 chore(ui): lint 2023-03-12 20:13:58 +11:00
ee6df5852a fix(ui): fix lightbox 2023-03-12 20:13:38 +11:00
90525b1c43 fix(ui): fix scrollable shadow 2023-03-12 20:13:38 +11:00
bbb95dbc5b fix(ui): add color mode watcher 2023-03-12 20:13:38 +11:00
f4b7f80d59 fix(ui): remove key prop 2023-03-12 20:13:38 +11:00
220f7373c8 feat(ui): Basic IAIOption Component & Fix Select Dropdown 2023-03-12 20:13:38 +11:00
4bb5785f29 fix(ui): Move Form Components to the correct folder 2023-03-12 20:13:38 +11:00
f9a7a7d161 fix(ui): set colorMode to fix native selects 2023-03-12 20:13:38 +11:00
de94c780d9 fix(ui): fix canvas status text bg 2023-03-12 20:13:38 +11:00
0b9230380c fix(ui): default gallery category buttons to icon 2023-03-12 20:13:38 +11:00
209a55b681 fix(ui): canvas rescale when toggle gallery 2023-03-12 20:13:38 +11:00
dc2f69f5d1 fix(ui): process buttons display on canvas beta 2023-03-12 20:13:38 +11:00
ad2f1b7b36 fix(ui): hack for hiding pinned panels 2023-03-12 20:13:38 +11:00
dd2d96a50f fix(ui): Bad styling on form elements 2023-03-12 20:13:38 +11:00
2bff28e305 fix(ui): Remove size limitation off the theme changer button 2023-03-12 20:13:38 +11:00
d68234d879 fix(ui): Gallery placeholder text not being centered 2023-03-12 20:13:38 +11:00
b3babf26a5 fix(ui): Fix current image buttons overflow 2023-03-12 20:13:38 +11:00
ecca0eff31 fix(ui): hotkey accordion spacing 2023-03-12 20:13:38 +11:00
28677f9621 fix(ui): process buttons display on canvas beta layout 2023-03-12 20:13:38 +11:00
caecfadf11 fix(ui): fix shadow 2023-03-12 20:13:38 +11:00
5cf8e3aa53 chore(ui): build 2023-03-12 20:13:38 +11:00
76cf2c61db feat(ui): drawer almost done
TODO:
- hide while pinned
- lightbox interaction with gallery
2023-03-12 20:13:38 +11:00
b4d976f2db fix(ui): fix flash of mini preview image
Restored the code that fixes this after having ripped it out thinking it didn't do anything. Spotted in #2915
2023-03-12 20:13:38 +11:00
777d127c74 feat(ui): wip drawer component and build 2023-03-12 20:13:38 +11:00
0678803803 lang(ui): update show canvas debug info string 2023-03-12 20:13:37 +11:00
d2fbc9f5e3 feat(ui): Add ThemeTypes & Move Grid Line Color 2023-03-12 20:13:37 +11:00
d81088dff7 feat(ui): wip resizable pinnable drawer
fix(ui): remove old scrollbar css

fix(ui): make guidepopover lazy

feat(ui): wip resizable drawer

feat(ui): wip resizable drawer

feat(ui): add scroll-linked shadow

feat(ui): organize files

Align Scrollbar next to content

Move resizable drawer underneath the progress bar

Add InvokeLogo to unpinned & align

Adds Invoke Logo to Unpinned Parameters panel and aligns to make it feel seamless.
2023-03-12 20:13:37 +11:00
1aaad9336f Remove image generation node dependencies on generate.py (#2902)
# Remove node dependencies on generate.py

This is a draft PR in which I am replacing `generate.py` with a cleaner,
more structured interface to the underlying image generation routines.
The basic code pattern to generate an image using the new API is this:

```
from invokeai.backend import ModelManager, Txt2Img, Img2Img

manager = ModelManager('/data/lstein/invokeai-main/configs/models.yaml')
model = manager.get_model('stable-diffusion-1.5')
txt2img = Txt2Img(model)
outputs = txt2img.generate(prompt='banana sushi', steps=12, scheduler='k_euler_a', iterations=5)

# generate() returns an iterator
for next_output in outputs:
    print(next_output.image, next_output.seed)

outputs = Img2Img(model).generate(prompt='strawberry sushi', init_img='./banana_sushi.png')
output = next(outputs)
output.image.save('strawberries.png')
```

### model management

The `ModelManager` handles model selection and initialization. Its
`get_model()` method will return a `dict` with the following keys:
`model`, `model_name`, `hash`, `width`, and `height`, where `model` is
the actual StableDiffusionGeneratorPipeline. If `get_model()` is called
without a model name, it will return whatever is defined as the default
in `models.yaml`, or the first entry if no default is designated.

### InvokeAIGenerator

The abstract base class `InvokeAIGenerator` is subclassed into
`Txt2Img`, `Img2Img`, `Inpaint` and `Embiggen`. The constructor for
these classes takes the model dict returned by
`model_manager.get_model()` and optionally an
`InvokeAIGeneratorBasicParams` object, which encapsulates all the
parameters in common among `Txt2Img`, `Img2Img` etc. If you don't
provide the basic params, a reasonable set of defaults will be chosen.
Any of these parameters can be overridden at `generate()` time.

These classes are defined in `invokeai.backend.generator`, but they are
also exported by `invokeai.backend` as shown in the example below.

```
from invokeai.backend import InvokeAIGeneratorBasicParams, Img2Img
params = InvokeAIGeneratorBasicParams(
    perlin = 0.15,
    steps = 30,
    scheduler = 'k_lms',
)
img2img = Img2Img(model, params)
outputs = img2img.generate(scheduler='k_heun')
```

Note that we were able to override the basic params in the call to
`generate()`

The `generate()` method returns an iterator over a series of
`InvokeAIGeneratorOutput` objects. These objects contain the PIL image,
the seed, the model name and hash, and attributes for all the parameters
used to generate the object (you can also get these as a dict). The
`iterations` argument controls how many objects will be returned,
defaulting to 1. Pass `None` to get an infinite iterator.

Given the proposed use of `compel` to generate a templated series of
prompts, I thought the API would benefit from a style that lets you loop
over the output results indefinitely. I did consider returning a single
`InvokeAIGeneratorOutput` object in the event that `iterations=1`, but I
think it's dangerous for a method to return different types of result
under different circumstances.

Changing the model is as easy as this:
```
model = manager.get_model('inkspot-2.0')
txt2img = Txt2Img(model)
```

### Node and legacy support

With respect to `Nodes`, I have written `model_manager_initializer` and
`restoration_services` modules that return `model_manager` and
`restoration` services respectively. The latter is used by the face
reconstruction and upscaling nodes. There is no longer any reference to
`Generate` in the `app` tree.

I have confirmed that `txt2img` and `img2img` work in the nodes client.
I have not tested `embiggen` or `inpaint` yet. pytests are passing, with
some warnings that I don't think are related to what I did.

The legacy WebUI and CLI are still working off `Generate` (which has not
yet been removed from the source tree) and fully functional.

I've finished all the tasks on my TODO list:

- [x] Update the pytests, which are failing due to dangling references
to `generate`
- [x] Rewrite the `reconstruct.py` and `upscale.py` nodes to call
directly into the postprocessing modules rather than going through
`Generate`
2023-03-11 21:48:23 -05:00
1f3c024d9d Merge branch 'main' into refactor/nodes-on-generator 2023-03-11 21:31:42 -05:00
74a480f94e add back static web directory 2023-03-11 21:23:41 -05:00
c6e8d3269c build: exclude ui from test-invoke-pip (#2892)
Prior to the folder restructure, the `paths` for `test-invoke-pip` did
not include the UI's path `invokeai/frontend/`:

```yaml
    paths:
      - 'pyproject.toml'
      - 'ldm/**'
      - 'invokeai/backend/**'
      - 'invokeai/configs/**'
      - 'invokeai/frontend/dist/**'
```

After the restructure, more code was moved into the `invokeai/frontend/`
folder, and `paths` was updated:

```yaml
    paths:
      - 'pyproject.toml'
      - 'invokeai/**'
      - 'invokeai/backend/**'
      - 'invokeai/configs/**'
      - 'invokeai/frontend/web/dist/**'
```

Now, the second path includes the UI. The UI now needs to be excluded,
and must be excluded prior to `invokeai/frontend/web/dist/**` being
included.

On `test-invoke-pip-skip`, we need to do a bit of logic juggling to
invert the folder selection. First, include the web folder, then exclude
everything around it, and finally exclude the `dist/` folder
2023-03-12 14:18:51 +13:00
dcb5a3a740 Merge branch 'main' into build/exclude-ui-actions 2023-03-12 14:18:03 +13:00
c0ef546b02 Merge branch 'refactor/nodes-on-generator' of github.com:invoke-ai/InvokeAI into refactor/nodes-on-generator 2023-03-11 18:31:47 -05:00
7a78a83651 raise operations-per-run for issue workflow to 500 (#2925)
- default value is 30
- limit per hour is 1000

This should help getting the count of open issues down.
2023-03-12 00:10:55 +01:00
10cbf99310 add TODO comments 2023-03-11 18:08:45 -05:00
b63aefcda9 Merge branch 'main' into refactor/nodes-on-generator 2023-03-11 16:22:29 -06:00
6a77634b34 remove unneeded generate initializer routines 2023-03-11 17:14:03 -05:00
8ca91b1774 add restoration services to nodes 2023-03-11 17:00:00 -05:00
1c9d9e79d5 raise operations-per-run to 500
- default value is 30
- limit per hour is 1000
2023-03-11 22:32:13 +01:00
3aa1ee1218 restore NSFW checker 2023-03-11 16:16:44 -05:00
06aa5a8120 Merge branch 'main' into feat/preview_predicted_x0 2023-03-11 14:50:30 -06:00
580f9ecded simplify passing of config options 2023-03-11 11:32:57 -05:00
270032670a build: exclude ui from test-invoke-pip 2023-03-12 03:27:49 +11:00
4f056cdb55 ui: translations update from weblate (#2922)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-12 03:18:23 +11:00
c14241436b move ModelManager initialization into its own module and restore embedding support 2023-03-11 10:56:53 -05:00
50b56d6088 translationBot(ui): update translation (Portuguese)
Currently translated at 99.2% (496 of 500 strings)

Co-authored-by: ssantos <ssantos@web.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt/
Translation: InvokeAI/Web UI
2023-03-11 16:56:06 +01:00
8ec2ae7954 translationBot(ui): update translation (Russian)
Currently translated at 86.3% (416 of 482 strings)

Co-authored-by: Sergey Krashevich <svk@svk.su>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-03-11 16:56:05 +01:00
40d82b29cf translationBot(ui): update translation (Chinese (Traditional))
Currently translated at 7.0% (34 of 480 strings)

Co-authored-by: wa.code <adt107118@gm.ntcu.edu.tw>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hant/
Translation: InvokeAI/Web UI
2023-03-11 16:56:05 +01:00
0b953d98f5 translationBot(ui): update translation (Portuguese (Brazil))
Currently translated at 98.1% (471 of 480 strings)

Co-authored-by: Felipe Nogueira <contato.fnog@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translation: InvokeAI/Web UI
2023-03-11 16:56:04 +01:00
8833d76709 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (500 of 500 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (500 of 500 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (482 of 482 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (480 of 480 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-03-11 16:56:04 +01:00
027b316fd2 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (500 of 500 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (482 of 482 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (480 of 480 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-03-11 16:56:03 +01:00
d612f11c11 initialize InvokeAIGenerator object with model, not manager 2023-03-11 09:06:46 -05:00
250b0ab182 add seamless tiling support 2023-03-11 08:33:23 -05:00
675dd12b6c add attention map images to output object 2023-03-11 08:07:01 -05:00
7e76eea059 add embiggen, remove complicated constructor 2023-03-11 07:50:39 -05:00
f45483e519 Merge branch 'main' into feat/preview_predicted_x0 2023-03-10 22:25:26 -06:00
65047bf976 Chore/accessibility add all aria labels to translation (#2919)
# Overview
Setting up the `aria-label` props as translations
2023-03-11 16:16:02 +13:00
d586a82a53 yarn build 2023-03-10 20:54:59 -06:00
28709961e9 add import 2023-03-10 20:53:42 -06:00
e9f237f39d chore(accessibility): add all aria-labels 2023-03-10 20:49:16 -06:00
4156bfd810 Fixed snippet/code formatting
It was pasted as plain text, now it's a code fence.
2023-03-11 02:08:59 +01:00
fe75b95464 Merge branch 'refactor/nodes-on-generator' of github.com:invoke-ai/InvokeAI into refactor/nodes-on-generator 2023-03-10 19:36:40 -05:00
95954188b2 remove factory pattern
The factory pattern is now removed. Typical usage of `InvokeAIGenerator` is now:

```
from invokeai.backend.generator import (
    InvokeAIGeneratorBasicParams,
    Txt2Img,
    Img2Img,
    Inpaint,
)
params = InvokeAIGeneratorBasicParams(
    model_name = 'stable-diffusion-1.5',
    steps = 30,
    scheduler = 'k_lms',
    cfg_scale = 8.0,
    height = 640,
    width = 640,
)
print('=== TXT2IMG TEST ===')
txt2img = Txt2Img(manager, params)
outputs = txt2img.generate(prompt='banana sushi', iterations=2)

for output in outputs:
    print(f'image={output.image}, seed={output.seed}, model={output.params.model_name}, hash={output.model_hash}, steps={output.params.steps}')
```

The `params` argument is optional, so if you wish to accept default
parameters and selectively override them, just do this:

```
outputs = Txt2Img(manager).generate(
    prompt='banana sushi',
    steps=50,
    scheduler='k_heun',
    model_name='stable-diffusion-2.1',
)
```
2023-03-10 19:33:04 -05:00
63f59201f8 Merge branch 'main' into feat/preview_predicted_x0 2023-03-10 12:34:07 -06:00
370e8281b3 Merge branch 'main' into refactor/nodes-on-generator 2023-03-10 12:34:00 -06:00
685df33584 fix bug that caused black images when converting ckpts to diffusers in RAM (#2914)
The cause of the problem was inadvertent activation of the safety checker.

When conversion occurs on disk, the safety checker is disabled during loading.
However, when converting in RAM, the safety checker was not removed, resulting
in it activating even when user specified --no-nsfw_checker.

This PR fixes the problem by detecting when the caller has requested the InvokeAI
StableDiffusionGeneratorPipeline class to be returned and setting the safety checker
to None. Do not do this with diffusers models destined for disk, because then they
will be incompatible with the merge script!!

Closes #2836
2023-03-10 18:11:32 +00:00
4332c9c7a6 add generic jsx type definition for default export 2023-03-10 12:14:49 -05:00
4a00f1cc74 Merge branch 'main' into feat/preview_predicted_x0 2023-03-10 09:20:01 -06:00
7ff77504cb Make sure command also works with Oh-my-zsh (#2905)
Many people use oh-my-zsh for their command line: https://ohmyz.sh/ 

Adding `""` should work both on ohmyzsh and native bash
2023-03-10 19:05:22 +13:00
0d1854e44a Merge branch 'main' into patch-1 2023-03-10 19:04:42 +13:00
fe6858f2d9 feat: use the predicted denoised image for previews
Some schedulers report not only the noisy latents at the current timestep,
but also their estimate so far of what the de-noised latents will be.

It makes for a more legible preview than the noisy latents do.
2023-03-09 20:28:06 -08:00
12c7db3a16 backend: more post-ldm-removal cleanup (#2911) 2023-03-09 23:11:10 -05:00
3ecdec02bf Merge branch 'main' into cleanup/more_ldm_cleanup 2023-03-09 22:49:09 -05:00
d6c24d59b0 Revert "Remove label from stale issues on comment event" (#2912)
Reverts invoke-ai/InvokeAI#2903

@mauwii has a point here. It looks like triggering on a comment results
in an action for each of the stale issues, even ones that have been
previously dealt with. I'd like to revert this back to the original
behavior of running once every time the cron job executes.

What's the original motivation for having more frequent labeling of the
issues?
2023-03-09 22:28:49 -05:00
bb3d1bb6cb Revert "Remove label from stale issues on comment event" 2023-03-09 22:24:43 -05:00
14c8738a71 fix dangling reference to _model_to_cpu and missing variable model_description 2023-03-09 21:41:45 -05:00
1a829bb998 pipeline: remove code for legacy model 2023-03-09 18:15:12 -08:00
9d339e94f2 backend..conditioning: remove code for legacy model 2023-03-09 18:15:12 -08:00
ad7b1fa6fb model_manager: model to/from CPU methods are implemented on the Pipeline 2023-03-09 18:15:12 -08:00
42355b70c2 fix(Pipeline.debug_latents): fix import for moved utility function 2023-03-09 18:15:12 -08:00
faa2558e2f chore: add new argument to overridden method to match new signature upstream 2023-03-09 18:15:12 -08:00
081397737b typo: docstring spelling fixes
looks like they've already been corrected in the upstream copy
2023-03-09 18:15:12 -08:00
55d36eaf4f fix: image_resized_to_grid_as_tensor: reconnect dropped multiple_of argument 2023-03-09 18:15:12 -08:00
26cd1728ac Fix some text and a link 2023-03-09 20:03:11 -06:00
a0065da4a4 Remove label from stale issues on comment event (#2903)
I found it to be a chore to remove labels manually in order to
"un-stale" issues. This is contrary to the bot message which says
commenting should remove "stale" status. On the current `cron` schedule,
there may be a delay of up to 24 hours before the label is removed. This
PR will trigger the workflow on issue comments in addition to the
schedule.

Also adds a condition to not run this job on PRs (Github treats issues
and PRs equivalently in this respect), and rewords the messages for
clarity.
2023-03-09 20:17:54 -05:00
c11e823ff3 remove unused _wrap_results 2023-03-09 16:30:06 -05:00
197e50a298 unstage some changes 2023-03-09 15:33:18 -05:00
507e12520e Make sure command also works with Oh-my-zsh
Many people use oh-my-zsh for their command line: https://ohmyz.sh/ 

Adding `""` should work both on ohmyzsh and native bash
2023-03-09 19:21:57 +01:00
2cc04de397 dont care about linting build 2023-03-09 11:46:20 -05:00
f4150a7829 add new build command for building package 2023-03-09 11:10:18 -05:00
5418bd3b24 (ci) unlabel stale issues when commented 2023-03-09 09:22:29 -05:00
76d5fa4694 Bypass the 77 token limit (#2896)
This ought to be working, but I don't know how it's supposed to behave, so
I haven't been able to verify. At least I know the numbers are getting
pushed all the way to the SD unet; I just haven't been able to verify that
what's coming out is what's expected. Please test.

You'll need to `pip install -e .` after switching to the branch, because
it's currently pulling from a non-main `compel` branch. Once it's
verified as working as intended I'll promote the compel branch to PyPI.
2023-03-09 23:52:28 +13:00
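If this works the way compel's long-prompt support is documented, usage would look roughly like this (an assumption on my part, not verified against this branch; `pipe` is a loaded pipeline):

```python
from compel import Compel

compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder,
                truncate_long_prompts=False)

cond = compel.build_conditioning_tensor(very_long_prompt)     # > 77 tokens OK
negcond = compel.build_conditioning_tensor(negative_prompt)

# chunked encodings can differ in length; pad them to match before use
[cond, negcond] = compel.pad_conditioning_tensors_to_same_length([cond, negcond])
```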
386dda8233 Merge branch 'main' into feat_longer_prompts 2023-03-09 22:37:10 +13:00
8076c1697c Merge branch 'feat_longer_prompts' of github.com:damian0815/InvokeAI into feat_longer_prompts 2023-03-09 10:28:13 +01:00
65fc9a6e0e bump compel version to address issues 2023-03-09 10:28:07 +01:00
cde0b6ae8d Merge branch 'main' into refactor/nodes-on-generator 2023-03-09 01:52:45 -05:00
b12760b976 [ui] chore(Accessibility): various additions (#2888)
# Overview
Adding a few accessibility items (I think 9 total items). Mostly
`aria-label`, but also a `<VisuallyHidden>` to the left-side nav tab
icons. Tried to match existing copy that was being used. Feedback
welcome
2023-03-09 19:14:42 +13:00
b679a6ba37 model manager defaults to consistent values of device and precision 2023-03-09 01:09:54 -05:00
2f5f08c35d yarn build 2023-03-08 23:51:46 -06:00
8f48c14ed4 Merge branch 'main' into chore/accessability_various-additions 2023-03-08 23:49:08 -06:00
5d37fa6e36 node-based txt2img working without generate 2023-03-09 00:18:29 -05:00
f51581bd1b Merge branch 'main' into feat_longer_prompts 2023-03-08 23:08:49 -06:00
50ca6b6ffc add back pytorch-lightning dependency (#2899)
- Closes #2893
2023-03-09 17:22:17 +13:00
63b9ec4c5e Merge branch 'main' into bugfix/restore-pytorch-lightning 2023-03-09 16:57:14 +13:00
b115bc4247 [cli] Execute commands in-order with nodes (#2901)
Executes piped commands in the order they were provided (instead of
executing CLI commands immediately).
2023-03-09 16:55:23 +13:00
dadc30f795 Merge branch 'main' into bugfix/restore-pytorch-lightning 2023-03-09 16:46:08 +13:00
111d8391e2 Merge branch 'main' into kyle0654/cli_execution_order 2023-03-09 16:37:15 +13:00
1157b454b2 decouple default component from react root (#2897)
Decouple default component from react root
2023-03-09 16:34:47 +13:00
8a6473610b [cli] Execute commands in-order with nodes 2023-03-08 19:25:03 -08:00
ea7911be89 Merge branch 'main' into chore/accessability_various-additions 2023-03-08 17:15:28 -06:00
9ee648e0c3 Merge branch 'main' into feat_longer_prompts 2023-03-09 00:13:01 +01:00
543682fd3b Merge branch 'feat_longer_prompts' of github.com:damian0815/InvokeAI into feat_longer_prompts 2023-03-08 23:24:41 +01:00
88cb63e4a1 pin new compel version 2023-03-08 23:24:30 +01:00
76212d1cca Merge branch 'main' into bugfix/restore-pytorch-lightning 2023-03-08 17:05:11 -05:00
a8df9e5122 Merge branch 'main' into decouple-component-from-root 2023-03-08 16:58:34 -05:00
2db180d909 Make img2img strength 1 behave the same as txt2img (#2895)
* Fix img2img and inpainting code so a strength of 1 behaves the same as txt2img.

* Make generated images identical to their txt2img counterparts when strength is 1.
2023-03-08 22:50:16 +01:00
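The underlying relationship: at strength 1.0, img2img starts from the first timestep of the noise schedule, so the init image contributes nothing and the result should match txt2img for the same seed. The usual timestep arithmetic, as an illustrative sketch:

```python
# strength in [0, 1] controls how far back in the noise schedule img2img starts
init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
t_start = max(num_inference_steps - init_timestep, 0)
timesteps = scheduler.timesteps[t_start:]  # strength == 1.0 -> full schedule
```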
b716fe8f06 add pytorch-lightning dependency back in
- Closes #2893
2023-03-08 16:48:39 -05:00
69e2dc0404 update for compel changes 2023-03-08 20:45:01 +01:00
a38b75572f don't log excess tokens as truncated 2023-03-08 20:00:18 +01:00
e18de761b6 Merge branch 'main' into decouple-component-from-root 2023-03-08 13:13:43 -05:00
816ea39827 decouple default component from react root 2023-03-08 12:48:49 -05:00
1cd4cdd0e5 Merge branch 'main' into tests 2023-03-08 12:19:04 -05:00
768e969c90 cleanup and fix kwarg 2023-03-08 18:00:54 +01:00
57db66634d longer prompts wip 2023-03-08 14:25:48 +01:00
87789c1de8 add InvokeAIGenerator and InvokeAIGeneratorFactory classes 2023-03-07 23:52:53 -05:00
c3c1511ec6 add accessibility to localization
only sets fallback English values;
implemented on ModelSelect and ProgressBar
2023-03-07 21:30:51 -06:00
6b41127421 Merge branch 'main' into chore/accessability_various-additions 2023-03-07 17:44:55 -06:00
d232a439f7 build: update actions (#2883)
- Updates triggers for UI workflow `lint-frontend`
- Corrects UI paths for `test-invoke-pip` and `test-invoke-pip-skip`
2023-03-08 11:51:32 +13:00
c04f21e83e Merge branch 'main' into build/update-actions 2023-03-08 11:50:50 +13:00
8762069b37 ui: update readme & scripts (#2884)
- Update ui readme
- Update scripts to use `yarn` instead of `npm` and use `concurrently`
to watch/build the theme token types along with SPA
2023-03-08 00:20:46 +13:00
d9ebdd2684 build(ui): use concurrently to run dev 2023-03-07 21:58:46 +11:00
3e4c10ef9c docs(ui): update readme 2023-03-07 21:58:42 +11:00
17eb2ca5a2 build: update actions
- Updates triggers for UI workflow `lint-frontend`
- Corrects UI paths for `test-invoke-pip` and `test-invoke-pip-skip`
2023-03-07 21:25:43 +11:00
63725d7534 add .pytest.ini to .gitignore 2023-03-07 09:08:27 +00:00
00f30ea457 Merge branch 'main' into tests 2023-03-07 09:03:18 +00:00
1b2a3c7144 ui: translations update from weblate (#2882)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-07 21:51:09 +13:00
01a1777370 translationBot(ui): update translation (Chinese (Traditional))
Currently translated at 4.1% (20 of 480 strings)

translationBot(ui): update translation (Portuguese (Brazil))

Currently translated at 97.2% (467 of 480 strings)

translationBot(ui): update translation (Dutch)

Currently translated at 97.2% (467 of 480 strings)

translationBot(ui): update translation (French)

Currently translated at 83.1% (399 of 480 strings)

Co-authored-by: psychedelicious <mabianfu@icloud.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hant/
Translation: InvokeAI/Web UI
2023-03-07 09:09:42 +01:00
32945c7f45 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-03-07 09:09:42 +01:00
b0b8846430 Add aria-label to icon variant of IAISimpleMenu
Uses whatever the iconTooltip copy is
2023-03-06 22:43:41 -06:00
fdb146a43a add aria-label to UnifiedCanvasLayerSelect
matching tooltip copy
2023-03-06 22:42:39 -06:00
42c1f1fc9d add VisuallyHidden tab text to InvokeTabs 2023-03-06 22:42:04 -06:00
89a8ef86b5 add an aria-label to ProgressBar 2023-03-06 22:41:45 -06:00
f0fb767f57 add aria-label to ModelSelect 2023-03-06 22:39:08 -06:00
4bd93464bf [cli] Update CLI to define commands as Pydantic objects (#2861)
Updates the CLI to define CLI commands as Pydantic objects, similar to
how Invocations (nodes) work. For example:

```py
from typing import Literal

class HelpCommand(BaseCommand):
    """Shows help"""
    type: Literal['help'] = 'help'

    def run(self, context: CliContext) -> None:
        context.parser.print_help()
```
2023-03-07 13:25:06 +13:00
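Dispatch from a parsed command name to the right class can then be done by introspecting `BaseCommand` subclasses, something like this (a hypothetical sketch using pydantic v1 field access, not necessarily the PR's exact mechanism):

```python
# map each command's `type` literal to its class
commands = {cls.__fields__["type"].default: cls for cls in BaseCommand.__subclasses__()}

cmd = commands["help"]()  # arguments are validated by Pydantic on construction
cmd.run(context)
```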
3d3de82ca9 Merge branch 'main' into kyle/cli_commands 2023-03-07 12:56:30 +13:00
c3ff9e6be8 Fixed startup issues with the web UI. (#2876) 2023-03-06 18:29:28 -05:00
21f79e5919 add missing package (#2878)
Added missing dependency declaration `@chakra-ui/styled-system`
2023-03-07 10:34:50 +13:00
0342e25c74 add missing package 2023-03-06 16:13:17 -05:00
91f982fb0b feat(ui): migrate theming to chakra ui (#2873)
*looks like this #2814 was reverted accidentally. instead of trying to
revert the revert, this PR can simply be re-accepted and will fix the
ui.*

- Migrate UI from SCSS to Chakra's CSS-in-JS system 
  - better dx
  - more capable theming 
  - full RTL language support (we now have Arabic and Hebrew)
  - general cleanup of the whole UI's styling
- Tidy npm packages and update scripts, necessitates update to github
actions

To test this PR in dev mode, you will need to do a `yarn install` as a
lot has changed.

thanks to @blessedcoolant for helping out on this, it was a big effort.
2023-03-07 08:43:26 +13:00
b9ab43a4bb build(ui): clean build chakra migration 2023-03-07 08:39:44 +13:00
6e0e48bf8a Merge branch 'main' into pr/2873 2023-03-07 08:36:09 +13:00
dcc8313dbf support both epsilon and v-prediction v2 inference (#2870)
There are actually two Stable Diffusion v2 legacy checkpoint
configurations:

1. "epsilon" prediction type for Stable Diffusion v2 Base 
2. "v-prediction" type for Stable Diffusion v2-768

This commit adds the configuration file needed for epsilon prediction
type models as well as the UI that prompts the user to select the
appropriate configuration file when the code can't do so automatically.
2023-03-06 14:29:35 -05:00
bf5831faa3 Merge branch 'main' into kyle/cli_commands 2023-03-06 08:52:38 -05:00
5eff035f55 Merge branch 'main' into tests 2023-03-06 08:37:07 -05:00
7c60068388 Merge branch 'main' into bugfix/fix-convert-sd-to-diffusers-error 2023-03-06 08:20:29 -05:00
d843fb078a feat(ui): remove references to dark mode 2023-03-06 20:40:59 +11:00
41b2e4633f chore(ui): remove unused scss files 2023-03-06 20:06:23 +11:00
57144ac0cf feat(ui): migrate theming to chakra ui 2023-03-06 20:03:39 +11:00
a305b6adbf fix call signature of import_diffuser_model() (#2871)
This fixes the borked #2867 PR.
2023-03-05 23:58:08 -05:00
94daaa4abf fix call signature of import_diffuser_model() 2023-03-05 23:37:59 -05:00
901337186d add .git-blame-ignore-revs file to maintain provenance (#2855)
To avoid `git blame` recording all the autoformatting changes under the
name 'lstein', this PR adds a `.git-blame-ignore-revs` that will ignore
any provenance changes that occurred during the recent refactor merge.
2023-03-05 22:58:34 -05:00
7e2f64f60b Merge branch 'main' into refactor/maintain-blame-provenance 2023-03-05 22:57:50 -05:00
126cba2324 Bugfix/reenable ckpt conversion to ram (#2868)
This fixes the crash that was occurring when trying to load a legacy
checkpoint file.

Note that this PR includes commits from #2867 to avoid diffusers files
from re-downloading at startup time.
2023-03-05 22:57:19 -05:00
2f9dcd7906 support both epsilon and v-prediction v2 inference
There are actually two Stable Diffusion v2 legacy checkpoint
configurations:

1) "epsilon" prediction type for Stable Diffusion v2 Base
2) "v-prediction" type for Stable Diffusion v2-768

This commit adds the configuration file needed for epsilon prediction
type models as well as the UI that prompts the user to select the
appropriate configuration file when the code can't do so
automatically.
2023-03-05 22:51:40 -05:00
e537b5d8e1 Revert "Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram"
This reverts commit e0e70c9222, reversing
changes made to 0b184913b9.
2023-03-06 14:29:39 +13:00
e0e70c9222 Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram 2023-03-06 14:27:30 +13:00
1b21e5df54 Migrate to new HF diffusers cache location (#2867)
# Migrate to new HF diffusers cache location

This PR adjusts the model cache directory to use the layout of
`diffusers 0.14`. This will automatically migrate any diffusers models
located in `INVOKEAI_ROOT/models/diffusers` to
`INVOKEAI_ROOT/models/hub`, and cache new downloaded diffusers files
into the same location.

As before, if environment variable `HF_HOME` is set, then both
HuggingFace `from_pretrained()` calls as well as all InvokeAI methods
will use `HF_HOME/hub` as their cache.
2023-03-06 13:05:13 +13:00
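The lookup rule described above, as a small sketch (the helper name is hypothetical; paths are per the PR description):

```python
import os
from pathlib import Path

def hf_cache_dir(invokeai_root: Path) -> Path:
    # if HF_HOME is set, both HuggingFace's from_pretrained() and InvokeAI
    # cache under $HF_HOME/hub; otherwise InvokeAI uses INVOKEAI_ROOT/models/hub
    hf_home = os.environ.get("HF_HOME")
    return Path(hf_home, "hub") if hf_home else invokeai_root / "models" / "hub"
```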
4b76af37ae Merge branch 'main' into enhance/use-new-diffusers-path 2023-03-06 12:42:30 +13:00
486c445afb fix typos and replace frontend REAMDE content 2023-03-05 21:05:09 +00:00
4547c48013 add docs for local development including tests 2023-03-05 19:59:06 +00:00
8f21201c91 [ui]: migrate all styling to chakra-ui theme (#2814)
- Migrate UI from SCSS to Chakra's CSS-in-JS system 
  - better dx
  - more capable theming 
  - full RTL language support (we now have Arabic and Hebrew)
  - general cleanup of the whole UI's styling
- Tidy npm packages and update scripts, necessitates update to github
actions

To test this PR in dev mode, you will need to do a `yarn install` as a
lot has changed.

thanks to @blessedcoolant for helping out on this, it was a big effort.
2023-03-06 07:23:59 +13:00
532b74a206 Merge branch 'main' into feat/ui/chakra-theme 2023-03-06 06:54:33 +13:00
0b184913b9 Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram 2023-03-05 12:37:43 -05:00
97719e40e4 fix Dockerfile after restructure (#2863)
this PR should close #2862
2023-03-05 18:33:00 +01:00
5ad3062b66 Merge branch 'main' into fix/broken-dockerfile-2862 2023-03-05 12:32:25 -05:00
92d012a92d Merge branch 'main' into enhance/use-new-diffusers-path 2023-03-05 12:30:24 -05:00
fc187f263e deal with non-directories in diffusers/ 2023-03-05 12:29:52 -05:00
fd94f85abe remove legacy ldm code (#2866)
This removes modules that appear to be no longer used by any code under
the `invokeai` package now that the `ckpt_generator` is gone.

There are a few small changes in here to code that was referencing code
in a conditional branch for ckpt, or to swap out a function for a
🤗 one, but only as much as was strictly necessary to get things to
run. We'll follow up with more clean-up of lingering `if isinstance` or
`except AttributeError` branches later.
2023-03-05 12:10:38 -05:00
4e9e1b660d respect HF_HOME setting when migrating 2023-03-05 12:08:29 -05:00
d01adedff5 give user chance to back out before migration 2023-03-05 12:04:31 -05:00
c247f430f7 combine pytest.ini with pyproject.toml 2023-03-05 17:00:08 +00:00
3d6a358042 remove .coveragerc from source control 2023-03-05 16:59:12 +00:00
4d1dcd11de Merge branch 'main' into dev/rm_legacy_deps 2023-03-05 11:50:53 -05:00
b33655b0d6 restore automatic conversion of legacy files to diffusers pipelines 2023-03-05 11:45:25 -05:00
81dee04dc9 during migration do not overwrite symlinks 2023-03-05 08:40:12 -05:00
114018e3e6 Unified spelling of Hugging Face 2023-03-05 07:30:35 -06:00
ef8cf83b28 migrate to new HF diffusers cache location 2023-03-05 08:20:24 -05:00
633857b0e3 build(ui): Migrate UI to Chakra 2023-03-05 21:50:50 +13:00
214574d11f Merge branch 'feat/ui/chakra-theme' of https://github.com/psychedelicious/InvokeAI into pr/2814 2023-03-05 21:48:08 +13:00
8584665ade feat(ui): migrate theming to chakra 2023-03-05 19:41:57 +11:00
516c56d0c5 feat(ui): Model Manager Cleanup 2023-03-05 21:41:55 +13:00
5891b43ce2 Merge branch 'feat/ui/chakra-theme' of https://github.com/psychedelicious/InvokeAI into pr/2814 2023-03-05 21:41:12 +13:00
62e75f95aa feat(ui): migrate theming to chakra 2023-03-05 19:39:51 +11:00
b07621e27e chore(ui): build frontend 2023-03-05 19:30:28 +11:00
545d8968fd feat(ui): migrated theming to chakra
build(ui): fix husky path

build(ui): fix hmr issue, remove emotion cache

build(ui): clean up package.json

build(ui): update gh action and npm scripts

feat(ui): wip port lightbox to chakra theme

feat(ui): wip use chakra theme tokens

feat(ui): Add status text to main loading spinner

feat(ui): wip chakra theme tweaking

feat(ui): simplify IAISimpleMenu button

feat(ui): wip chakra theming

feat(ui): Theme Management

feat(ui): Add Ocean Blue Theme

feat(ui): wip lightbox

fix(ui): fix lightbox mouse

feat(ui): set default theme variants

feat(ui): model manager chakra theme

chore(ui): lint

feat(ui): remove last scss

feat(ui): fix switch theme

feat(ui): Theme Cleanup

feat(ui): Stylize Search Models Found List

feat(ui): hide scrollbars

feat(ui): fix floating button position

feat(ui): Scrollbar Styling

fix broken scripts

This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.

   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file

2) Scripts that are installed by pip install. They get placed into the venv's
   PATH and are intended to be the official entry points:

   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata      - retrieve JSON-formatted metadata from PNG files

protect invocations against black autoformatting

deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16
2023-03-05 19:30:02 +11:00
7cf2f58513 deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16 (#2865)
Things to check for in this version:

- `diffusers` cache location is now more consistent with other
huggingface-hub using code (i.e. `transformers`) as of
https://github.com/huggingface/diffusers/pull/2005. I think ultimately
this should make @damian0815 (and other folks with multiple
diffusers-using projects) happier, but it's worth taking a look to make
sure the way @lstein set things up to respect `HF_HOME` is still
functioning as intended.
- I've gone ahead and updated `transformers` to the current version
(4.26), but I have a vague memory that we were holding it back at some
point? Need to look that up and see if that's the case and why.
2023-03-05 01:53:01 -05:00
618e3e5e91 deps: add explicitly dependency to rich
was previously pulled in as a secondary dependency of something else.
2023-03-04 18:37:39 -08:00
c703b60986 remove legacy ldm code 2023-03-04 18:16:59 -08:00
7c0ce5c282 fix push expression
- make use of `github.ref_type`
2023-03-05 02:58:13 +01:00
82fe34b1f7 update build-container workflow
- switch versioning from semver to pep440
- remove unnecessary paths
- include `.dockerignore` in paths
2023-03-05 02:13:57 +01:00
65f9aae81d deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16 2023-03-04 16:32:16 -08:00
2d9fac23e7 fix Dockerfile
- update broken paths after restructure
2023-03-04 23:51:07 +01:00
ebc4b52f41 [cli] Update CLI to define commands as Pydantic objects 2023-03-04 14:46:02 -08:00
c4e6d4b348 fix broken scripts (#2857)
This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.
```
   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file
```

2) Scripts that are installed by pip install. They get placed into the
venv's
   PATH and are intended to be the official entry points:
```
   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata      - retrieve JSON-formatted metadata from PNG files
```
2023-03-04 16:57:45 -05:00
eab32bce6c Merge branch 'main' into bugfix/fix-scripts 2023-03-04 13:19:02 -06:00
55d2094094 Protect invocations against black autoformatting (#2854)
This places `#fmt: off` and `#fmt: on` blocks around sections of the
invocation code that shouldn't be reformatted by Black.
2023-03-04 12:26:43 -05:00
a0d50a2b23 Merge branch 'main' into formatting/undo-black-formatting-of-invocations 2023-03-04 12:05:11 -05:00
9efeb1b2ec Merge branch 'main' into bugfix/fix-scripts 2023-03-03 20:36:29 -06:00
86e2cb0428 Fix for txt2img2img.py (#2856)
Fix error when using txt2img 
ModuleNotFoundError: No module named 'invokeai.backend.models'
and
ModuleNotFoundError: No module named
'invokeai.backend.generator.diffusers_pipeline'
2023-03-04 15:24:39 +13:00
53c2c0f91d Update txt2img2img.py 2023-03-04 12:58:33 +11:00
bdc7b8b75a fix broken scripts
This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.

   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file

2) Scripts that are installed by pip install. They get placed into the venv's
   PATH and are intended to be the official entry points:

   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata      - retrieve JSON-formatted metadata from PNG files
2023-03-03 20:19:37 -05:00
1bfdd54810 Update txt2img2img.py 2023-03-04 11:23:21 +11:00
b4bf6c12a5 add .git-blame-ignore-revs file to maintain provenance
To avoid `git blame` recording all the autoformatting changes
under the name 'lstein', this PR adds a `.git-blame-ignore-revs`
that will ignore any provenance changes that occurred during the
recent refactor merge.
2023-03-03 16:23:48 -05:00
ab35c241c2 protect invocations against black autoformatting 2023-03-03 15:25:08 -05:00
b3dccfaeb6 Final phase of source tree restructure (#2833)
# All python code has been moved under `invokeai`. All vestiges of `ldm`
and `ldm.invoke` are now gone.

***You will need to run `pip install -e .` before the code will work
again!***

Everything seems to be functional, but extensive testing is advised.

A guide to where the files have gone is forthcoming.
2023-03-03 15:05:41 -05:00
6477e31c1e revert and disable auto-formatting of invocations 2023-03-03 14:59:17 -05:00
dd4a1c998b merge localisation files that were added in main 2023-03-03 14:47:01 -05:00
70203e6e5a CODEOWNERS coarse draft 2023-03-03 14:36:43 -05:00
d778a7c5ca ui: translations update from weblate (#2850)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-03 20:07:34 +11:00
f8e59636cd translationBot(ui): update translation (Korean)
Currently translated at 15.5% (73 of 469 strings)

translationBot(ui): added translation (Korean)

Co-authored-by: LemonDouble <lemondouble2@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ko/
Translation: InvokeAI/Web UI
2023-03-03 10:06:13 +01:00
2d1a0b0a05 translationBot(ui): update translation (Portuguese)
Currently translated at 12.7% (60 of 469 strings)

Co-authored-by: Airton Silva <airtonsilva2009@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt/
Translation: InvokeAI/Web UI
2023-03-03 10:06:13 +01:00
c9b2234d90 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (469 of 469 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-03-03 10:06:12 +01:00
82b224539b translationBot(ui): update translation (Hebrew)
Currently translated at 100.0% (469 of 469 strings)

translationBot(ui): added translation (Hebrew)

Co-authored-by: Netz <pixi@pixelabs.net>
Co-authored-by: Netzer R <pixi@pixelabs.net>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/he/
Translation: InvokeAI/Web UI
2023-03-03 10:06:12 +01:00
0b15ffb95b translationBot(ui): update translation (Portuguese)
Currently translated at 12.5% (59 of 469 strings)

translationBot(ui): added translation (Portuguese)

Co-authored-by: Gabriel Mackievicz Telles <telles.gabriel@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt/
Translation: InvokeAI/Web UI
2023-03-03 10:06:11 +01:00
ce9aaab22f translationBot(ui): added translation (Chinese (Traditional))
Co-authored-by: psychedelicious <mabianfu@icloud.com>
2023-03-03 10:06:11 +01:00
3f53f1186d move diagnostic message to stderr; was confusing CI 2023-03-03 01:54:48 -05:00
c0aff396d2 fix ldm->invokeai pathnames in workflows 2023-03-03 01:44:55 -05:00
955900507f fix issue with invokeai.version 2023-03-03 01:34:38 -05:00
d606abc544 fix weblint call 2023-03-03 01:09:56 -05:00
44400d2a66 fix incorrect import of merge code 2023-03-03 01:07:31 -05:00
60a98cacef all vestiges of ldm.invoke removed 2023-03-03 01:02:00 -05:00
6a990565ff all files migrated; tweaks needed 2023-03-03 00:02:15 -05:00
3f0b0f3250 almost all of backend migrated; restoration next 2023-03-02 13:28:17 -05:00
1a7371ea17 remove unused embeddings code 2023-03-01 21:09:22 -05:00
850d1ee984 move models and modules under invokeai/backend/ldm 2023-03-01 18:24:18 -05:00
2c7928b163 remove pycaches from repo 2023-02-28 23:25:35 -05:00
87d1ec6a4c Merge branch 'main' into refactor/move-models-and-generators 2023-02-28 17:34:05 -05:00
53c62537f7 fix newlines causing negative prompt to be parsed incorrectly (#2837)
closes #2753
2023-02-28 17:29:46 -05:00
418d93fdfd fix newlines causing negative prompt to be parsed incorrectly 2023-02-28 22:37:28 +01:00
f2ce2f1778 fix import of moved model_manager module 2023-02-28 08:38:14 -05:00
5b6c61fc75 move models and generator into backend 2023-02-28 08:32:11 -05:00
1d77581d96 restore behavior of !import_model; fix initial models bug 2023-02-28 00:45:56 -05:00
3b921cf393 add more missing files 2023-02-28 00:37:13 -05:00
d334f7f1f6 add missing files 2023-02-28 00:31:15 -05:00
8c9764476c first phase of source tree restructure
This is the first phase of a big shifting of files and directories
in the source tree.

You will need to run `pip install -e .` before the code will work again!

Here's what's in the current commit:

1) Remove a lot of dead code that dealt with checkpoint and safetensor loading.
2) Entire ckpt_generator hierarchy is now gone!
3) ldm.invoke.generator.*   => invokeai.generator.*
4) ldm.model.*              => invokeai.model.*
5) ldm.invoke.model_manager => invokeai.model.model_manager

6) In addition, a number of frequently-accessed classes can be imported
   from the invokeai.model and invokeai.generator modules:

   from invokeai.generator import ( Generator, PipelineIntermediateState,
                                    StableDiffusionGeneratorPipeline, infill_methods)

   from invokeai.models import ( ModelManager, SDLegacyType,
                                 InvokeAIDiffuserComponent, AttentionMapSaver,
                                 DDIMSampler, KSampler, PLMSSampler,
                                 PostprocessingSettings )
2023-02-27 23:52:46 -05:00
b7d5a3e0b5 [nodes] Add better error handling to processor and CLI (#2828)
* [nodes] Add better error handling to processor and CLI

* [nodes] Use more explicit name for marking node execution error

* [nodes] Update the processor call to error
2023-02-27 10:01:07 -08:00
e0405031a7 add a workflow to close stale issues (#2808)
with values set as discussed in discord
2023-02-26 16:14:42 -05:00
ee24b686b3 Merge branch 'main' into dev/ci/add-close-inactive-issues 2023-02-26 16:14:03 -05:00
835eb14c79 Split requirements / pyproject installation in Dockerfile (#2815)
This should make caching way easier and therefore speed up the image
(re-)creation a lot.

Other small improvements:
- reorder .dockerignore
- rename amd flavor to rocm to align with cuda flavor
- use `user:group` for definitions
- add `--platform=${TARGETPLATFORM}` to base
2023-02-26 13:48:32 -05:00
9aadf7abc1 Merge branch 'main' into dev/ci/add-close-inactive-issues 2023-02-26 13:13:42 -05:00
243f9e8377 Merge branch 'main' into dev/docker/separate-req-inst 2023-02-26 13:13:07 -05:00
6e0c6d9cc9 perf(invoke_ai_web_server): encode intermediate result previews as jpeg (#2817)
For size savings of about 80%, and jpeg encoding is still plenty fast.
2023-02-26 18:47:51 +13:00
a3076cf951 perf(invoke_ai_web_server): encode intermediate result previews as jpeg
For size savings of about 80%, and jpeg encoding is still plenty fast.
2023-02-25 21:23:25 -08:00
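A hedged sketch of the technique (not the server's actual code): encode the intermediate preview with Pillow as JPEG rather than PNG, trading a little fidelity for roughly the 80% size savings noted above; the quality setting is an assumption:

```python
import io

from PIL import Image

def preview_bytes(image: Image.Image, quality: int = 80) -> bytes:
    """Serialize a preview image as JPEG bytes (quality value is an assumption)."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()
```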
6696882c71 doc(invoke_ai_web_server): put docstrings inside their functions (#2816)
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-26 18:20:10 +13:00
17b039e85d doc(invoke_ai_web_server): put docstrings inside their functions
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-25 20:21:47 -08:00
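Concretely, per the linked tutorial, the docstring must be the first statement inside the function body, where `help()` and other tooling can find it; `encode_preview` below is just an illustrative name:

```python
def encode_preview(image) -> bytes:
    """Encode an intermediate preview image as JPEG bytes."""  # first statement = docstring
    raise NotImplementedError

# The string is attached to the function object:
assert encode_preview.__doc__.startswith("Encode")
```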
81539e6ab4 Merge remote-tracking branch 'upstream/main' into dev/docker/separate-req-inst 2023-02-26 00:55:03 +01:00
92304b9f8a remove pip-tools, still split requirements install
- also use user:group for definitions
- add `--platform=${TARGETPLATFORM}` to base
2023-02-26 00:53:43 +01:00
ec1de5ae8b more detailed volume parameters 2023-02-26 00:51:30 +01:00
49198a61ef enable BuildKit in env.sh 2023-02-26 00:50:13 +01:00
8c5773abc1 add a workflow to close stale issues
with values set as discussed in discord
2023-02-25 13:20:05 +01:00
01f8c37bd3 rename amd flavor to rocm 2023-02-24 06:20:44 +01:00
b7718985d5 update build-container.yml
- add branches 'dev/ci/docker/*' and 'dev/docker/*'
2023-02-24 03:58:22 +01:00
90cda11868 separate installation of requirements and source
this should greatly speed up rebuilding of the image when:
- version did not change
- requirements didn't change
2023-02-24 03:51:18 +01:00
5cb877e096 reorder .dockerignore 2023-02-24 02:53:27 +01:00
1591 changed files with 140746 additions and 94739 deletions

.coveragerc
@@ -1,6 +0,0 @@
[run]
omit='.env/*'
source='.'
[report]
show_missing = true

@@ -20,13 +20,13 @@ def calc_images_mean_L1(image1_path, image2_path):
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('image1_path')
parser.add_argument('image2_path')
parser.add_argument("image1_path")
parser.add_argument("image2_path")
args = parser.parse_args()
return args
if __name__ == '__main__':
if __name__ == "__main__":
args = parse_args()
mean_L1 = calc_images_mean_L1(args.image1_path, args.image2_path)
print(mean_L1)
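For reference, a self-contained sketch of what a helper like `calc_images_mean_L1` in the hunk above plausibly computes — the mean absolute per-pixel difference between two images; this NumPy/Pillow version is an assumption, not the repository's implementation:

```python
import numpy as np
from PIL import Image

def calc_images_mean_L1(image1_path: str, image2_path: str) -> float:
    """Mean absolute per-pixel difference between two RGB images."""
    a = np.asarray(Image.open(image1_path).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(image2_path).convert("RGB"), dtype=np.float32)
    return float(np.abs(a - b).mean())
```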

.dockerignore
@@ -1,25 +1,9 @@
# use this file as a whitelist
*
!invokeai
!ldm
!pyproject.toml
!docker/docker-entrypoint.sh
!LICENSE
# Guard against pulling in any models that might exist in the directory tree
**/*.pt*
**/*.ckpt
# ignore frontend but whitelist dist
invokeai/frontend/
!invokeai/frontend/dist/
# ignore invokeai/assets but whitelist invokeai/assets/web
invokeai/assets/
!invokeai/assets/web/
# Byte-compiled / optimized / DLL files
**/__pycache__/
**/*.py[cod]
# Distribution / packaging
*.egg-info/
*.egg
**/node_modules
**/__pycache__
**/*.egg-info

.editorconfig
@@ -1,8 +1,5 @@
root = true
# All files
[*]
max_line_length = 80
charset = utf-8
end_of_line = lf
indent_size = 2
@@ -13,18 +10,3 @@ trim_trailing_whitespace = true
# Python
[*.py]
indent_size = 4
max_line_length = 120
# css
[*.css]
indent_size = 4
# flake8
[.flake8]
indent_size = 4
# Markdown MkDocs
[docs/**/*.md]
max_line_length = 80
indent_size = 4
indent_style = unset

.flake8
@@ -1,37 +0,0 @@
[flake8]
max-line-length = 120
extend-ignore =
# See https://github.com/PyCQA/pycodestyle/issues/373
E203,
# use Bugbear's B950 instead
E501,
# from black repo https://github.com/psf/black/blob/main/.flake8
E266, W503, B907
extend-select =
# Bugbear line length
B950
extend-exclude =
scripts/orig_scripts/*
ldm/models/*
ldm/modules/*
ldm/data/*
ldm/generate.py
ldm/util.py
ldm/simplet2i.py
per-file-ignores =
# B950 line too long
# W605 invalid escape sequence
# F841 assigned to but never used
# F401 imported but unused
tests/test_prompt_parser.py: B950, W605, F401
tests/test_textual_inversion.py: F841, B950
# B023 Function definition does not bind loop variable
scripts/legacy_api.py: F401, B950, B023, F841
ldm/invoke/__init__.py: F401
# B010 Do not call setattr with a constant attribute value
ldm/invoke/server_legacy.py: B010
# =====================
# flake-quote settings:
# =====================
# Set this to match black style:
inline-quotes = double

.git-blame-ignore-revs
@@ -0,0 +1,2 @@
b3dccfaeb636599c02effc377cdd8a87d658256c
218b6d0546b990fc449c876fb99f44b50c4daa35

.github/CODEOWNERS
@@ -2,60 +2,33 @@
/.github/workflows/ @lstein @blessedcoolant
# documentation
/docs/ @lstein @blessedcoolant
mkdocs.yml @lstein @ebr
/docs/ @lstein @blessedcoolant @hipsterusername
/mkdocs.yml @lstein @blessedcoolant
# nodes
/invokeai/app/ @Kyle0654 @blessedcoolant @psychedelicious @brandonrising
# installation and configuration
/pyproject.toml @lstein @ebr
/docker/ @lstein
/scripts/ @ebr @lstein @blessedcoolant
/installer/ @ebr @lstein
ldm/invoke/config @lstein @ebr
invokeai/assets @lstein @blessedcoolant
invokeai/configs @lstein @ebr @blessedcoolant
/ldm/invoke/_version.py @lstein @blessedcoolant
/pyproject.toml @lstein @blessedcoolant
/docker/ @lstein @blessedcoolant
/scripts/ @ebr @lstein
/installer/ @lstein @ebr
/invokeai/assets @lstein @ebr
/invokeai/configs @lstein
/invokeai/version @lstein @blessedcoolant
# web ui
/invokeai/frontend @blessedcoolant @psychedelicious
/invokeai/backend @blessedcoolant @psychedelicious
/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp
/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp
# generation and model management
/ldm/*.py @lstein @blessedcoolant
/ldm/generate.py @lstein @gregghelt2
/ldm/invoke/args.py @lstein @blessedcoolant
/ldm/invoke/ckpt* @lstein @blessedcoolant
/ldm/invoke/ckpt_generator @lstein @blessedcoolant
/ldm/invoke/CLI.py @lstein @blessedcoolant
/ldm/invoke/config @lstein @ebr @blessedcoolant
/ldm/invoke/generator @gregghelt2 @damian0815
/ldm/invoke/globals.py @lstein @blessedcoolant
/ldm/invoke/merge_diffusers.py @lstein @blessedcoolant
/ldm/invoke/model_manager.py @lstein @blessedcoolant
/ldm/invoke/txt2mask.py @lstein @blessedcoolant
/ldm/invoke/patchmatch.py @Kyle0654 @lstein
/ldm/invoke/restoration @lstein @blessedcoolant
# generation, model management, postprocessing
/invokeai/backend @damian0815 @lstein @blessedcoolant @gregghelt2 @StAlKeR7779 @brandonrising
# attention, textual inversion, model configuration
/ldm/models @damian0815 @gregghelt2 @blessedcoolant
/ldm/modules/textual_inversion_manager.py @lstein @blessedcoolant
/ldm/modules/attention.py @damian0815 @gregghelt2
/ldm/modules/diffusionmodules @damian0815 @gregghelt2
/ldm/modules/distributions @damian0815 @gregghelt2
/ldm/modules/ema.py @damian0815 @gregghelt2
/ldm/modules/embedding_manager.py @lstein
/ldm/modules/encoders @damian0815 @gregghelt2
/ldm/modules/image_degradation @damian0815 @gregghelt2
/ldm/modules/losses @damian0815 @gregghelt2
/ldm/modules/x_transformer.py @damian0815 @gregghelt2
# Nodes
apps/ @Kyle0654 @jpphoto
# legacy REST API
# these are dead code
#/ldm/invoke/pngwriter.py @CapableWeb
#/ldm/invoke/server_legacy.py @CapableWeb
#/scripts/legacy_api.py @CapableWeb
#/tests/legacy_tests.sh @CapableWeb
# front ends
/invokeai/frontend/CLI @lstein
/invokeai/frontend/install @lstein @ebr
/invokeai/frontend/merge @lstein @blessedcoolant
/invokeai/frontend/training @lstein @blessedcoolant
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp

@@ -65,6 +65,16 @@ body:
placeholder: 8GB
validations:
required: false
- type: input
id: version-number
attributes:
label: What version did you experience this issue on?
description: |
Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: X.X.X
validations:
required: true
- type: textarea
id: what-happened

.github/pull_request_template.md
@@ -0,0 +1,51 @@
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?

.github/stale.yaml
@@ -0,0 +1,19 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 28
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 14
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Please
update the ticket if this is still a problem on the latest release.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
Due to inactivity, this issue has been automatically closed. If this is
still a problem on the latest release, please recreate the issue.

.github/workflows/build-container.yml
@@ -3,19 +3,20 @@ on:
push:
branches:
- 'main'
- 'update/ci/docker/*'
- 'update/docker/*'
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
- '.dockerignore'
- 'invokeai/**'
- 'docker/Dockerfile'
- 'docker/docker-entrypoint.sh'
- 'workflows/build-container.yml'
tags:
- 'v*.*.*'
- 'v*'
workflow_dispatch:
permissions:
contents: write
packages: write
jobs:
docker:
@@ -23,23 +24,27 @@ jobs:
strategy:
fail-fast: false
matrix:
flavor:
- amd
- cuda
- cpu
include:
- flavor: amd
pip-extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
- flavor: cuda
pip-extra-index-url: ''
- flavor: cpu
pip-extra-index-url: 'https://download.pytorch.org/whl/cpu'
gpu-driver:
- cuda
- cpu
- rocm
runs-on: ubuntu-latest
name: ${{ matrix.flavor }}
name: ${{ matrix.gpu-driver }}
env:
PLATFORMS: 'linux/amd64,linux/arm64'
DOCKERFILE: 'docker/Dockerfile'
# torch/arm64 does not support GPU currently, so arm64 builds
# would not be GPU-accelerated.
# re-enable arm64 if there is sufficient demand.
# PLATFORMS: 'linux/amd64,linux/arm64'
PLATFORMS: 'linux/amd64'
steps:
- name: Free up more disk space on the runner
# https://github.com/actions/runner-images/issues/2840#issuecomment-1284059930
run: |
sudo rm -rf /usr/share/dotnet
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
sudo swapoff /mnt/swapfile
sudo rm -rf /mnt/swapfile
- name: Checkout
uses: actions/checkout@v3
@@ -50,17 +55,17 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
images: |
ghcr.io/${{ github.repository }}
${{ vars.DOCKERHUB_REPOSITORY }}
${{ env.DOCKERHUB_REPOSITORY }}
tags: |
type=ref,event=branch
type=ref,event=tag
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=pep440,pattern={{version}}
type=pep440,pattern={{major}}.{{minor}}
type=pep440,pattern={{major}}
type=sha,enable=true,prefix=sha-,format=short
flavor: |
latest=${{ matrix.flavor == 'cuda' && github.ref == 'refs/heads/main' }}
suffix=-${{ matrix.flavor }},onlatest=false
latest=${{ matrix.gpu-driver == 'cuda' && github.ref == 'refs/heads/main' }}
suffix=-${{ matrix.gpu-driver }},onlatest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
@@ -78,34 +83,33 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: github.event_name != 'pull_request' && vars.DOCKERHUB_REPOSITORY != ''
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# - name: Login to Docker Hub
# if: github.event_name != 'pull_request' && vars.DOCKERHUB_REPOSITORY != ''
# uses: docker/login-action@v2
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
id: docker_build
uses: docker/build-push-action@v4
with:
context: .
file: ${{ env.DOCKERFILE }}
file: docker/Dockerfile
platforms: ${{ env.PLATFORMS }}
push: ${{ github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' }}
push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: PIP_EXTRA_INDEX_URL=${{ matrix.pip-extra-index-url }}
cache-from: |
type=gha,scope=${{ github.ref_name }}-${{ matrix.flavor }}
type=gha,scope=main-${{ matrix.flavor }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-${{ matrix.flavor }}
type=gha,scope=${{ github.ref_name }}-${{ matrix.gpu-driver }}
type=gha,scope=main-${{ matrix.gpu-driver }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-${{ matrix.gpu-driver }}
- name: Docker Hub Description
if: github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' && vars.DOCKERHUB_REPOSITORY != ''
uses: peter-evans/dockerhub-description@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ vars.DOCKERHUB_REPOSITORY }}
short-description: ${{ github.event.repository.description }}
# - name: Docker Hub Description
# if: github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' && vars.DOCKERHUB_REPOSITORY != ''
# uses: peter-evans/dockerhub-description@v3
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
# repository: ${{ vars.DOCKERHUB_REPOSITORY }}
# short-description: ${{ github.event.repository.description }}

@@ -0,0 +1,28 @@
name: Close inactive issues
on:
schedule:
- cron: "00 4 * * *"
env:
DAYS_BEFORE_ISSUE_STALE: 30
DAYS_BEFORE_ISSUE_CLOSE: 14
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v8
with:
days-before-issue-stale: ${{ env.DAYS_BEFORE_ISSUE_STALE }}
days-before-issue-close: ${{ env.DAYS_BEFORE_ISSUE_CLOSE }}
stale-issue-label: "Inactive Issue"
stale-issue-message: "There has been no activity in this issue for ${{ env.DAYS_BEFORE_ISSUE_STALE }} days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release."
close-issue-message: "Due to inactivity, this issue was automatically closed. If you are still experiencing the issue, please recreate the issue."
days-before-pr-stale: -1
days-before-pr-close: -1
exempt-issue-labels: "Active Issue"
repo-token: ${{ secrets.GITHUB_TOKEN }}
operations-per-run: 500

@@ -2,15 +2,19 @@ name: Lint frontend
on:
pull_request:
paths:
- 'invokeai/frontend/**'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
push:
paths:
- 'invokeai/frontend/**'
branches:
- 'main'
merge_group:
workflow_dispatch:
defaults:
run:
working-directory: invokeai/frontend
working-directory: invokeai/frontend/web
jobs:
lint-frontend:
@@ -23,7 +27,7 @@ jobs:
node-version: '18'
- uses: actions/checkout@v3
- run: 'yarn install --frozen-lockfile'
- run: 'yarn tsc'
- run: 'yarn run madge'
- run: 'yarn run lint --max-warnings=0'
- run: 'yarn run prettier --check'
- run: 'yarn run lint:tsc'
- run: 'yarn run lint:madge'
- run: 'yarn run lint:eslint'
- run: 'yarn run lint:prettier'

@@ -2,8 +2,10 @@ name: mkdocs-material
on:
push:
branches:
- 'main'
- 'development'
- 'refs/heads/main'
permissions:
contents: write
jobs:
mkdocs-material:
@@ -41,7 +43,7 @@ jobs:
--verbose
- name: deploy to gh-pages
if: ${{ github.ref == 'refs/heads/v2.3' }}
if: ${{ github.ref == 'refs/heads/main' }}
run: |
python -m \
mkdocs gh-deploy \

@@ -3,7 +3,7 @@ name: PyPI Release
on:
push:
paths:
- 'ldm/invoke/_version.py'
- 'invokeai/version/invokeai_version.py'
workflow_dispatch:
jobs:

.github/workflows/style-checks.yml
@@ -0,0 +1,27 @@
name: style checks
# just formatting for now
# TODO: add isort and flake8 later
on:
pull_request:
push:
branches: main
jobs:
black:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install dependencies with pip
run: |
pip install black
# - run: isort --check-only .
- run: black --check .
# - run: flake8

@@ -1,67 +0,0 @@
name: Test invoke.py pip
on:
pull_request:
paths-ignore:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
os: macOS-12
github-env: $GITHUB_ENV
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
steps:
- run: 'echo "No build required"'

@@ -3,19 +3,7 @@ on:
push:
branches:
- 'main'
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
pull_request:
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
types:
- 'ready_for_review'
- 'opened'
@@ -36,19 +24,12 @@ jobs:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
@@ -66,14 +47,6 @@ jobs:
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
env:
@@ -83,15 +56,23 @@ jobs:
id: checkout-sources
uses: actions/checkout@v3
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}
- name: Check for changed python files
id: changed-files
uses: tj-actions/changed-files@v37
with:
files_yaml: |
python:
- 'pyproject.toml'
- 'invokeai/**'
- '!invokeai/frontend/web/**'
- 'tests/**'
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' }}
- name: set test prompt to main branch validation
if: steps.changed-files.outputs.python_any_changed == 'true'
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
- name: setup python
if: steps.changed-files.outputs.python_any_changed == 'true'
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
@@ -99,6 +80,7 @@ jobs:
cache-dependency-path: pyproject.toml
- name: install invokeai
if: steps.changed-files.outputs.python_any_changed == 'true'
env:
PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
run: >
@@ -106,43 +88,42 @@ jobs:
--editable=".[test]"
- name: run pytest
if: steps.changed-files.outputs.python_any_changed == 'true'
id: run-pytest
run: pytest
- name: set INVOKEAI_OUTDIR
run: >
python -c
"import os;from ldm.invoke.globals import Globals;OUTDIR=os.path.join(Globals.root,str('outputs'));print(f'INVOKEAI_OUTDIR={OUTDIR}')"
>> ${{ matrix.github-env }}
# - name: run invokeai-configure
# env:
# HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
# run: >
# invokeai-configure
# --yes
# --default_only
# --full-precision
# # can't use fp16 weights without a GPU
- name: run invokeai-configure
id: run-preload-models
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
run: >
invokeai-configure
--yes
--default_only
--full-precision
# can't use fp16 weights without a GPU
# - name: run invokeai
# id: run-invokeai
# env:
# # Set offline mode to make sure configure preloaded successfully.
# HF_HUB_OFFLINE: 1
# HF_DATASETS_OFFLINE: 1
# TRANSFORMERS_OFFLINE: 1
# INVOKEAI_OUTDIR: ${{ github.workspace }}/results
# run: >
# invokeai
# --no-patchmatch
# --no-nsfw_checker
# --precision=float32
# --always_use_cpu
# --use_memory_db
# --outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
# --from_file ${{ env.TEST_PROMPTS }}
- name: run invokeai
id: run-invokeai
env:
# Set offline mode to make sure configure preloaded successfully.
HF_HUB_OFFLINE: 1
HF_DATASETS_OFFLINE: 1
TRANSFORMERS_OFFLINE: 1
run: >
invokeai
--no-patchmatch
--no-nsfw_checker
--from_file ${{ env.TEST_PROMPTS }}
--outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
- name: Archive results
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results
path: ${{ env.INVOKEAI_OUTDIR }}
# - name: Archive results
# env:
# INVOKEAI_OUTDIR: ${{ github.workspace }}/results
# uses: actions/upload-artifact@v3
# with:
# name: results
# path: ${{ env.INVOKEAI_OUTDIR }}

.gitignore
@@ -9,6 +9,8 @@ models/ldm/stable-diffusion-v1/model.ckpt
configs/models.user.yaml
config/models.user.yml
invokeai.init
.version
.last_model
# ignore the Anaconda/Miniconda installer used while building Docker image
anaconda.sh
@@ -32,11 +34,10 @@ __pycache__/
.Python
build/
develop-eggs/
dist/
# dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
@@ -63,6 +64,7 @@ pip-delete-this-directory.txt
htmlcov/
.tox/
.nox/
.coveragerc
.coverage
.coverage.*
.cache
@@ -73,8 +75,10 @@ cov.xml
*.py,cover
.hypothesis/
.pytest_cache/
.pytest.ini
cover/
junit/
notes/
# Translations
*.mo
@@ -197,8 +201,11 @@ checkpoints
# If it's a Mac
.DS_Store
invokeai/frontend/yarn.lock
invokeai/frontend/node_modules
# Let the frontend manage its own gitignore
!invokeai/frontend/*
!invokeai/frontend/web/*
# Scratch folder
.scratch/
@@ -213,11 +220,6 @@ gfpgan/
# config file (will be created by installer)
configs/models.yaml
# weights (will be created by installer)
models/ldm/stable-diffusion-v1/*.ckpt
models/clipseg
models/gfpgan
# ignore initfile
.invokeai
@@ -232,4 +234,3 @@ installer/install.bat
installer/install.sh
installer/update.bat
installer/update.sh

.pre-commit-config.yaml
@@ -1,41 +1,10 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
# See https://pre-commit.com/ for usage and config
repos:
- repo: https://github.com/psf/black
rev: 23.1.0
hooks:
- id: black
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
- repo: https://github.com/PyCQA/flake8
rev: 6.0.0
hooks:
- id: flake8
additional_dependencies:
- flake8-black
- flake8-bugbear
- flake8-comprehensions
- flake8-simplify
- repo: https://github.com/pre-commit/mirrors-prettier
rev: 'v3.0.0-alpha.4'
hooks:
- id: prettier
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-added-large-files
- id: check-executables-have-shebangs
- id: check-shebang-scripts-are-executable
- id: check-merge-conflict
- id: check-symlinks
- id: check-toml
- id: end-of-file-fixer
- id: no-commit-to-branch
args: ['--branch', 'main']
- id: trailing-whitespace
- repo: local
hooks:
- id: black
name: black
stages: [commit]
language: system
entry: black
types: [python]

.prettierignore
@@ -1,14 +0,0 @@
invokeai/frontend/.husky
invokeai/frontend/patches
# Ignore artifacts:
build
coverage
static
invokeai/frontend/dist
# Ignore all HTML files:
*.html
# Ignore deprecated docs
docs/installation/deprecated_documentation

.prettierrc.yaml
@@ -1,9 +1,9 @@
embeddedLanguageFormatting: auto
endOfLine: lf
singleQuote: true
semi: true
trailingComma: es5
tabWidth: 2
useTabs: false
singleQuote: true
quoteProps: as-needed
embeddedLanguageFormatting: auto
overrides:
- files: '*.md'
options:
@@ -11,9 +11,3 @@ overrides:
printWidth: 80
parser: markdown
cursorOffset: -1
- files: docs/**/*.md
options:
tabWidth: 4
- files: 'invokeai/frontend/public/locales/*.json'
options:
tabWidth: 4

pytest.ini
@@ -1,5 +0,0 @@
[pytest]
DJANGO_SETTINGS_MODULE = webtas.settings
; python_files = tests.py test_*.py *_tests.py
addopts = --cov=. --cov-config=.coveragerc --cov-report xml:cov.xml

LICENSE
@@ -1,21 +1,176 @@
MIT License
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
Copyright (c) 2022 InvokeAI Team
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
1. Definitions.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

LICENSE-SDXL.txt

@ -0,0 +1,290 @@
Copyright (c) 2023 Stability AI
CreativeML Open RAIL++-M License dated July 26, 2023
Section I: PREAMBLE
Multimodal generative models are being widely adopted and used, and
have the potential to transform the way artists, among other
individuals, conceive and benefit from AI or ML technologies as a tool
for content creation.
Notwithstanding the current and potential benefits that these
artifacts can bring to society at large, there are also concerns about
potential misuses of them, either due to their technical limitations
or ethical considerations.
In short, this license strives for both the open and responsible
downstream use of the accompanying model. When it comes to the open
character, we took inspiration from open source permissive licenses
regarding the grant of IP rights. Referring to the downstream
responsible use, we added use-based restrictions not permitting the
use of the model in very specific scenarios, in order for the licensor
to be able to enforce the license in case potential misuses of the
Model may occur. At the same time, we strive to promote open and
responsible research on generative models for art and content
generation.
Even though downstream derivative versions of the model could be
released under different licensing terms, the latter will always have
to include - at minimum - the same use-based restrictions as the ones
in the original license (this license). We believe in the intersection
between open and responsible AI development; thus, this agreement aims
to strike a balance between both in order to enable responsible
open-science in the field of AI.
This CreativeML Open RAIL++-M License governs the use of the model
(and its derivatives) and is informed by the model card associated
with the model.
NOW THEREFORE, You and Licensor agree as follows:
Definitions
"License" means the terms and conditions for use, reproduction, and
Distribution as defined in this document.
"Data" means a collection of information and/or content extracted from
the dataset used with the Model, including to train, pretrain, or
otherwise evaluate the Model. The Data is not licensed under this
License.
"Output" means the results of operating a Model as embodied in
informational content resulting therefrom.
"Model" means any accompanying machine-learning based assemblies
(including checkpoints), consisting of learnt weights, parameters
(including optimizer states), corresponding to the model architecture
as embodied in the Complementary Material, that have been trained or
tuned, in whole or in part on the Data, using the Complementary
Material.
"Derivatives of the Model" means all modifications to the Model, works
based on the Model, or any other model which is created or initialized
by transfer of patterns of the weights, parameters, activations or
output of the Model, to the other model, in order to cause the other
model to perform similarly to the Model, including - but not limited
to - distillation methods entailing the use of intermediate data
representations or methods based on the generation of synthetic data
by the Model for training the other model.
"Complementary Material" means the accompanying source code and
scripts used to define, run, load, benchmark or evaluate the Model,
and used to prepare data for training or evaluation, if any. This
includes any accompanying documentation, tutorials, examples, etc, if
any.
"Distribution" means any transmission, reproduction, publication or
other sharing of the Model or Derivatives of the Model to a third
party, including providing the Model as a hosted service made
available by electronic or other remote means - e.g. API-based or web
access.
"Licensor" means the copyright owner or entity authorized by the
copyright owner that is granting the License, including the persons or
entities that may have rights in the Model and/or distributing the
Model.
"You" (or "Your") means an individual or Legal Entity exercising
permissions granted by this License and/or making use of the Model for
whichever purpose and in any field of use, including usage of the
Model in an end-use application - e.g. chatbot, translator, image
generator.
"Third Parties" means individuals or legal entities that are not under
common control with Licensor or You.
"Contribution" means any work of authorship, including the original
version of the Model and any modifications or additions to that Model
or Derivatives of the Model thereof, that is intentionally submitted
to Licensor for inclusion in the Model by the copyright owner or by an
individual or Legal Entity authorized to submit on behalf of the
copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent to
the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control
systems, and issue tracking systems that are managed by, or on behalf
of, the Licensor for the purpose of discussing and improving the
Model, but excluding communication that is conspicuously marked or
otherwise designated in writing by the copyright owner as "Not a
Contribution."
"Contributor" means Licensor and any individual or Legal Entity on
behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Model.
Section II: INTELLECTUAL PROPERTY RIGHTS
Both copyright and patent grants apply to the Model, Derivatives of
the Model and Complementary Material. The Model and Derivatives of the
Model are subject to additional terms as described in
Section III.
Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare, publicly display, publicly
perform, sublicense, and distribute the Complementary Material, the
Model, and Derivatives of the Model.
Grant of Patent License. Subject to the terms and conditions of this
License and where and as applicable, each Contributor hereby grants to
You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this paragraph) patent license to
make, have made, use, offer to sell, sell, import, and otherwise
transfer the Model and the Complementary Material, where such license
applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by
combination of their Contribution(s) with the Model to which such
Contribution(s) was submitted. If You institute patent litigation
against any entity (including a cross-claim or counterclaim in a
lawsuit) alleging that the Model and/or Complementary Material or a
Contribution incorporated within the Model and/or Complementary
Material constitutes direct or contributory patent infringement, then
any patent licenses granted to You under this License for the Model
and/or Work shall terminate as of the date such litigation is asserted
or filed.
Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION
Distribution and Redistribution. You may host for Third Party remote
access purposes (e.g. software-as-a-service), reproduce and distribute
copies of the Model or Derivatives of the Model thereof in any medium,
with or without modifications, provided that You meet the following
conditions: Use-based restrictions as referenced in paragraph 5 MUST
be included as an enforceable provision by You in any type of legal
agreement (e.g. a license) governing the use and/or distribution of
the Model or Derivatives of the Model, and You shall give notice to
subsequent users You Distribute to, that the Model or Derivatives of
the Model are subject to paragraph 5. This provision does not apply to
the use of Complementary Material. You must give any Third Party
recipients of the Model or Derivatives of the Model a copy of this
License; You must cause any modified files to carry prominent notices
stating that You changed the files; You must retain all copyright,
patent, trademark, and attribution notices excluding those notices
that do not pertain to any part of the Model, Derivatives of the
Model. You may add Your own copyright statement to Your modifications
and may provide additional or different license terms and conditions -
respecting paragraph 4.a. - for use, reproduction, or Distribution of
Your modifications, or for any such Derivatives of the Model as a
whole, provided Your use, reproduction, and Distribution of the Model
otherwise complies with the conditions stated in this License.
Use-based restrictions. The restrictions set forth in Attachment A are
considered Use-based restrictions. Therefore You cannot use the Model
and the Derivatives of the Model for the specified restricted
uses. You may use the Model subject to this License, including only
for lawful purposes and in accordance with the License. Use may
include creating any content with, finetuning, updating, running,
training, evaluating and/or reparametrizing the Model. You shall
require all of Your users who use the Model or a Derivative of the
Model to comply with the terms of this paragraph (paragraph 5).
The Output You Generate. Except as set forth herein, Licensor claims
no rights in the Output You generate using the Model. You are
accountable for the Output you generate and its subsequent uses. No
use of the output can contravene any provision as stated in the
License.
Section IV: OTHER PROVISIONS
Updates and Runtime Restrictions. To the maximum extent permitted by
law, Licensor reserves the right to restrict (remotely or otherwise)
usage of the Model in violation of this License.
Trademarks and related. Nothing in this License permits You to make
use of Licensors trademarks, trade names, logos or to otherwise
suggest endorsement or misrepresent the relationship between the
parties; and any rights not expressly granted herein are reserved by
the Licensors.
Disclaimer of Warranty. Unless required by applicable law or agreed to
in writing, Licensor provides the Model and the Complementary Material
(and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Model, Derivatives of
the Model, and the Complementary Material and assume any risks
associated with Your exercise of permissions under this License.
Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise, unless
required by applicable law (such as deliberate and grossly negligent
acts) or agreed to in writing, shall any Contributor be liable to You
for damages, including any direct, indirect, special, incidental, or
consequential damages of any character arising as a result of this
License or out of the use or inability to use the Model and the
Complementary Material (including but not limited to damages for loss
of goodwill, work stoppage, computer failure or malfunction, or any
and all other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
Accepting Warranty or Additional Liability. While redistributing the
Model, Derivatives of the Model and the Complementary Material
thereof, You may choose to offer, and charge a fee for, acceptance of
support, warranty, indemnity, or other liability obligations and/or
rights consistent with this License. However, in accepting such
obligations, You may act only on Your own behalf and on Your sole
responsibility, not on behalf of any other Contributor, and only if
You agree to indemnify, defend, and hold each Contributor harmless for
any liability incurred by, or claims asserted against, such
Contributor by reason of your accepting any such warranty or
additional liability.
If any provision of this License is held to be invalid, illegal or
unenforceable, the remaining provisions shall be unaffected thereby
and remain valid as if such provision had not been set forth herein.
END OF TERMS AND CONDITIONS
Attachment A
Use Restrictions
You agree not to use the Model or Derivatives of the Model:
* In any way that violates any applicable national, federal, state,
local or international law or regulation;
* For the purpose of exploiting, harming or attempting to exploit or
harm minors in any way;
* To generate or disseminate verifiably false information and/or
content with the purpose of harming others;
* To generate or disseminate personal identifiable information that
can be used to harm an individual;
* To defame, disparage or otherwise harass others;
* For fully automated decision making that adversely impacts an
individuals legal rights or otherwise creates or modifies a
binding, enforceable obligation;
* For any use intended to or which has the effect of discriminating
against or harming individuals or groups based on online or offline
social behavior or known or predicted personal or personality
characteristics;
* To exploit any of the vulnerabilities of a specific group of persons
based on their age, social, physical or mental characteristics, in
order to materially distort the behavior of a person pertaining to
that group in a manner that causes or is likely to cause that person
or another person physical or psychological harm;
* For any use intended to or which has the effect of discriminating
against individuals or groups based on legally protected
characteristics or categories;
* To provide medical advice and medical results interpretation;
* To generate or disseminate information for the purpose to be used
for administration of justice, law enforcement, immigration or
asylum processes, such as predicting an individual will commit
fraud/crime commitment (e.g. by text profiling, drawing causal
relationships between assertions made in documents, indiscriminate
and arbitrarily-targeted use).

251
README.md

@ -1,8 +1,11 @@
<div align="center">
![project logo](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/invoke_ai_banner.png)
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/1a917d94-e099-4fa1-a70f-7dd8d0691018)
# Invoke AI - Generative AI for Professional Creatives
## Professional Creative Tools for Stable Diffusion, Custom-Trained Models, and more.
To learn more about Invoke AI, get started instantly, or implement our Business solutions, visit [invoke.ai](https://invoke.ai)
# InvokeAI: A Stable Diffusion Toolkit
[![discord badge]][discord link]
@ -33,13 +36,23 @@
</div>
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI is a leading creative engine built to empower professionals
and enthusiasts alike. Generate and create stunning visual media using
the latest AI-driven technologies. InvokeAI offers an industry leading
Web Interface, interactive Command Line Interface, and also serves as
the foundation for multiple commercial products.
**Quick links**: [[How to Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
_Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
**Quick links**: [[How to
Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a
href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a
href="https://invoke-ai.github.io/InvokeAI/">Documentation and
Tutorials</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/">Code and
Downloads</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
[<a
href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion,
Ideas & Q&A</a>]
<div align="center">
@ -49,22 +62,30 @@ requests. Be sure to use the provided templates. They will help us diagnose issu
## Table of Contents
1. [Quick Start](#getting-started-with-invokeai)
2. [Installation](#detailed-installation-instructions)
3. [Hardware Requirements](#hardware-requirements)
4. [Features](#features)
5. [Latest Changes](#latest-changes)
6. [Troubleshooting](#troubleshooting)
7. [Contributing](#contributing)
8. [Contributors](#contributors)
9. [Support](#support)
10. [Further Reading](#further-reading)
## Table of Contents 📝
## Getting Started with InvokeAI
**Getting Started**
1. 🏁 [Quick Start](#quick-start)
2. 🖥️ [Hardware Requirements](#hardware-requirements)
**More About Invoke**
1. 🌟 [Features](#features)
2. 📣 [Latest Changes](#latest-changes)
3. 🛠️ [Troubleshooting](#troubleshooting)
**Supporting the Project**
1. 🤝 [Contributing](#contributing)
2. 👥 [Contributors](#contributors)
3. 💕 [Support](#support)
## Quick Start
For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
If upgrading from version 2.3, please read [Migrating a 2.3 root
directory to 3.0](#migrating-to-3) first.
### Automatic Installer (suggested for 1st time users)
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
@ -73,9 +94,8 @@ For full installation and upgrade instructions, please see:
3. Unzip the file.
4. If you are on Windows, double-click on the `install.bat` script. On
macOS, open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. On Linux, run `install.sh`.
4. **Windows:** double-click on the `install.bat` script. **macOS:** Open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. **Linux:** run `install.sh`.
5. You'll be asked to confirm the location of the folder in which
to install InvokeAI and its image generation model files. Pick a
@ -84,7 +104,7 @@ installing lots of models.
6. Wait while the installer does its thing. After installing the software,
the installer will launch a script that lets you configure InvokeAI and
select a set of starting image generaiton models.
select a set of starting image generation models.
7. Find the folder that InvokeAI was installed into (it is not the
same as the unpacked zip file directory!) The default location of this
@ -101,10 +121,12 @@ and go to http://localhost:9090.
10. Type `banana sushi` in the box on the top left and click `Invoke`
### Command-Line Installation (for users familiar with Terminals)
### Command-Line Installation (for developers and users familiar with Terminals)
You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are
not supported.
You must have Python 3.9 through 3.11 installed on your machine. Earlier or
later versions are not supported.
Node.js also needs to be installed, along with yarn (which can be
installed with the command `npm install -g yarn` if needed).
1. Open a command-line window on your machine. PowerShell is recommended for Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
@ -139,7 +161,7 @@ not supported.
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
```
_For Linux with an AMD GPU:_
@ -148,6 +170,11 @@ not supported.
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
_For non-GPU systems:_
```terminal
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
_For Macintoshes, either Intel or M1/M2:_
```sh
@ -157,22 +184,24 @@ not supported.
6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
```terminal
invokeai-configure
invokeai-configure --root .
```
Don't miss the dot at the end!
7. Launch the web server (do it every time you run InvokeAI):
```terminal
invokeai --web
invokeai-web
```
8. Point your browser to http://localhost:9090 to bring up the web interface.
9. Type `banana sushi` in the box on the top left and click `Invoke`.
Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
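For example, on each new terminal session before launching (a minimal sketch, assuming the `.venv` created in the steps above):

```bash
# Linux/macOS
source .venv/bin/activate

# Windows (PowerShell)
.venv\Scripts\activate
```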
### Detailed Installation Instructions
## Detailed Installation Instructions
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
@ -180,6 +209,128 @@ AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
<a name="migrating-to-3"></a>
### Migrating a v2.3 InvokeAI root directory
The InvokeAI root directory is where the InvokeAI startup file,
installed models, and generated images are stored. It is ordinarily
named `invokeai` and located in your home directory. The contents and
layout of this directory has changed between versions 2.3 and 3.0 and
cannot be used directly.
We currently recommend that you use the installer to create a new root
directory named differently from the 2.3 one, e.g. `invokeai-3` and
then use a migration script to copy your 2.3 models into the new
location. However, if you choose, you can upgrade this directory in
place. This section gives both recipes.
#### Creating a new root directory and migrating old models
This is the safer recipe because it leaves your old root directory in
place to fall back on.
1. Follow the instructions above to create and install InvokeAI in a
directory that has a different name from the 2.3 invokeai directory.
In this example, we will use "invokeai-3".
2. When you are prompted to select models to install, select a minimal
set of models, such as stable-diffusion-v1.5 only.
3. After installation is complete, launch `invokeai.sh` (Linux/Mac) or
`invokeai.bat` and select option 8 "Open the developers console". This
will take you to the command line.
4. Issue the command `invokeai-migrate3 --from /path/to/v2.3-root --to
/path/to/invokeai-3-root`. Provide the correct `--from` and `--to`
paths for your v2.3 and v3.0 root directories respectively.
This will copy and convert your old models from 2.3 format to 3.0
format and create a new `models` directory in the 3.0 directory. The
old models directory (which contains the models selected at install
time) will be renamed `models.orig` and can be deleted once you have
confirmed that the migration was successful.
If you wish, you can pass the 2.3 root directory to both `--from` and
`--to` in order to update in place. Warning: this directory will no
longer be usable with InvokeAI 2.3.
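For reference, a typical invocation might look like this (a sketch only; the paths are illustrative and must be replaced with your actual root directories):

```bash
# Hypothetical paths; substitute your real v2.3 and v3.0 roots.
invokeai-migrate3 \
    --from ~/invokeai \
    --to ~/invokeai-3
```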
#### Migrating in place
For the adventurous, you may do an in-place upgrade from 2.3 to 3.0
without touching the command line. **This recipe does not work on
Windows platforms due to a bug in the Windows version of the 2.3
upgrade script.** See the next section for a Windows recipe.
##### For Mac and Linux Users:
1. Launch the InvokeAI launcher script in your current v2.3 root directory.
2. Select option [9] "Update InvokeAI" to bring up the updater dialog.
3. Select option [1] to upgrade to the latest release.
4. Once the upgrade is finished you will be returned to the launcher
menu. Select option [7] "Re-run the configure script to fix a broken
install or to complete a major upgrade".
This will run the configure script against the v2.3 directory and
update it to the 3.0 format. The following files will be replaced:
- The invokeai.init file, replaced by invokeai.yaml
- The models directory
- The configs/models.yaml model index
The original versions of these files will be saved with the suffix
".orig" appended to the end. Once you have confirmed that the upgrade
worked, you can safely remove these files. Alternatively you can
restore a working v2.3 directory by removing the new files and
restoring the ".orig" files' original names.
##### For Windows Users:
Windows users can upgrade with the following procedure:
1. Enter the 2.3 root directory you wish to upgrade
2. Launch `invoke.sh` or `invoke.bat`
3. Select the "Developer's console" option [8]
4. Type the following commands
```
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0" --use-pep517 --upgrade
invokeai-configure --root .
```
(Replace `v3.0.0` with the current release number if this document is out of date).
The first command will install and upgrade new software to run
InvokeAI. The second will prepare the 2.3 directory for use with 3.0.
You may now launch the WebUI in the usual way, by selecting option [1]
from the launcher script.
#### Migrating Images
The migration script will migrate your invokeai settings and models,
including textual inversion models, LoRAs and merges that you may have
installed previously. However it does **not** migrate the generated
images stored in your 2.3-format outputs directory. To do this, you
need to run an additional step:
1. From a working InvokeAI 3.0 root directory, start the launcher and
enter menu option [8] to open the "developer's console".
2. At the developer's console command line, type the command:
```bash
invokeai-import-images
```
3. This will lead you through the process of confirming the desired
source and destination for the imported images. The images will
appear in the gallery board of your choice, and contain the
original prompt, model name, and other parameters used to generate
the image.
(Many kudos to **techjedi** for contributing this script.)
## Hardware Requirements
InvokeAI is supported across Linux, Windows and macOS. Linux
@ -190,21 +341,20 @@ AMD card (using the ROCm driver).
You will need one of the following:
- An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- An NVIDIA-based graphics card with 4 GB or more VRAM memory. 6-8 GB
of VRAM is highly recommended for rendering using the Stable
Diffusion XL models
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4GB or more VRAM memory. (Linux only)
- An AMD-based graphics card with 4GB or more VRAM memory (Linux
only), 6-8 GB for XL rendering.
We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.
### Memory
**Memory** - At least 12 GB Main Memory RAM.
- At least 12 GB Main Memory RAM.
### Disk
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
## Features
@ -218,28 +368,23 @@ InvokeAI offers a locally hosted Web Server & React Frontend, with an industry l
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Advanced Prompt Syntax*
### *Node Architecture & Editor (Beta)*
InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.
Invoke AI's backend is built on a graph-based execution architecture. This allows for customizable generation pipelines to be developed by professional users looking to create specific workflows to support their production use-cases, and will be extended in the future with additional capabilities.
### *Command Line Interface*
### *Board & Gallery Management*
For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.
Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any Image-base UI element in the application, and rich metadata within the Image allows for easy recall of key prompts or settings used in your workflow.
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
- *Popular Sampler Support*
- *Upscaling & Face Restoration Tools*
- *SD 2.0, 2.1, XL support*
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
### Coming Soon
- *Node-Based Architecture & UI*
- And more...
- *Node-Based Architecture*
- *Node-Based Plug-&-Play UI (Beta)*
### Latest Changes
@ -247,7 +392,7 @@ For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).
## Troubleshooting
### Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
@ -277,8 +422,6 @@ This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
their time, hard work and effort.
Thanks to [Weblate](https://weblate.org/) for generously providing translation services to this project.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the Discord.

4
coverage/.gitignore vendored Normal file

@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore

13
docker/.env.sample Normal file

@ -0,0 +1,13 @@
## Make a copy of this file named `.env` and fill in the values below.
## Any environment variables supported by InvokeAI can be specified here.
# INVOKEAI_ROOT is the path on the local filesystem where InvokeAI will store data.
# Outputs will also be stored here by default.
# This **must** be an absolute path.
INVOKEAI_ROOT=
HUGGINGFACE_TOKEN=
## optional variables specific to the docker setup
# GPU_DRIVER=cuda
# CONTAINER_UID=1000

docker/Dockerfile

@ -1,103 +1,129 @@
# syntax=docker/dockerfile:1
# syntax=docker/dockerfile:1.4
ARG PYTHON_VERSION=3.9
##################
## base image ##
##################
FROM python:${PYTHON_VERSION}-slim AS python-base
## Builder stage
LABEL org.opencontainers.image.authors="mauwii@outlook.de"
FROM library/ubuntu:22.04 AS builder
# prepare for buildkit cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' >/etc/apt/apt.conf.d/keep-cache
ARG DEBIAN_FRONTEND=noninteractive
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt update && apt-get install -y \
git \
python3.10-venv \
python3-pip \
build-essential
# Install necessary packages
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
libopencv-dev=4.5.*
ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
# set working directory and env
ARG APPDIR=/usr/src
ARG APPNAME=InvokeAI
WORKDIR ${APPDIR}
ENV PATH ${APPDIR}/${APPNAME}/bin:$PATH
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# don't fall back to legacy build system
ENV PIP_USE_PEP517=1
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ARG TORCH_VERSION=2.0.1
ARG TORCHVISION_VERSION=0.15.2
ARG GPU_DRIVER=cuda
ARG TARGETPLATFORM="linux/amd64"
# unused but available
ARG BUILDPLATFORM
#######################
## build pyproject ##
#######################
FROM python-base AS pyproject-builder
WORKDIR ${INVOKEAI_SRC}
# Install dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
build-essential=12.9 \
gcc=4:10.2.* \
python3-dev=3.9.*
# Install pytorch before all other pip packages
# NOTE: there are no pytorch builds for arm64 + cuda, only cpu
# x86_64/CUDA is default
RUN --mount=type=cache,target=/root/.cache/pip \
python3 -m venv ${VIRTUAL_ENV} &&\
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.4.2"; \
else \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu118"; \
fi &&\
pip install $extra_index_url_arg \
torch==$TORCH_VERSION \
torchvision==$TORCHVISION_VERSION
# prepare pip for buildkit cache
ARG PIP_CACHE_DIR=/var/cache/buildkit/pip
ENV PIP_CACHE_DIR ${PIP_CACHE_DIR}
RUN mkdir -p ${PIP_CACHE_DIR}
# Install the local package.
# Editable mode helps use the same image for development:
# the local working copy can be bind-mounted into the image
# at path defined by ${INVOKEAI_SRC}
COPY invokeai ./invokeai
COPY pyproject.toml ./
RUN --mount=type=cache,target=/root/.cache/pip \
# xformers + triton fails to install on arm64
if [ "$GPU_DRIVER" = "cuda" ] && [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
pip install -e ".[xformers]"; \
else \
pip install -e "."; \
fi
# create virtual environment
RUN --mount=type=cache,target=${PIP_CACHE_DIR},sharing=locked \
python3 -m venv "${APPNAME}" \
--upgrade-deps
# #### Build the Web UI ------------------------------------
# copy sources
COPY --link . .
FROM node:18 AS web-builder
WORKDIR /build
COPY invokeai/frontend/web/ ./
RUN --mount=type=cache,target=/usr/lib/node_modules \
npm install --include dev
RUN --mount=type=cache,target=/usr/lib/node_modules \
yarn vite build
# install pyproject.toml
ARG PIP_EXTRA_INDEX_URL
ENV PIP_EXTRA_INDEX_URL ${PIP_EXTRA_INDEX_URL}
RUN --mount=type=cache,target=${PIP_CACHE_DIR},sharing=locked \
"${APPNAME}/bin/pip" install .
#### Runtime stage ---------------------------------------
FROM library/ubuntu:22.04 AS runtime
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
RUN apt update && apt install -y --no-install-recommends \
git \
curl \
vim \
tmux \
ncdu \
iotop \
bzip2 \
gosu \
libglib2.0-0 \
libgl1-mesa-glx \
python3-venv \
python3-pip \
build-essential \
libopencv-dev \
libstdc++-10-dev &&\
apt-get clean && apt-get autoclean
# globally add magic-wormhole
# for ease of transferring data to and from the container
# when running in sandboxed cloud environments; e.g. Runpod etc.
RUN pip install magic-wormhole
ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV INVOKEAI_ROOT=/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$INVOKEAI_SRC:$PATH"
# --link requires buildkit w/ dockerfile syntax 1.4
COPY --link --from=builder ${INVOKEAI_SRC} ${INVOKEAI_SRC}
COPY --link --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY --link --from=web-builder /build/dist ${INVOKEAI_SRC}/invokeai/frontend/web/dist
# Link amdgpu.ids for ROCm builds
# contributed by https://github.com/Rubonnek
RUN mkdir -p "/opt/amdgpu/share/libdrm" &&\
ln -s "/usr/share/libdrm/amdgpu.ids" "/opt/amdgpu/share/libdrm/amdgpu.ids"
WORKDIR ${INVOKEAI_SRC}
# build patchmatch
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python3 -c "from patchmatch import patch_match"
#####################
## runtime image ##
#####################
FROM python-base AS runtime
# Create unprivileged user and make the local dir
RUN useradd --create-home --shell /bin/bash -u 1000 --comment "container local user" invoke
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R invoke:invoke ${INVOKEAI_ROOT}
# Create a new user
ARG UNAME=appuser
RUN useradd \
--no-log-init \
-m \
-U \
"${UNAME}"
# create volume directory
ARG VOLUME_DIR=/data
RUN mkdir -p "${VOLUME_DIR}" \
&& chown -R "${UNAME}" "${VOLUME_DIR}"
# setup runtime environment
USER ${UNAME}
COPY --chown=${UNAME} --from=pyproject-builder ${APPDIR}/${APPNAME} ${APPNAME}
ENV INVOKEAI_ROOT ${VOLUME_DIR}
ENV TRANSFORMERS_CACHE ${VOLUME_DIR}/.cache
ENV INVOKE_MODEL_RECONFIGURE "--yes --default_only"
EXPOSE 9090
ENTRYPOINT [ "invokeai" ]
CMD [ "--web", "--host", "0.0.0.0", "--port", "9090" ]
VOLUME [ "${VOLUME_DIR}" ]
COPY docker/docker-entrypoint.sh ./
ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]
CMD ["invokeai-web", "--host", "0.0.0.0"]

77
docker/README.md Normal file

@ -0,0 +1,77 @@
# InvokeAI Containerized
All commands are to be run from the `docker` directory: `cd docker`
#### Linux
1. Ensure buildkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04).
- The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
3. Ensure docker daemon is able to access the GPU.
- You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
#### macOS
1. Ensure Docker has at least 16GB RAM
2. Enable VirtioFS for file sharing
3. Enable `docker compose` V2 support
This is done via Docker Desktop preferences
## Quickstart
1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy .env.sample .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. `docker compose up`
The image will be built automatically if needed.
The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
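Concretely, the quickstart amounts to something like this (the `INVOKEAI_ROOT` value is illustrative):

```bash
cd docker
cp .env.sample .env
# Edit .env and set INVOKEAI_ROOT to an absolute path, e.g.:
#   INVOKEAI_ROOT=/home/me/invokeai
docker compose up
```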
### Use a GPU
- Linux is *recommended* for GPU support in Docker.
- WSL2 is *required* for Windows.
- only `x86_64` architecture is supported.
The Docker daemon on the system must already be set up to use the GPU. On Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
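As an illustrative sketch for an NVIDIA setup on Ubuntu (commands follow the upstream nvidia-container-toolkit documentation; verify them against the current instructions for your distribution):

```bash
# Install the NVIDIA container toolkit and register it as a Docker runtime.
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Sanity check: a container launched with GPU access should see the card.
docker run --rm --gpus all ubuntu nvidia-smi
```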
## Customize
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
Example (most values are optional):
```
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
```
## Even Moar Customizing!
See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples are below.
### Reconfigure the runtime directory
This can be used to download additional models from the supported model list. In conjunction with `INVOKEAI_ROOT`, it can also be used to initialize a runtime directory:
```
command:
- invokeai-configure
- --yes
```
Or install models:
```
command:
- invokeai-model-install
```

docker/build.sh

@ -1,51 +1,11 @@
#!/usr/bin/env bash
set -e
# If you want to build a specific flavor, set the CONTAINER_FLAVOR environment variable
# e.g. CONTAINER_FLAVOR=cpu ./build.sh
# Possible Values are:
# - cpu
# - cuda
# - rocm
# Don't forget to also set it when executing run.sh
# if it is not set, the script will try to detect the flavor by itself.
#
# Doc can be found here:
# https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
build_args=""
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
[[ -f ".env" ]] && build_args=$(awk '$1 ~ /\=[^$]/ {print "--build-arg " $0 " "}' .env)
source ./env.sh
echo "docker-compose build args:"
echo $build_args
DOCKERFILE=${INVOKE_DOCKERFILE:-./Dockerfile}
# print the settings
echo -e "You are using these values:\n"
echo -e "Dockerfile:\t\t${DOCKERFILE}"
echo -e "index-url:\t\t${PIP_EXTRA_INDEX_URL:-none}"
echo -e "Volumename:\t\t${VOLUMENAME}"
echo -e "Platform:\t\t${PLATFORM}"
echo -e "Container Registry:\t${CONTAINER_REGISTRY}"
echo -e "Container Repository:\t${CONTAINER_REPOSITORY}"
echo -e "Container Tag:\t\t${CONTAINER_TAG}"
echo -e "Container Flavor:\t${CONTAINER_FLAVOR}"
echo -e "Container Image:\t${CONTAINER_IMAGE}\n"
# Create docker volume
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
echo -e "Volume already exists\n"
else
echo -n "creating docker volume "
docker volume create "${VOLUMENAME}"
fi
# Build Container
DOCKER_BUILDKIT=1 docker build \
--platform="${PLATFORM:-linux/amd64}" \
--tag="${CONTAINER_IMAGE:-invokeai}" \
${CONTAINER_FLAVOR:+--build-arg="CONTAINER_FLAVOR=${CONTAINER_FLAVOR}"} \
${PIP_EXTRA_INDEX_URL:+--build-arg="PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}"} \
${PIP_PACKAGE:+--build-arg="PIP_PACKAGE=${PIP_PACKAGE}"} \
--file="${DOCKERFILE}" \
..
docker-compose build $build_args

48
docker/docker-compose.yml Normal file

@ -0,0 +1,48 @@
# Copyright (c) 2023 Eugene Brodsky https://github.com/ebr
version: '3.8'
services:
invokeai:
image: "local/invokeai:latest"
# edit below to run on a container runtime other than nvidia-container-runtime.
# not yet tested with rocm/AMD GPUs
# Comment out the "deploy" section to run on CPU only
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
build:
context: ..
dockerfile: docker/Dockerfile
# variables without a default will automatically inherit from the host environment
environment:
- INVOKEAI_ROOT
- HF_HOME
# Create a .env file in the same directory as this docker-compose.yml file
# and populate it with environment variables. See .env.sample
env_file:
- .env
ports:
- "${INVOKEAI_PORT:-9090}:9090"
volumes:
- ${INVOKEAI_ROOT:-~/invokeai}:${INVOKEAI_ROOT:-/invokeai}
- ${HF_HOME:-~/.cache/huggingface}:${HF_HOME:-/invokeai/.cache/huggingface}
# - ${INVOKEAI_MODELS_DIR:-${INVOKEAI_ROOT:-/invokeai/models}}
# - ${INVOKEAI_MODELS_CONFIG_PATH:-${INVOKEAI_ROOT:-/invokeai/configs/models.yaml}}
tty: true
stdin_open: true
# # Example of running alternative commands/scripts in the container
# command:
# - bash
# - -c
# - |
# invokeai-model-install --yes --default-only --config_file ${INVOKEAI_ROOT}/config_custom.yaml
# invokeai-nodes-web --host 0.0.0.0

65
docker/docker-entrypoint.sh Executable file

@ -0,0 +1,65 @@
#!/bin/bash
set -e -o pipefail
### Container entrypoint
# Runs the CMD as defined by the Dockerfile or passed to `docker run`
# Can be used to configure the runtime dir
# Bypass by using ENTRYPOINT or `--entrypoint`
### Set INVOKEAI_ROOT pointing to a valid runtime directory
# Otherwise configure the runtime dir first.
### Configure the InvokeAI runtime directory (done by default):
# docker run --rm -it <this image> --configure
# or skip with --no-configure
### Set the CONTAINER_UID envvar to match your user.
# Ensures files created in the container are owned by you:
# docker run --rm -it -v /some/path:/invokeai -e CONTAINER_UID=$(id -u) <this image>
# Default UID: 1000 chosen due to popularity on Linux systems. Possibly 501 on MacOS.
USER_ID=${CONTAINER_UID:-1000}
USER=invoke
usermod -u ${USER_ID} ${USER} 1>/dev/null
configure() {
# Configure the runtime directory
if [[ -f ${INVOKEAI_ROOT}/invokeai.yaml ]]; then
echo "${INVOKEAI_ROOT}/invokeai.yaml exists. InvokeAI is already configured."
echo "To reconfigure InvokeAI, delete the above file."
echo "======================================================================"
else
mkdir -p "${INVOKEAI_ROOT}"
chown --recursive ${USER} "${INVOKEAI_ROOT}"
gosu ${USER} invokeai-configure --yes --default_only
fi
}
## Skip attempting to configure.
## Must be passed first, before any other args.
if [[ $1 != "--no-configure" ]]; then
configure
else
shift
fi
### Set the $PUBLIC_KEY env var to enable SSH access.
# We do not install openssh-server in the image by default to avoid bloat.
# but it is useful to have the full SSH server e.g. on Runpod.
# (use SCP to copy files to/from the image, etc)
if [[ -v "PUBLIC_KEY" ]] && [[ ! -d "${HOME}/.ssh" ]]; then
apt-get update
apt-get install -y openssh-server
pushd "$HOME"
mkdir -p .ssh
echo "${PUBLIC_KEY}" > .ssh/authorized_keys
chmod -R 700 .ssh
popd
service ssh start
fi
cd "${INVOKEAI_ROOT}"
# Run the CMD as the Container User (not root).
exec gosu ${USER} "$@"
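Putting the pieces above together, a direct `docker run` against this entrypoint might look like the following sketch (the image tag and host path are illustrative):

```bash
# Match the container user to your UID so generated files are owned by you;
# --no-configure must come first, per the argument handling above.
docker run --rm -it \
    -v /abs/path/to/invokeai:/invokeai \
    -e CONTAINER_UID=$(id -u) \
    local/invokeai:latest --no-configure invokeai-web --host 0.0.0.0
```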

docker/env.sh

@ -1,51 +0,0 @@
#!/usr/bin/env bash
# This file is used to set environment variables for the build.sh and run.sh scripts.
# Try to detect the container flavor if no PIP_EXTRA_INDEX_URL got specified
if [[ -z "$PIP_EXTRA_INDEX_URL" ]]; then
# Activate virtual environment if not already activated and exists
if [[ -z $VIRTUAL_ENV ]]; then
[[ -e "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" ]] \
&& source "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" \
&& echo "Activated virtual environment: $VIRTUAL_ENV"
fi
# Decide which container flavor to build if not specified
if [[ -z "$CONTAINER_FLAVOR" ]] && python -c "import torch" &>/dev/null; then
# Check for CUDA and ROCm
CUDA_AVAILABLE=$(python -c "import torch;print(torch.cuda.is_available())")
ROCM_AVAILABLE=$(python -c "import torch;print(torch.version.hip is not None)")
if [[ "${CUDA_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="cuda"
elif [[ "${ROCM_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="rocm"
else
CONTAINER_FLAVOR="cpu"
fi
fi
# Set PIP_EXTRA_INDEX_URL based on container flavor
if [[ "$CONTAINER_FLAVOR" == "rocm" ]]; then
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/rocm"
elif [[ "$CONTAINER_FLAVOR" == "cpu" ]]; then
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
# elif [[ -z "$CONTAINER_FLAVOR" || "$CONTAINER_FLAVOR" == "cuda" ]]; then
# PIP_PACKAGE=${PIP_PACKAGE-".[xformers]"}
fi
fi
# Variables shared by build.sh and run.sh
REPOSITORY_NAME="${REPOSITORY_NAME-$(basename "$(git rev-parse --show-toplevel)")}"
REPOSITORY_NAME="${REPOSITORY_NAME,,}"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME}_data"}"
ARCH="${ARCH-$(uname -m)}"
PLATFORM="${PLATFORM-linux/${ARCH}}"
INVOKEAI_BRANCH="${INVOKEAI_BRANCH-$(git branch --show)}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY-"ghcr.io"}"
CONTAINER_REPOSITORY="${CONTAINER_REPOSITORY-"$(whoami)/${REPOSITORY_NAME}"}"
CONTAINER_FLAVOR="${CONTAINER_FLAVOR-cuda}"
CONTAINER_TAG="${CONTAINER_TAG-"${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}"}"
CONTAINER_IMAGE="${CONTAINER_REGISTRY}/${CONTAINER_REPOSITORY}:${CONTAINER_TAG}"
CONTAINER_IMAGE="${CONTAINER_IMAGE,,}"

docker/run.sh

@ -1,41 +1,8 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
# Create outputs directory if it does not exist
[[ -d ./outputs ]] || mkdir ./outputs
echo -e "You are using these values:\n"
echo -e "Volumename:\t${VOLUMENAME}"
echo -e "Invokeai_tag:\t${CONTAINER_IMAGE}"
echo -e "local Models:\t${MODELSPATH:-unset}\n"
docker run \
--interactive \
--tty \
--rm \
--platform="${PLATFORM}" \
--name="${REPOSITORY_NAME,,}" \
--hostname="${REPOSITORY_NAME,,}" \
--mount=source="${VOLUMENAME}",target=/data \
--mount type=bind,source="$(pwd)"/outputs,target=/data/outputs \
${MODELSPATH:+--mount="type=bind,source=${MODELSPATH},target=/data/models"} \
${HUGGING_FACE_HUB_TOKEN:+--env="HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}"} \
--publish=9090:9090 \
--cap-add=sys_nice \
${GPU_FLAGS:+--gpus="${GPU_FLAGS}"} \
"${CONTAINER_IMAGE}" ${@:+$@}
# Remove Trash folder
for f in outputs/.Trash*; do
if [ -e "$f" ]; then
rm -Rf "$f"
break
fi
done
docker-compose up --build -d
docker-compose logs -f

60
docker/runpod-readme.md Normal file

@ -0,0 +1,60 @@
# InvokeAI - A Stable Diffusion Toolkit
Stable Diffusion distribution by InvokeAI: https://github.com/invoke-ai
The Docker image tracks the `main` branch of the InvokeAI project, which means it includes the latest features, but may contain some bugs.
Your working directory is mounted under the `/workspace` path inside the pod. The models are in `/workspace/invokeai/models`, and outputs are in `/workspace/invokeai/outputs`.
> **Only the /workspace directory will persist between pod restarts!**
> **If you _terminate_ (not just _stop_) the pod, the /workspace will be lost.**
## Quickstart
1. Launch a pod from this template. **It will take about 5-10 minutes to run through the initial setup**. Be patient.
1. Wait for the application to load.
- TIP: you know it's ready when the CPU usage goes idle
- You can also check the logs for a line that says "_Point your browser at..._"
1. Open the Invoke AI web UI: click the `Connect` => `connect over HTTP` button.
1. Generate some art!
## Other things you can do
At any point you may edit the pod configuration and set an arbitrary Docker command. For example, you could run a command to download some models using `curl`, or fetch some images and place them into your outputs to continue a working session.
If you need to run *multiple commands*, define them in the Docker Command field like this:
`bash -c "cd ${INVOKEAI_ROOT}/outputs; wormhole receive 2-foo-bar; invoke.py --web --host 0.0.0.0"`
### Copying your data in and out of the pod
This image includes a couple of handy tools to help you get the data into the pod (such as your custom models or embeddings), and out of the pod (such as downloading your outputs). Here are your options for getting your data in and out of the pod:
- **SSH server**:
1. Make sure to create and set your Public Key in the RunPod settings (follow the official instructions)
1. Add an exposed port 22 (TCP) in the pod settings!
1. When your pod restarts, you will see a new entry in the `Connect` dialog. Use this SSH server to `scp` or `sftp` your files as necessary, or SSH into the pod using the fully fledged SSH server.
- [**Magic Wormhole**](https://magic-wormhole.readthedocs.io/en/latest/welcome.html):
1. On your computer, `pip install magic-wormhole` (see above instructions for details)
1. Connect to the command line **using the "light" SSH client** or the browser-based console. _Currently there's a bug where `wormhole` isn't available when connected to "full" SSH server, as described above_.
1. `wormhole send /workspace/invokeai/outputs` will send the entire `outputs` directory. You can also send individual files.
1. Once packaged, you will see a `wormhole receive <123-some-words>` command. Copy it
1. Paste this command into the terminal on your local machine to securely download the payload.
1. It works the same in reverse: you can `wormhole send` some models from your computer to the pod. Again, save your files somewhere in `/workspace` or they will be lost when the pod is stopped. A concrete end-to-end sketch follows this list.
- **RunPod's Cloud Sync feature** may be used to sync the persistent volume to cloud storage. You could, for example, copy the entire `/workspace` to S3, add some custom models to it, and copy it back from S3 when launching new pod configurations. Follow the Cloud Sync instructions.
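To make the wormhole flow above concrete, here is a sketch of a typical session (the receive code shown is made up; use the one your pod prints):

```bash
# On the pod (browser console or the "light" SSH client):
wormhole send /workspace/invokeai/outputs
# The pod prints a one-time command, e.g. "wormhole receive 7-example-code".

# On your local machine:
pip install magic-wormhole
wormhole receive 7-example-code   # illustrative code; copy yours from the pod
```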
### Disable the NSFW checker
The NSFW checker is enabled by default. To disable it, edit the pod configuration and set the following command:
```
invoke --web --host 0.0.0.0 --no-nsfw_checker
```
---
Template ©2023 Eugene Brodsky [ebr](https://github.com/ebr)

(markdownlint configuration file, deleted)

@ -1,5 +0,0 @@
{
"MD046": false,
"MD007": false,
"MD030": false
}

docs/CHANGELOG.md

@ -4,6 +4,236 @@ title: Changelog
# :octicons-log-16: **Changelog**
## v2.3.5 <small>(22 May 2023)</small>
This release (along with the post1 and post2 follow-on releases) expands support for additional LoRA and LyCORIS models, upgrades diffusers versions, and fixes a few bugs.
### LoRA and LyCORIS Support Improvement
A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
Support for the newer LoKR LyCORIS files has been added.
### Library Updates and Speed/Reproducibility Advancements
The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.4 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.
Here are the new library versions:

| Library   | Version |
|-----------|---------|
| Torch     | 2.0.0   |
| Diffusers | 0.16.1  |
| Xformers  | 0.0.19  |
| Compel    | 1.1.5   |

### Other Improvements
### Performance Improvements
When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.
### Bug Fixes
The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.
When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running.
## v2.3.4 <small>(7 April 2023)</small>
What's New in 2.3.4
This features release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.
### LoRA and LyCORIS Support
LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix .safetensors. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)
To use LoRA/LyCORIS models in InvokeAI:
1. Download the .safetensors files of your choice and place them in /path/to/invokeai/loras. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.
2. Add withLora(lora-file,weight) to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named loras/sushi.safetensors is present:
   - family sitting at dinner table eating sushi withLora(sushi,0.9)
   - family sitting at dinner table eating sushi withLora(sushi, 0.75)
   - family sitting at dinner table eating sushi withLora(sushi)
Multiple withLora() prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.
Generate as you usually would! If you find that the image is too "crisp" try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of the model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load a SD 1.x-trained LoRA into a SD 2.x model, and vice versa. This will trigger a non-fatal error message and generation will not proceed.
You can change the location of the loras directory by passing the `--lora_directory` option to `invokeai`.
### New WebUI LoRA and Textual Inversion Buttons
This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.
Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() or <textual-inversion> prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.
Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.
By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.
### Minor features and fixes
This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.
### Known Bugs in 2.3.4
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.3 <small>(28 March 2023)</small>
This is a bugfix and minor feature release.
### Bugfixes
Since version 2.3.2 the following bugs have been fixed:
#### Bugs
When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
Textual inversion will select an appropriate batchsize based on whether xformers is active, and will default to xformers enabled if the library is detected.
The batch script log file names have been fixed to be compatible with Windows.
Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
Support loading of legacy config files that have no personalization (textual inversion) section.
An infinite loop when opening the developer's console from within the invoke.sh script has been corrected.
Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.
#### Enhancements
It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested "Illuminati" model.
The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI embeddings directory.
If no --model is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
On Linux systems, the invoke.sh launcher now uses a prettier console-based interface. To take advantage of it, install the dialog package using your package manager (e.g. sudo apt install dialog).
When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model following this example:
```
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt   # or my-favorite-model.vae.safetensors
```
### Known Bugs in 2.3.3
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.2 <small>(11 March 2023)</small>
This is a bugfix and minor feature release.
### Bugfixes
Since version 2.3.1 the following bugs have been fixed:
Black images appearing for potential NSFW images when generating with legacy checkpoint models and both --no-nsfw_checker and --ckpt_convert turned on.
Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI
When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
Crashes that occurred during model merging.
Restore previous naming of Stable Diffusion base and 768 models.
Upgraded to latest versions of diffusers, transformers, safetensors and accelerate libraries upstream. We hope that this will fix the assertion NDArray > 2**32 issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.
As part of the upgrade to diffusers, the location of the diffusers-based models has changed from models/diffusers to models/hub. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your models/diffusers directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
New "Invokeai-batch" script
### Invoke AI Batch
2.3.2 introduces a new command-line only script called invokeai-batch that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).
To try invokeai-batch out, launch the "developer's console" using the invoke launcher script, or activate the invokeai virtual environment manually. From the console, give the command `invokeai-batch --help` to learn how the script works and create your first template file for dynamic prompt generation.
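Conceptually, the template expansion is a Cartesian product over lists of prompt fragments. As a rough illustration in plain Python (this shows only the idea, not invokeai-batch's actual template format):

```python
from itertools import product

subjects = ["a shack", "a chalet"]
places = ["in the mountains", "in the desert"]
styles = ["photograph", "watercolor", "oil painting"]

# Every combination of subject, place and style becomes one prompt.
for subject, place, style in product(subjects, places, styles):
    print(f"{subject} {place}, {style}")
```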
### Known Bugs in 2.3.2
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise a Trojan alert for the codeformer.pth face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.
## v2.3.1 <small>(22 February 2023)</small>
This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
### Enhanced support for model management
InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
There are three ways of accessing the model management features:
From the WebUI, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
Using the Model Installer App
Choose option (5) download and install models from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.
Command-line users can start this app using the command invokeai-model-install.
Using the Command Line Client (CLI)
The !install_model and !convert_model commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.
Internally, InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.
Please see INSTALLING MODELS for more information on model management.
### An Improved Installer Experience
The installer now launches a console-based UI for setting and changing commonly-used startup options:
After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) change InvokeAI startup options.
Command-line users can launch the new configure app using invokeai-configure.
This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) update InvokeAI. This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting the tag or branch of the desired version.
Command-line users can run the updater by typing invokeai-update.
### Image Symmetry Options
There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options --h_symmetry_time_pct and --v_symmetry_time_pct (these can be abbreviated to --h_sym and --v_sym like all other options).
### A New Unified Canvas Look
This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout:
Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
### Model conversion and merging within the WebUI
The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.
### An easier way to contribute translations to the WebUI
We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.
Numerous internal bugfixes and performance improvements are also included.
### Bug Fixes
This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0 and using the compel library for prompt parsing. See the Detailed Change Log for a full list of bugs caught and squished.
Summary of InvokeAI command line scripts (all accessible via the launcher menu)
| Command | Description |
| ------- | ----------- |
| invokeai | Command line interface |
| invokeai --web | Web interface |
| invokeai-model-install | Model installer with console forms-based front end |
| invokeai-ti --gui | Textual inversion, with a console forms-based front end |
| invokeai-merge --gui | Model merging, with a console forms-based front end |
| invokeai-configure | Startup configuration; can also be used to reinstall support models |
| invokeai-update | InvokeAI software updater |
### Known Bugs in 2.3.1
These are known bugs in the release.
MacOS users generating 768x768 pixel images or greater using diffusers models may experience a hard crash with assertion NDArray > 2**32. This appears to be an issue…
## v2.3.0 <small>(15 January 2023)</small>
**Transition to diffusers**
@ -264,7 +494,7 @@ sections describe what's new for InvokeAI.
[Manual Installation](installation/020_INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps,
sampler, etc) in a `.invokeai` file. See
[Client](features/CLI.md)
[Client](deprecated/CLI.md)
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.
@ -387,8 +617,6 @@ sections describe what's new for InvokeAI.
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for [inpainting](features/INPAINTING.md) and
[outpainting](features/OUTPAINTING.md)
- img2img runs on all k\* samplers
- Support for
[negative prompts](features/PROMPTS.md#negative-and-unconditioned-prompts)
@ -399,7 +627,7 @@ sections describe what's new for InvokeAI.
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E
infinite canvas), and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows
[larger images to be created without duplicating elements](features/CLI.md#this-is-an-example-of-txt2img),
[larger images to be created without duplicating elements](deprecated/CLI.md#this-is-an-example-of-txt2img),
at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control
variation during image generation (see
@ -408,7 +636,7 @@ sections describe what's new for InvokeAI.
of images and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac
platforms.
- Improved [command-line completion behavior](features/CLI.md) New commands
- Improved [command-line completion behavior](deprecated/CLI.md) New commands
added:
- List command-line history with `!history`
- Search command-line history with `!search`

(Binary image assets added or updated; diffs not shown. Among them is the new file docs/assets/upscaling.png. Two text file diffs were suppressed because one or more lines are too long.)

View File

@ -0,0 +1,56 @@
# How to Contribute
## Welcome to Invoke AI
Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
## Contributing to Invoke AI
Anyone who wishes to contribute to InvokeAI, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
### Areas of contribution:
#### Development
If you'd like to help with development, please see our [development guide](contribution_guides/development.md). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
#### Documentation
If you'd like to help with documentation, please see our [documentation guide](contribution_guides/documenation.md).
#### Translation
If you'd like to help with translation, please see our [translation guide](docs/contributing/.contribution_guides/translation.md).
#### Tutorials
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community.
### Contributors
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
### Code of Conduct
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
By making a contribution to this project, you certify that:
1. The contribution was created in whole or in part by you and you have the right to submit it under the open-source license indicated in this project's GitHub repository; or
2. The contribution is based upon previous work that, to the best of your knowledge, is covered under an appropriate open-source license and you have the right under that license to submit that work with modifications, whether created in whole or in part by you, under the same open-source license (unless you are permitted to submit under a different license); or
3. The contribution was provided directly to you by some other person who certified (1) or (2) and you have not modified it; or
4. You understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information you submit with it, including your sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open-source license(s) involved.
This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project.
This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
### Support
For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy).
Original portions of the software are Copyright (c) 2023 by respective contributors.
---
Remember, your contributions help make this project great. We're excited to see what you'll bring to our community!

View File

@ -1,105 +1,790 @@
# Invocations
Invocations represent a single operation, its inputs, and its outputs. These operations and their outputs can be chained together to generate and modify images.
Features in InvokeAI are added in the form of modular node-like systems called
**Invocations**.
An Invocation is simply a single operation that takes in some inputs and gives
out some outputs. We can then chain multiple Invocations together to create more
complex functionality.
## Invocations Directory
InvokeAI Invocations can be found in the `invokeai/app/invocations` directory.
You can add your new functionality to one of the existing Invocations in this
directory or create a new file in this directory as per your needs.
**Note:** _All Invocations must be inside this directory for InvokeAI to
recognize them as valid Invocations._
## Creating A New Invocation
In order to understand the process of creating a new Invocation, let us actually
create one.
In our example, let us create an Invocation that will take in an image, resize
it and output the resized image.
The first things we need to do when creating a new Invocation are:
- Create a new class that derives from a predefined parent class called
`BaseInvocation`.
- The name of every Invocation must end with the word `Invocation` in order for
it to be recognized as an Invocation.
- Every Invocation must have a `docstring` that describes what this Invocation
does.
- Every Invocation must have a unique `type` field defined, which becomes its identifier.
- Invocations are strictly typed. We make use of the native
[typing](https://docs.python.org/3/library/typing.html) library and the
installed [pydantic](https://pydantic-docs.helpmanual.io/) library for
validation.
So let us do that.
```python
from typing import Literal

from .baseinvocation import BaseInvocation

class ResizeInvocation(BaseInvocation):
    '''Resizes an image'''
    type: Literal['resize'] = 'resize'
```
That's great.
Now we have set up the base of our new Invocation. Let us think about what inputs
our Invocation takes.
- We need an `image` that we are going to resize.
- We need new `width` and `height` values to resize the image to.
### **Inputs**
Every Invocation input is a pydantic `Field` and like everything else should be
strictly typed and defined.
So let us create these inputs for our Invocation. First up, the `image` input we
need. Generally, we can use standard variable types in Python but InvokeAI
already has a custom `ImageField` type that handles all the stuff that is needed
for image inputs.
But what is this `ImageField`? It is a special class type specifically written to handle how images are dealt with in InvokeAI. We will cover how to
create your own custom field types later in this guide. For now, let's go ahead
and use it.
```python
from typing import Literal, Union

from pydantic import Field

from .baseinvocation import BaseInvocation
from ..models.image import ImageField

class ResizeInvocation(BaseInvocation):
    '''Resizes an image'''
    type: Literal['resize'] = 'resize'

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
```
Let us break down our input code.
```python
image: Union[ImageField, None] = Field(description="The input image", default=None)
```
| Part | Value | Description |
| --------- | ---------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| Name | `image` | The variable that will hold our image |
| Type Hint | `Union[ImageField, None]` | The types for our field. Indicates that the image can either be an `ImageField` type or `None` |
| Field | `Field(description="The input image", default=None)` | The image variable is a field which needs a description and a default value that we set to `None`. |
Great. Now let us create our other inputs for `width` and `height`
```python
from typing import Literal, Union

from pydantic import Field

from .baseinvocation import BaseInvocation
from ..models.image import ImageField

class ResizeInvocation(BaseInvocation):
    '''Resizes an image'''
    type: Literal['resize'] = 'resize'

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
    height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")
```
As you might have noticed, we added two new parameters to the `Field` calls for
`width` and `height`, called `ge` and `le`. These stand for _greater than or
equal to_ and _less than or equal to_. There are various other parameter
types for `Field` that you can find in the **pydantic** documentation.
**Note:** _Any time it is possible to define constraints for our field, we
should do it so the frontend has more information on how to parse this field._
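As a quick aside, here is a minimal sketch of what those constraints buy us, using plain pydantic (the `Size` model is invented for illustration and is not part of InvokeAI):

```python
from pydantic import BaseModel, Field, ValidationError

class Size(BaseModel):
    width: int = Field(default=512, ge=64, le=2048)

Size(width=1024)  # OK

try:
    Size(width=32)  # violates ge=64
except ValidationError as err:
    print(err)  # e.g. "ensure this value is greater than or equal to 64"
```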
Perfect. We now have our inputs. Let us do something with these.
### **Invoke Function**
The `invoke` function is where all the magic happens. This function provides you
with a `context` parameter of type `InvocationContext`, which gives you access
to the current context of the generation and all the other services provided by
InvokeAI.
Let us create this function first.
```python
from typing import Literal, Union

from pydantic import Field

from .baseinvocation import BaseInvocation, InvocationContext
from ..models.image import ImageField

class ResizeInvocation(BaseInvocation):
    '''Resizes an image'''
    type: Literal['resize'] = 'resize'

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
    height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")

    def invoke(self, context: InvocationContext):
        pass
```
### **Outputs**
The output of our Invocation will be whatever is returned by this `invoke`
function. Like with our inputs, we need to strongly type and define our outputs
too.
What is our output going to be? Another image. Normally you'd have to create a
type for this but InvokeAI already offers you an `ImageOutput` type that handles
all the necessary info related to image outputs. So let us use that.
We will cover how to create your own output types later in this guide.
```python
from typing import Literal, Union

from pydantic import Field

from .baseinvocation import BaseInvocation, InvocationContext
from ..models.image import ImageField
from .image import ImageOutput

class ResizeInvocation(BaseInvocation):
    '''Resizes an image'''
    type: Literal['resize'] = 'resize'

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
    height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        pass
```
Perfect. Now that we have our Invocation set up, let us do what we want to do.
- We will first load the image. Generally we would do this with the `PIL` library, but we can use one of the services provided by InvokeAI to load the image.
- We will resize the image with `PIL` to the input `width` and `height`.
- We will return this image in the output format we set above.
So let's do that.
```python
from typing import Literal, Union

from pydantic import Field

from .baseinvocation import BaseInvocation, InvocationContext
from ..models.image import ImageField, ResourceOrigin, ImageCategory
from .image import ImageOutput

class ResizeInvocation(BaseInvocation):
    '''Resizes an image'''
    type: Literal['resize'] = 'resize'

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
    height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        # Load the image using InvokeAI's predefined Image Service.
        image = context.services.images.get_pil_image(self.image.image_origin, self.image.image_name)

        # Resizing the image
        # Because we used the above service, we already have a PIL image. So we can simply resize.
        resized_image = image.resize((self.width, self.height))

        # Preparing the image for output using InvokeAI's predefined Image Service.
        output_image = context.services.images.create(
            image=resized_image,
            image_origin=ResourceOrigin.INTERNAL,
            image_category=ImageCategory.GENERAL,
            node_id=self.id,
            session_id=context.graph_execution_state_id,
            is_intermediate=self.is_intermediate,
        )

        # Returning the Image
        return ImageOutput(
            image=ImageField(
                image_name=output_image.image_name,
                image_origin=output_image.image_origin,
            ),
            width=output_image.width,
            height=output_image.height,
        )
```
**Note:** Do not be overwhelmed by the `ImageOutput` process. InvokeAI has a
certain way that the images need to be dispatched in order to be stored and read
correctly. In 99% of the cases when dealing with an image output, you can simply
copy-paste the template above.
That's it. You made your own **Resize Invocation**.
## Result
Once you make your Invocation correctly, the rest of the process is fully
automated for you.
When you launch InvokeAI, you can go to `http://localhost:9090/docs` and see
your new Invocation show up there with all the relevant info.
![resize invocation](../assets/contributing/resize_invocation.png)
When you launch the frontend UI, you can go to the Node Editor tab and find your
new Invocation ready to be used.
![resize node editor](../assets/contributing/resize_node_editor.png)
# Advanced
## Custom Input Fields
Now that you know how to create your own Invocations, let us dive into slightly
more advanced topics.
While creating your own Invocations, you might run into a scenario where the
existing input types in InvokeAI do not meet your requirements. In such cases,
you can create your own input types.
Let us create one as an example. Let us say we want to create a color input
field that represents a color code. But before we start on that, here are some
general good practices to keep in mind.
**Good Practices**
- There is no naming convention for input fields but we highly recommend that
you name it something appropriate like `ColorField`.
- It is not mandatory but it is heavily recommended to add a relevant
`docstring` to describe your input field.
- Keep your field in the same file as the Invocation that it is made for or in
another file where it is relevant.
All input types are classes that derive from pydantic's `BaseModel` type.
So let's create one.
```python
from pydantic import BaseModel

class ColorField(BaseModel):
    '''A field that holds the rgba values of a color'''
    pass
```
Perfect. Now let us create our custom inputs for our field. This is exactly
like how you created input fields for your Invocation. All the same rules
apply. Let us create four fields representing the _red (r)_, _green (g)_,
_blue (b)_ and _alpha (a)_ channels of the color.
```python
from pydantic import BaseModel, Field

class ColorField(BaseModel):
    '''A field that holds the rgba values of a color'''
    r: int = Field(ge=0, le=255, description="The red channel")
    g: int = Field(ge=0, le=255, description="The green channel")
    b: int = Field(ge=0, le=255, description="The blue channel")
    a: int = Field(ge=0, le=255, description="The alpha channel")
```
That's it. We now have a new input field type that we can use in our Invocations
like this.
```python
color: ColorField = Field(default=ColorField(r=0, g=0, b=0, a=0), description='Background color of an image')
```
**Extra Config**
All input fields also take an additional `Config` class that you can use to do
various advanced things, such as marking parameters as required.
Let us do that for our _ColorField_ and enforce all the values, because we did
not define any defaults for our fields.
```python
from pydantic import BaseModel, Field

class ColorField(BaseModel):
    '''A field that holds the rgba values of a color'''
    r: int = Field(ge=0, le=255, description="The red channel")
    g: int = Field(ge=0, le=255, description="The green channel")
    b: int = Field(ge=0, le=255, description="The blue channel")
    a: int = Field(ge=0, le=255, description="The alpha channel")

    class Config:
        schema_extra = {"required": ["r", "g", "b", "a"]}
```
Now it becomes mandatory for the user to supply all the values required by our
input field.
We will discuss the `Config` class in extra detail later in this guide and how
you can use it to make your Invocations more robust.
## Custom Output Types
Like with custom inputs, sometimes you might find yourself needing custom
outputs that InvokeAI does not provide. We can easily set one up.
Now that you are familiar with Invocations and Inputs, let us use that knowledge
to put together a custom output type for an Invocation that returns _width_,
_height_ and _background_color_ that we need to create a blank image.
- A custom output type is a class that derives from the parent class of
`BaseInvocationOutput`.
- It is not mandatory but we recommend using names ending with `Output` for output types. So we'll call our class `BlankImageOutput`.
- It is not mandatory but we highly recommend adding a `docstring` to describe what your output type is for.
- Like Invocations, each output type should have a `type` variable that is **unique**.
Now that we know the basic rules for creating a new output type, let us go ahead
and make it.
```python
from typing import Literal

from pydantic import Field

from .baseinvocation import BaseInvocationOutput
# ColorField is the custom field type we defined earlier in this guide.

class BlankImageOutput(BaseInvocationOutput):
    '''Base output type for creating a blank image'''
    type: Literal['blank_image_output'] = 'blank_image_output'

    # Outputs
    width: int = Field(description='Width of blank image')
    height: int = Field(description='Height of blank image')
    bg_color: ColorField = Field(description='Background color of blank image')

    class Config:
        schema_extra = {"required": ["type", "width", "height", "bg_color"]}
```
All set. We now have an output type that carries everything we need to create a
blank image. And as you may have noticed, we even used the `Config` class to
ensure the fields are required.
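As a quick (and hypothetical) usage sketch, an invocation returning our new output type might look like the following; the invocation name and fields are invented for illustration, and `ColorField` and `BlankImageOutput` are assumed to be in scope from the definitions above:

```python
from typing import Literal

from pydantic import Field

from .baseinvocation import BaseInvocation, InvocationContext

class BlankImageSpecInvocation(BaseInvocation):
    '''Hypothetical example: emits the specification for a blank image'''
    type: Literal['blank_image_spec'] = 'blank_image_spec'

    # Inputs
    width: int = Field(default=512, ge=64, le=2048, description='Width of blank image')
    height: int = Field(default=512, ge=64, le=2048, description='Height of blank image')

    def invoke(self, context: InvocationContext) -> BlankImageOutput:
        # Pass our inputs through, with an opaque black background color.
        return BlankImageOutput(
            width=self.width,
            height=self.height,
            bg_color=ColorField(r=0, g=0, b=0, a=255),
        )
```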
## Custom Configuration
As you might have noticed when making inputs and outputs, we used a class called
`Config` from _pydantic_ to further customize them. Because our inputs and
outputs essentially inherit from _pydantic_'s `BaseModel` class, all
[configuration options](https://docs.pydantic.dev/latest/usage/schema/#schema-customization)
that are valid for _pydantic_ classes are also valid for our inputs and outputs.
You can do the same for your Invocations too but InvokeAI makes our life a
little bit easier on that end.
InvokeAI provides a custom configuration class called `InvocationConfig`
particularly for configuring Invocations. This is exactly the same as the raw
`Config` class from _pydantic_, with some extra stuff on top to help facilitate
parsing of the schema in the frontend UI.

At the moment, this `InvocationConfig` class supports the following features
related to the `ui`:
| Config Option | Field Type | Example |
| ------------- | ---------- | ------- |
| type_hints | `Dict[str, Literal["integer", "float", "boolean", "string", "enum", "image", "latents", "model", "control"]]` | `type_hint: "model"` provides type hints related to the model, such as displaying a list of available models |
| tags | `List[str]` | `tags: ['resize', 'image']` will classify your invocation under the tags of resize and image. |
| title | `str` | `title: 'Resize Image'` will rename your Invocation to this custom title rather than inferring it from the name of the Invocation class. |
So let us update our `ResizeInvocation` with some extra configuration and see
how that works.
```python
from typing import Literal, Union

from pydantic import Field

from .baseinvocation import BaseInvocation, InvocationContext, InvocationConfig
from ..models.image import ImageField, ResourceOrigin, ImageCategory
from .image import ImageOutput

class ResizeInvocation(BaseInvocation):
    '''Resizes an image'''
    type: Literal['resize'] = 'resize'

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
    height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")

    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["resize", "image"],
                "title": "My Custom Resize",
            },
        }

    def invoke(self, context: InvocationContext) -> ImageOutput:
        # Load the image using InvokeAI's predefined Image Service.
        image = context.services.images.get_pil_image(self.image.image_origin, self.image.image_name)

        # Resizing the image
        # Because we used the above service, we already have a PIL image. So we can simply resize.
        resized_image = image.resize((self.width, self.height))

        # Preparing the image for output using InvokeAI's predefined Image Service.
        output_image = context.services.images.create(
            image=resized_image,
            image_origin=ResourceOrigin.INTERNAL,
            image_category=ImageCategory.GENERAL,
            node_id=self.id,
            session_id=context.graph_execution_state_id,
            is_intermediate=self.is_intermediate,
        )

        # Returning the Image
        return ImageOutput(
            image=ImageField(
                image_name=output_image.image_name,
                image_origin=output_image.image_origin,
            ),
            width=output_image.width,
            height=output_image.height,
        )
```
We have now customized our code to let the frontend know that our Invocation
falls under the `resize` and `image` categories. So when the user searches for
these particular words, our Invocation will show up too.
We also set a custom title for our Invocation. So instead of being called
`Resize`, it will be called `My Custom Resize`.
As simple as that.
As time goes by, InvokeAI will further improve and add more customizability for
Invocation configuration. We will have more documentation regarding this at a
later time.
# **[TODO]**
## Custom Components For Frontend
Every backend input type should have a corresponding frontend component so the
UI knows what to render when you use a particular field type.
If you are using existing field types, we already have components for those. So
you don't have to worry about creating anything new. But this might not always
be the case. Sometimes you might want to create new field types and have the
frontend UI deal with it in a different way.
This is where we venture into the world of React and Javascript and create our
own new components for our Invocations. Do not fear the world of JS. It's
actually pretty straightforward.
Let us create a new component for our custom color field we created above. When
we use a color field, let us say we want the UI to display a color picker for
the user to pick from rather than entering values. That is what we will build
now.
---
# OLD -- TO BE DELETED OR MOVED LATER
---
## Creating a new invocation
To create a new invocation, either find the appropriate module file in
`/ldm/invoke/app/invocations` to add your invocation to, or create a new one in
that folder. All invocations in that folder will be discovered and made
available to the CLI and API automatically. Invocations make use of
[typing](https://docs.python.org/3/library/typing.html) and
[pydantic](https://pydantic-docs.helpmanual.io/) for validation and integration
into the CLI and API.
An invocation looks like this:
```py
class UpscaleInvocation(BaseInvocation):
    """Upscales an image."""

    # fmt: off
    type: Literal["upscale"] = "upscale"

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
    level: Literal[2, 4] = Field(default=2, description="The upscale level")
    # fmt: on

    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["upscaling", "image"],
            },
        }

    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = context.services.images.get_pil_image(
            self.image.image_origin, self.image.image_name
        )
        results = context.services.restoration.upscale_and_reconstruct(
            image_list=[[image, 0]],
            upscale=(self.level, self.strength),
            strength=0.0,  # GFPGAN strength
            save_original=False,
            image_callback=None,
        )

        # Results are image and seed, unwrap for now
        # TODO: can this return multiple results?
        image_dto = context.services.images.create(
            image=results[0][0],
            image_origin=ResourceOrigin.INTERNAL,
            image_category=ImageCategory.GENERAL,
            node_id=self.id,
            session_id=context.graph_execution_state_id,
            is_intermediate=self.is_intermediate,
        )

        return ImageOutput(
            image=ImageField(
                image_name=image_dto.image_name,
                image_origin=image_dto.image_origin,
            ),
            width=image_dto.width,
            height=image_dto.height,
        )
```
Each portion is important to implement correctly.
### Class definition and type
```py
class UpscaleInvocation(BaseInvocation):
    """Upscales an image."""

    type: Literal["upscale"] = "upscale"
```
All invocations must derive from `BaseInvocation`. They should have a docstring
that declares what they do in a single, short line. They should also have a
`type` with a type hint that's `Literal["command_name"]`, where `command_name`
is what the user will type on the CLI or use in the API to create this
invocation. The `command_name` must be unique. The `type` must be assigned to
the value of the literal in the type hint.
### Inputs
```py
    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
    level: Literal[2, 4] = Field(default=2, description="The upscale level")
```
Inputs consist of three parts: a name, a type hint, and a `Field` with default,
description, and validation information. For example:

| Part | Value | Description |
| --------- | ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| Name | `strength` | This field is referred to as `strength` |
| Type Hint | `float` | This field must be of type `float` |
| Field | `Field(default=0.75, gt=0, le=1, description="The strength")` | The default value is `0.75`, the value must be in the range (0,1], and help text will show "The strength" for this field. |

Notice that `image` has type `Union[ImageField, None]`. The `Union` allows this
field to be parsed with `None` as a value, which enables linking to previous
invocations. All fields should either provide a default value or allow `None` as
a value, so that they can be overwritten with a linked output from another
invocation.

The special type `ImageField` is also used here. All images are passed as
`ImageField`, which protects them from pydantic validation errors (since images
only ever come from links).

Finally, note that for all linking, the `type` of the linked fields must match.
If the `name` also matches, then the field can be **automatically linked** to a
previous invocation by name and matching.
### Config
```py
    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["upscaling", "image"],
            },
        }
```
This is an optional configuration for the invocation. It inherits from
pydantic's model `Config` class, and is used primarily to customize the
autogenerated OpenAPI schema.
The UI relies on the OpenAPI schema in two ways:
- An API client & Typescript types are generated from it. This happens at build
time.
- The node editor parses the schema into a template used by the UI to create the
node editor UI. This parsing happens at runtime.
In this example, a `ui` key has been added to the `schema_extra` dict to provide
some tags for the UI, to facilitate filtering nodes.
See the Schema Generation section below for more information.
### Invoke Function
```py
def invoke(self, context: InvocationContext) -> ImageOutput:
    image = context.services.images.get_pil_image(
        self.image.image_origin, self.image.image_name
    )
    results = context.services.restoration.upscale_and_reconstruct(
        image_list=[[image, 0]],
        upscale=(self.level, self.strength),
        strength=0.0,  # GFPGAN strength
        save_original=False,
        image_callback=None,
    )

    # Results are image and seed, unwrap for now
    # TODO: can this return multiple results?
    image_dto = context.services.images.create(
        image=results[0][0],
        image_origin=ResourceOrigin.INTERNAL,
        image_category=ImageCategory.GENERAL,
        node_id=self.id,
        session_id=context.graph_execution_state_id,
        is_intermediate=self.is_intermediate,
    )

    return ImageOutput(
        image=ImageField(
            image_name=image_dto.image_name,
            image_origin=image_dto.image_origin,
        ),
        width=image_dto.width,
        height=image_dto.height,
    )
```
The `invoke` function is the last portion of an invocation. It is provided an
`InvocationContext` which contains services to perform work as well as a
`session_id` for use as needed. It should return a class with output values that
derives from `BaseInvocationOutput`.

Before being called, the invocation will have all of its fields set from
defaults, inputs, and finally links (overriding in that order; see the sketch
below).

Assume that this invocation may be running simultaneously with other
invocations, may be running on another machine, or in other interesting
scenarios. If you need functionality, please provide it as a service in the
`InvocationServices` class, and make sure it can be overridden.
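A conceptual sketch of that field-resolution order, in illustrative Python (this is not actual InvokeAI code; the dictionaries here are invented for the example):

```python
# Defaults are applied first, then user-supplied inputs, then linked
# outputs from upstream invocations -- each later source overrides the
# earlier ones.
def resolve_field(name, defaults, inputs, links):
    value = defaults.get(name)
    if name in inputs:
        value = inputs[name]
    if name in links:  # links win over direct inputs
        value = links[name]
    return value
```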
### Outputs
```py
class ImageOutput(BaseInvocationOutput):
"""Base class for invocations that output an image"""
type: Literal['image'] = 'image'
image: ImageField = Field(default=None, description="The output image")
# fmt: off
type: Literal["image_output"] = "image_output"
image: ImageField = Field(default=None, description="The output image")
width: int = Field(description="The width of the image in pixels")
height: int = Field(description="The height of the image in pixels")
# fmt: on
class Config:
schema_extra = {"required": ["type", "image", "width", "height"]}
```
Output classes look like an invocation class without the invoke method. Prefer
to use an existing output class if available, and prefer to name inputs the same
as outputs when possible, to promote automatic invocation linking.
## Schema Generation
Invocation, output and related classes are used to generate an OpenAPI schema.
### Required Properties
The schema generation treats all properties with default values as optional. This
makes sense internally, but when using these classes via the generated
schema, we end up with e.g. the `ImageOutput` class having its `image` property
marked as optional.
We know that this property will always be present, so the additional logic
needed to always check if the property exists adds a lot of extraneous cruft.
To fix this, we can leverage `pydantic`'s
[schema customisation](https://docs.pydantic.dev/usage/schema/#schema-customization)
to mark properties that we know will always be present as required.
Here's that `ImageOutput` class, without the needed schema customisation:
```python
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""

    # fmt: off
    type: Literal["image_output"] = "image_output"
    image: ImageField = Field(default=None, description="The output image")
    width: int = Field(description="The width of the image in pixels")
    height: int = Field(description="The height of the image in pixels")
    # fmt: on
```
The OpenAPI schema that results from this `ImageOutput` will have the `type`,
`image`, `width` and `height` properties marked as optional, even though we know
they will always have a value.
```python
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""

    # fmt: off
    type: Literal["image_output"] = "image_output"
    image: ImageField = Field(default=None, description="The output image")
    width: int = Field(description="The width of the image in pixels")
    height: int = Field(description="The height of the image in pixels")
    # fmt: on

    # Add schema customization
    class Config:
        schema_extra = {"required": ["type", "image", "width", "height"]}
```
With the customization in place, the schema will now show these properties as
required, obviating the need for extensive null checks in client code.
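As a quick sanity check (assuming pydantic v1, which this code targets), you can inspect the generated schema directly and confirm the `required` list:

```python
# schema_extra is merged into the generated OpenAPI/JSON schema.
print(ImageOutput.schema()["required"])
# -> ['type', 'image', 'width', 'height']
```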
See this `pydantic` issue for discussion on this solution:
<https://github.com/pydantic/pydantic/discussions/4577>

View File

@ -0,0 +1,273 @@
# Local Development
If you are looking to contribute you will need to have a local development
environment. See the
[Developer Install](../installation/020_INSTALL_MANUAL.md#developer-install) for
full details.
Broadly this involves cloning the repository, installing the pre-reqs, and
InvokeAI (in editable form). Assuming this is working, choose your area of
focus.
## Documentation
We use [mkdocs](https://www.mkdocs.org) for our documentation with the
[material theme](https://squidfunk.github.io/mkdocs-material/). Documentation is
written in markdown files under the `./docs` folder and then built into a static
website for hosting with GitHub Pages at
[invoke-ai.github.io/InvokeAI](https://invoke-ai.github.io/InvokeAI).
To contribute to the documentation you'll need to install the dependencies. Note
the use of `"`.
```zsh
pip install ".[docs]"
```
Now, run the documentation locally, with hot-reloading for any changes made:
```zsh
mkdocs serve
```
You'll then be prompted to connect to `http://127.0.0.1:8080` to access the
documentation.
## Backend
The backend is contained within the `./invokeai/backend` folder structure. To
get started, however, please install the development dependencies.
From the root of the repository run the following command. Note the use of `"`.
```zsh
pip install ".[test]"
```
This is an optional group of packages, defined within `pyproject.toml`, that
will be required for testing the changes you make to the code.
### Running Tests
We use [pytest](https://docs.pytest.org/en/7.2.x/) for our test suite. Tests can
be found under the `./tests` folder and can be run with a single `pytest`
command. Optionally, to review test coverage you can append `--cov`.
```zsh
pytest --cov
```
Test outcomes and coverage will be reported in the terminal. In addition, a more
detailed report is created in both XML and HTML format in the `./coverage`
folder. The HTML report in particular can help identify missing statements
requiring tests to ensure coverage. It can be viewed by opening
`./coverage/html/index.html`.

For example:
```zsh
pytest --cov; open ./coverage/html/index.html
```
??? info "HTML coverage report output"
![html-overview](../assets/contributing/html-overview.png)
![html-detail](../assets/contributing/html-detail.png)
## Front End
<!--#TODO: get input from blessedcoolant here, for the moment inserted the frontend README via snippets extension.-->
--8<-- "invokeai/frontend/web/README.md"
## Developing InvokeAI in VSCode
VSCode offers some nice tools:
- python debugger
- automatic `venv` activation
- remote dev (e.g. run InvokeAI on a beefy linux desktop while you type in
comfort on your macbook)
### Setup
You'll need the
[Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
and
[Pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
extensions installed first.
It's also really handy to install the `Jupyter` extensions:
- [Jupyter](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter)
- [Jupyter Cell Tags](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.vscode-jupyter-cell-tags)
- [Jupyter Notebook Renderers](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter-renderers)
- [Jupyter Slide Show](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.vscode-jupyter-slideshow)
#### InvokeAI workspace
Creating a VSCode workspace for working on InvokeAI is highly recommended. It
can hold InvokeAI-specific settings and configs.
To make a workspace:
- Open the InvokeAI repo dir in VSCode
- `File` > `Save Workspace As` > save it _outside_ the repo
#### Default python interpreter (i.e. automatic virtual environment activation)
- Use command palette to run command
`Preferences: Open Workspace Settings (JSON)`
- Add `python.defaultInterpreterPath` to `settings`, pointing to your `venv`'s
python
It should look something like this:
```jsonc
{
  // I like to have all InvokeAI-related folders in my workspace
  "folders": [
    {
      // repo root
      "path": "InvokeAI"
    },
    {
      // InvokeAI root dir, where `invokeai.yaml` lives
      "path": "/path/to/invokeai_root"
    }
  ],
  "settings": {
    // Where your InvokeAI `venv`'s python executable lives
    "python.defaultInterpreterPath": "/path/to/invokeai_root/.venv/bin/python"
  }
}
```
Now when you open the VSCode integrated terminal, or do anything that needs to
run python, it will automatically be in your InvokeAI virtual environment.
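To confirm, open the integrated terminal and check which interpreter is picked
up. A minimal check (the path shown is whatever you configured above):

```bash
# should print your venv's interpreter,
# e.g. /path/to/invokeai_root/.venv/bin/python
which python
```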
Bonus: When you create a Jupyter notebook, when you run it, you'll be prompted
for the python interpreter to run in. This will default to your `venv` python,
and so you'll have access to the same python environment as the InvokeAI app.
This is _super_ handy.
#### Debugging configs with `launch.json`
Debugging configs are managed in a `launch.json` file. Like most VSCode configs,
these can be scoped to a workspace or folder.
Follow the [official guide](https://code.visualstudio.com/docs/python/debugging)
to set up your `launch.json` and try it out.
Now we can create the InvokeAI debugging configs:
```jsonc
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      // Run the InvokeAI backend & serve the pre-built UI
      "name": "InvokeAI Web",
      "type": "python",
      "request": "launch",
      "program": "scripts/invokeai-web.py",
      "args": [
        // Your InvokeAI root dir (where `invokeai.yaml` lives)
        "--root",
        "/path/to/invokeai_root",
        // Access the app from anywhere on your local network
        "--host",
        "0.0.0.0"
      ],
      "justMyCode": true
    },
    {
      // Run the nodes-based CLI
      "name": "InvokeAI CLI",
      "type": "python",
      "request": "launch",
      "program": "scripts/invokeai-cli.py",
      "justMyCode": true
    },
    {
      // Run tests
      "name": "InvokeAI Test",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "args": ["--capture=no"],
      "justMyCode": true
    },
    {
      // Run a single test
      "name": "InvokeAI Single Test",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "args": [
        // Change this to point to the specific test you are working on
        "tests/nodes/test_invoker.py"
      ],
      "justMyCode": true
    },
    {
      // This is the default, useful to just run a single file
      "name": "Python: File",
      "type": "python",
      "request": "launch",
      "program": "${file}",
      "justMyCode": true
    }
  ]
}
```
You'll see these configs in the debugging configs drop down. Running them will
start InvokeAI with attached debugger, in the correct environment, and work just
like the normal app.
Enjoy debugging InvokeAI with ease (not that we have any bugs of course).
#### Remote dev
This is very easy to set up and provides the same smooth experience as local
development. Environments and debugging, as set up above, just work, though
you'd need to recreate the workspace and debugging configs on the remote.
Consult the
[official guide](https://code.visualstudio.com/docs/remote/remote-overview) to
get it set up.
We suggest using VSCode's built-in settings sync so that your remote dev host
gets the same app settings and extensions automagically.
##### One remote dev gotcha
I've found the automatic port forwarding to be very flaky. You can disable it
in `Preferences: Open Remote Settings (ssh: hostname)`. Search for
`remote.autoForwardPorts` and untick the box.
To forward ports very reliably, use SSH on the remote dev client (e.g. your
macbook). Here's how to forward both backend API port (`9090`) and the frontend
live dev server port (`5173`):
```bash
ssh \
-L 9090:localhost:9090 \
-L 5173:localhost:5173 \
user@remote-dev-host
```
The forwarding stops when you close the terminal window, so we suggest doing
this _outside_ the VSCode integrated terminal in case you need to restart VSCode
for an extension update or the like.
On your remote dev client, you can then open `localhost:9090` and access the UI,
served from the remote dev host just the same as if it were running on the
client.


@ -0,0 +1,91 @@
# Development
## **What do I need to know to help?**
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
For more information, please review our area specific documentation:
* #### [InvokeAI Architecture](../ARCHITECTURE.md)
* #### [Frontend Documentation](development_guides/contributingToFrontend.md)
* #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md)
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md) or [translation](translation.md).
There are two paths to making a development contribution:
1. Choosing an open issue to address. Open issues can be found in the [Issues](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen) section of the InvokeAI repository. These are tagged by the issue type (bug, enhancement, etc.) along with the “good first issues” tag denoting if they are suitable for first-time contributors. Additional items can be found on our [roadmap](https://github.com/orgs/invoke-ai/projects/7), which is organized in terms of priority and contains features of varying size and complexity. If there is an in-flight item you’d like to help with, reach out to the contributor assigned to the item to see how you can help.
2. Opening a new issue or feature to add. **Please make sure you have searched through existing issues before creating new ones.**
*Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors’ time and effort and want to ensure that no one’s time is misspent.*
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python's and Typescript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **How do I make a contribution?**
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are set up for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add insert-paths-of-changed-files-here
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using:
```bash
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository. If you use the GitHub CLI, see the optional sketch after this list.
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title it like so: "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
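If you use the optional [GitHub CLI](https://cli.github.com), steps 8-10 can
also be done from the terminal. A minimal sketch, assuming `gh` is installed and
authenticated; the title and body shown are placeholders:

```bash
# push your branch, then open a pull request against main
git push origin branch-name-here
gh pr create --base main \
  --title "Added more log outputting to resolve #1234" \
  --body "Describe your changes, open questions, and related issues here"
```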
If you’d like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.
## **What does the Code of Conduct mean for me?**
Our [Code of Conduct](CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do our best to ensure that the abuser is reprimanded appropriately, per our code.


@ -0,0 +1,75 @@
# Contributing to the Frontend
# InvokeAI Web UI
- [InvokeAI Web UI](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#invokeai-web-ui)
- [Stack](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#stack)
- [Contributing](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#contributing)
- [Dev Environment](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#dev-environment)
- [Production builds](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#production-builds)
The UI is a fairly straightforward Typescript React app, with the Unified Canvas being more complex.
Code is located in `invokeai/frontend/web/`.
## Stack
State management is Redux via [Redux Toolkit](https://github.com/reduxjs/redux-toolkit). We lean heavily on RTK:
- `createAsyncThunk` for HTTP requests
- `createEntityAdapter` for fetching images and models
- `createListenerMiddleware` for workflows
The API client and associated types are generated from the OpenAPI schema. See API_CLIENT.md.
Communication with the server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help).
[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & [Mantine](https://github.com/mantinedev/mantine) for components and styling.
[Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling.
[Vite](https://vitejs.dev/) for bundling.
Localisation is via [i18next](https://github.com/i18next/react-i18next), but translation happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. Only the English source strings should be changed on this repo.
## Contributing
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
**Setup**
1. Install [node](https://nodejs.org/en/download/). You can confirm node is installed with:
```bash
node --version
```
2. Install [yarn classic](https://classic.yarnpkg.com/lang/en/) and confirm it is installed by running this:
```bash
npm install --global yarn
yarn --version
```
From `invokeai/frontend/web/` run `yarn install` to get everything set up.
Start everything in dev mode:
1. Ensure your virtual environment is running
2. Start the dev server: `yarn dev`
3. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
4. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/)
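Putting that together, a typical two-terminal startup looks like this (paths
relative to the repo root):

```bash
# terminal 1: the frontend dev server
cd invokeai/frontend/web
yarn install
yarn dev

# terminal 2: the backend, run from the repo root with your venv active
python scripts/invokeai-web.py
```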
### VSCode Remote Dev
We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out. We suggest disabling the IDE's port forwarding feature and doing it manually via SSH:
`ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host`
### Production builds
For a number of technical and logistical reasons, we need to commit UI build artefacts to the repo.
If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app.
To build for production, run `yarn build`.
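For example, from the repo root:

```bash
cd invokeai/frontend/web
yarn build
```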


@ -0,0 +1,13 @@
# Documentation
Documentation is an important part of any open source project. It provides a clear and concise way to communicate how the software works, how to use it, and how to troubleshoot issues. Without proper documentation, it can be difficult for users to understand the purpose and functionality of the project.
## Contributing
All documentation is maintained in the InvokeAI GitHub repository. If you come across documentation that is out of date or incorrect, please submit a pull request with the necessary changes.
When updating or creating documentation, please keep in mind InvokeAI is a tool for everyone, not just those who have familiarity with generative art.
## Help & Questions
Please ping @imic1 or @hipsterusername in the [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.


@ -0,0 +1,19 @@
# Translation
InvokeAI uses [Weblate](https://weblate.org/) for translation. Weblate is a FOSS project providing a scalable translation service. Weblate automates the tedious parts of managing translation of a growing project, and the service is generously provided at no cost to FOSS projects like InvokeAI.
## Contributing
If you'd like to contribute by adding or updating a translation, please visit our [Weblate project](https://hosted.weblate.org/engage/invokeai/). You'll need to sign in with your GitHub account (a number of other accounts are supported, including Google).
Once signed in, select a language and then the Web UI component. From here you can Browse and Translate strings from English to your chosen language. Zen mode offers a simpler translation experience.
Your changes will be attributed to you in the automated PR process; you don't need to do anything else.
## Help & Questions
Please check Weblate's [documentation](https://docs.weblate.org/en/latest/index.html) or ping @Harvestor on [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.
## Thanks
Thanks to the InvokeAI community for their efforts to translate the project!


@ -0,0 +1,11 @@
# Tutorials
Tutorials help new & existing users expand their ability to use InvokeAI to the full extent of our features and services.
Currently, we have a set of tutorials available on our [YouTube channel](https://www.youtube.com/@invokeai), but as InvokeAI continues to evolve with new updates, we want to ensure that we are giving our users the resources they need to succeed.
Tutorials can be in the form of videos or article walkthroughs on a subject of your choice. We recommend focusing tutorials on the key image generation methods, or on a specific component within one of the image generation methods.
## Contributing
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.

docs/deprecated/CLI.md

@ -0,0 +1,589 @@
---
title: Command-Line Interface
---
# :material-bash: CLI
## **Interactive Command Line Interface**
The InvokeAI command line interface (CLI) provides scriptable access
to InvokeAI's features. Some advanced features are only available
through the CLI, though they eventually find their way into the WebUI.
The CLI is accessible from the `invoke.sh`/`invoke.bat` launcher by
selecting option (1). Alternatively, it can be launched directly from
the command line by activating the InvokeAI environment and giving the
command:
```bash
invokeai
```
After some startup messages, you will be presented with the `invoke> `
prompt. Here you can type prompts to generate images and issue other
commands to load and manipulate generative models. The CLI has a large
number of command-line options that control its behavior. To get a
concise summary of the options, call `invokeai` with the `--help` argument:
```bash
invokeai --help
```
The script uses the readline library to allow for in-line editing, command
history (++up++ and ++down++), autocompletion, and more. To help keep track of
which prompts generated which images, the script writes a log file of image
names and prompts to the selected output directory.
Here is a typical session:
```bash
PS1:C:\Users\fred> invokeai
* Initializing, be patient...
* Initializing, be patient...
>> Initialization file /home/lstein/invokeai/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI, version 2.3.0-rc5
>> InvokeAI runtime directory is "/home/lstein/invokeai"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cuda
>> xformers memory-efficient attention is available and enabled
(...more initialization messages...)
* Initialization done! Awaiting your command (-h for help, 'q' to quit)
invoke> ashley judd riding a camel -n2 -s150
Outputs:
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
outputs/img-samples/00010.png: "ashley judd riding a camel" -n2 -s150 -S 1362479620
invoke> "there's a fly in my soup" -n6 -g
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
invoke> q
```
![invoke-py-demo](../assets/dream-py-demo.png)
## Arguments
The script recognizes a series of command-line switches that will
change important global defaults, such as the directory for image
outputs and the location of the model weight files.
### List of arguments recognized at the command line
These command-line arguments can be passed to `invoke.py` when you first run it
from the Windows, Mac or Linux command line. Some set defaults that can be
overridden on a per-prompt basis (see
[List of prompt arguments](#list-of-prompt-arguments)); others apply for the entire session.
| Argument <img width="240" align="right"/> | Shortcut <img width="100" align="right"/> | Default <img width="320" align="right"/> | Description |
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| `--help` | `-h` | | Print a concise help message. |
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Location for generated images. |
| `--prompt_as_dir` | `-p` | `False` | Name output directories using the prompt text. |
| `--from_file <path>` | | `None` | Read list of prompts from a file. Use `-` to read from standard input |
| `--model <modelname>` | | `stable-diffusion-1.5` | Loads the initial model specified in configs/models.yaml. |
| `--ckpt_convert ` | | `False` | If provided both .ckpt and .safetensors files will be auto-converted into diffusers format in memory |
| `--autoconvert <path>` | | `None` | On startup, scan the indicated directory for new .ckpt/.safetensor files and automatically convert and import them |
| `--precision` | | `fp16` | Provide `fp32` for full precision mode, `fp16` for half-precision. `fp32` needed for Macintoshes and some NVidia cards. |
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| `--safety-checker` | | `False` | Activate safety checker for NSFW and other potentially disturbing imagery |
| `--patchmatch`, `--no-patchmatch` | | `--patchmatch` | Load/Don't load the PatchMatch inpainting extension |
| `--xformers`, `--no-xformers` | | `--xformers` | Load/Don't load the Xformers memory-efficient attention module (CUDA only) |
| `--web` | | `False` | Start in web server mode |
| `--host <ip addr>` | | `localhost` | Which network interface web server should listen on. Set to 0.0.0.0 to listen on any. |
| `--port <port>` | | `9090` | Which port web server should listen for requests on. |
| `--config <path>` | | `configs/models.yaml` | Configuration file for models and their weights. |
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate per prompt. |
| `--width <int>` | `-W<int>` | `512` | Width of generated image |
| `--height <int>` | `-H<int>` | `512` | Height of generated image |
| `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
| `--strength <float>` | `-f<float>` | `0.75` | For img2img: how hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
| `--fit` | `-F` | `False` | For img2img: scale the init image to fit into the specified -H and -W dimensions |
| `--grid` | `-g` | `False` | Save all image series as a grid rather than individually. |
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use `-h` to get list of available samplers. |
| `--seamless` | | `False` | Create interesting effects by tiling elements of the image. |
| `--embedding_path <path>` | | `None` | Path to pre-trained embedding manager checkpoints, for custom models |
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file. |
| `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
| `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |
!!! warning "These arguments are deprecated but still work"
<div align="center" markdown>
| Argument | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
| `--full_precision` | | `False` | Same as `--precision=fp32`|
| `--weights <path>` | | `None` | Path to weights file; use `--model stable-diffusion-1.4` instead |
| `--laion400m` | `-l` | `False` | Use older LAION400m weights; use `--model=laion400m` instead |
</div>
!!! tip
On Windows systems, you may run into
problems when passing the invoke script standard backslashed path
names because the Python interpreter treats "\" as an escape.
You can either double your slashes (ick): `C:\\path\\to\\my\\file`, or
use Linux/Mac style forward slashes (better): `C:/path/to/my/file`.
## The .invokeai initialization file
To start up invoke.py with your preferred settings, place your desired
startup options in a file in your home directory named `.invokeai`. The
file should contain the startup options as you would type them on the
command line (`--steps=10 --grid`), one argument per line, or a
mixture of both using any of the accepted command switch formats:
!!! example "my unmodified initialization file"
```bash title="~/.invokeai" linenums="1"
# InvokeAI initialization file
# This is the InvokeAI initialization file, which contains command-line default values.
# Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
# or renaming it and then running invokeai-configure again.
# The --root option below points to the folder in which InvokeAI stores its models, configs and outputs.
--root="/Users/mauwii/invokeai"
# the --outdir option controls the default location of image files.
--outdir="/Users/mauwii/invokeai/outputs"
# You may place other frequently-used startup commands here, one or more per line.
# Examples:
# --web --host=0.0.0.0
# --steps=20
# -Ak_euler_a -C10.0
```
!!! note
The initialization file only accepts the command line arguments.
There are additional arguments that you can provide on the `invoke>` command
line (such as `-n` or `--iterations`) that cannot be entered into this file.
Also be alert for blank lines at the end of the file, which will cause
an arguments error at startup time.
## List of prompt arguments
After the invoke.py script initializes, it will present you with an `invoke>`
prompt. Here you can enter information to generate images from text
([txt2img](#txt2img)), to embellish an existing image or sketch
([img2img](#img2img)), or to selectively alter chosen regions of the image
([inpainting](#inpainting)).
### txt2img
!!! example ""
```bash
invoke> waterfall and rainbow -W640 -H480
```
This will create the requested image with the dimensions 640 (width)
and 480 (height).
Here are the `invoke>` commands that apply to txt2img:
| Argument <img width="680" align="right"/> | Shortcut <img width="420" align="right"/> | Default <img width="480" align="right"/> | Description |
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| "my prompt" | | | Text prompt to use. The quotation marks are optional. |
| `--width <int>` | `-W<int>` | `512` | Width of generated image |
| `--height <int>` | `-H<int>` | `512` | Height of generated image |
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate from this prompt |
| `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
| `--cfg_scale <float>` | `-C<float>` | `7.5` | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
| `--seed <int>` | `-S<int>` | `None` | Set the random seed for the next series of images. This can be used to recreate an image generated previously. |
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use -h to get list of available samplers. |
| `--karras_max <int>` | | `29` | When using k\_\* samplers, set the maximum number of steps before shifting from using the Karras noise schedule (good for low step counts) to the LatentDiffusion noise schedule (good for high step counts) This value is sticky. [29] |
| `--hires_fix` | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| `--grid` | `-g` | `False` | Turn on grid mode to return a single image combining all the images generated by this prompt |
| `--individual` | `-i` | `True` | Turn off grid mode (deprecated; leave off --grid instead) |
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Temporarily change the location of these images |
| `--seamless` | | `False` | Activate seamless tiling for interesting effects |
| `--seamless_axes` | | `x,y` | Specify which axes to use circular convolution on. |
| `--log_tokenization` | `-t` | `False` | Display a color-coded list of the parsed tokens derived from the prompt |
| `--skip_normalization` | `-x` | `False` | Weighted subprompts will not be normalized. See [Weighted Prompts](../features/OTHER.md#weighted-prompts) |
| `--upscale <int> <float>` | `-U <int> <float>` | `-U 1 0.75` | Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
| `--facetool_strength <float>` | `-G <float> ` | `-G0` | Fix faces (defaults to using the GFPGAN algorithm); argument indicates how hard the algorithm should try (0.0-1.0) |
| `--facetool <name>` | `-ft <name>` | `-ft gfpgan` | Select face restoration algorithm to use: gfpgan, codeformer |
| `--codeformer_fidelity` | `-cf <float>` | `0.75` | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
| `--save_original` | `-save_orig` | `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series a riffs on a starting image. See [Variations](../features/VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](../features/VARIATIONS.md) for now to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
| `--h_symmetry_time_pct <float>` | | `None` | Create symmetry along the X axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
| `--v_symmetry_time_pct <float>` | | `None` | Create symmetry along the Y axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
!!! note
the width and height of the image must be multiples of 64. You can
provide different values, but they will be rounded down to the nearest multiple
of 64. For example, `-W600` will be rounded down to 576.
!!! example "This is a example of img2img"
```bash
invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
```
This will modify the indicated vacation photograph by making it more like the
prompt. Results will vary greatly depending on what is in the image. We also ask
it to `--fit` the image into a box no bigger than 640x480. Otherwise, the image
size will be identical to the provided photo, and you may run out of memory if
it is large.
In addition to the command-line options recognized by txt2img, img2img accepts
additional options:
| Argument <img width="160" align="right"/> | Shortcut | Default | Description |
| ----------------------------------------- | ----------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| `--init_img <path>` | `-I<path>` | `None` | Path to the initialization image |
| `--fit` | `-F` | `False` | Scale the image to fit into the specified -H and -W dimensions |
| `--strength <float>` | `-f<float>` | `0.75` | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
### inpainting
!!! example ""
```bash
invoke> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
```
This will do the same thing as img2img, but image alterations will
only occur within transparent areas defined by the mask file specified
by `-M`. You may also supply just a single initial image with the areas
to overpaint made transparent, but you must be careful not to destroy
the pixels underneath when you create the transparent areas. See
[Inpainting](INPAINTING.md) for details.
inpainting accepts all the arguments used for txt2img and img2img, as well as
the `--mask` (`-M`) and `--text_mask` (`-tm`) arguments:
| Argument <img width="100" align="right"/> | Shortcut | Default | Description |
| ----------------------------------------- | ------------------------ | ------- | ------------------------------------------------------------------------------------------------ |
| `--init_mask <path>` | `-M<path>` | `None` | Path to an image the same size as the initial_image, with areas for inpainting made transparent. |
| `--invert_mask ` | | False | If true, invert the mask so that transparent areas are opaque and vice versa. |
| `--text_mask <prompt> [<float>]` | `-tm <prompt> [<float>]` | <none> | Create a mask from a text prompt describing part of the image |
The mask may either be an image with transparent areas, in which case the
inpainting will occur in the transparent areas only, or a black and white image,
in which case all black areas will be painted into.
`--text_mask` (short form `-tm`) is a way to generate a mask using a text
description of the part of the image to replace. For example, if you have an
image of a breakfast plate with a bagel, toast and scrambled eggs, you can
selectively mask the bagel and replace it with a piece of cake this way:
```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel
```
The algorithm uses <a
href="https://github.com/timojl/clipseg">clipseg</a> to classify different
regions of the image. The classifier puts out a confidence score for each region
it identifies. Generally regions that score above 0.5 are reliable, but if you
are getting too much or too little masking you can adjust the threshold down (to
get more mask), or up (to get less). In this example, by passing `-tm` a higher
value, we are insisting on a more stringent classification.
```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6
```
### Custom Styles and Subjects
You can load and use hundreds of community-contributed Textual
Inversion models just by typing the appropriate trigger phrase. Please
see [Concepts Library](../features/CONCEPTS.md) for more details.
## Other Commands
The CLI offers a number of commands that begin with "!".
### Postprocessing images
To postprocess a file using face restoration or upscaling, use the `!fix`
command.
#### `!fix`
This command runs a post-processor on a previously-generated image. It takes a
PNG filename or path and applies your choice of the `-U`, `-G`, or `--embiggen`
switches in order to fix faces or upscale. If you provide a filename, the script
will look for it in the current output directory. Otherwise you can provide a
full or partial path to the desired file.
Some examples:
!!! example "Upscale to 4X its original size and fix faces using codeformer"
```bash
invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
```
!!! example "Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen"
```bash
invoke> !fix 0000045.4829112.png -G0.8 -ft gfpgan
>> fixing outputs/img-samples/0000045.4829112.png
>> retrieved seed 4829112 and prompt "boy enjoying a banana split"
>> GFPGAN - Restoring Faces for image seed:4829112
Outputs:
[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
```
#### `!mask`
This command takes an image, a text prompt, and uses the `clipseg` algorithm to
automatically generate a mask of the area that matches the text prompt. It is
useful for debugging the text masking process prior to inpainting with the
`--text_mask` argument. See `INPAINTING.md` for details.
### Model selection and importation
The CLI allows you to add new models on the fly, as well as to switch
among them rapidly without leaving the script. There are several
different model formats, each described in the [Model Installation
Guide](../installation/050_INSTALLING_MODELS.md).
#### `!models`
This prints out a list of the models defined in `configs/models.yaml`. The active
model is bold-faced.
Example:
<pre>
inpainting-1.5 not loaded Stable Diffusion inpainting model
<b>stable-diffusion-1.5 active Stable Diffusion v1.5</b>
waifu-diffusion not loaded Waifu Diffusion v1.4
</pre>
#### `!switch <model>`
This quickly switches from one model to another without leaving the CLI script.
`invoke.py` uses a memory caching system; once a model has been loaded,
switching back and forth is quick. The following example shows this in action.
Note how the second column of the `!models` table changes to `cached` after a
model is first loaded, and that the long initialization step is not needed when
loading a cached model.
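A session along these lines (model names and listing illustrative) shows the
effect:

```bash
invoke> !switch waifu-diffusion
>> Caching model stable-diffusion-1.5 in system RAM
>> Loading waifu-diffusion...
...
invoke> !models
inpainting-1.5        not loaded  Stable Diffusion inpainting model
stable-diffusion-1.5  cached      Stable Diffusion v1.5
waifu-diffusion       active      Waifu Diffusion v1.4
```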
#### `!import_model <hugging_face_repo_ID>`
This imports and installs a `diffusers`-style model that is stored on
the [HuggingFace Web Site](https://huggingface.co). You can look up
any [Stable Diffusion diffusers
model](https://huggingface.co/models?library=diffusers) and install it
with a command like the following:
```bash
!import_model prompthero/openjourney
```
#### `!import_model <path/to/diffusers/directory>`
If you have a copy of a `diffusers`-style model saved to disk, you can
import it by passing the path to model's top-level directory.
#### `!import_model <url>`
For a `.ckpt` or `.safetensors` file, if you have a direct download
URL for the file, you can provide it to `!import_model` and the file
will be downloaded and installed for you.
#### `!import_model <path/to/model/weights.ckpt>`
This command imports a new model weights file into InvokeAI, makes it available
for image generation within the script, and writes out the configuration for the
model into `configs/models.yaml` for use in subsequent sessions.
Provide `!import_model` with the path to a weights file ending in `.ckpt`. If
you type a partial path and press tab, the CLI will autocomplete. Although it
will also autocomplete to `.vae` files, these are not currently supported (but
will be soon).
When you hit return, the CLI will prompt you to fill in additional information
about the model, including the short name you wish to use for it with the
`!switch` command, a brief description of the model, the default image width and
height to use with this model, and the model's configuration file. The latter
three fields are automatically filled with reasonable defaults. In the example
below, the bold-faced text shows what the user typed in with the exception of
the width, height and configuration file paths, which were filled in
automatically.
#### `!import_model <path/to/directory_of_models>`
If you provide the path of a directory that contains one or more
`.ckpt` or `.safetensors` files, the CLI will scan the directory and
interactively offer to import the models it finds there. Also see the
`--autoconvert` command-line option.
#### `!edit_model <name_of_model>`
The `!edit_model` command can be used to modify a model that is already defined
in `configs/models.yaml`. Call it with the short name of the model you wish to
modify, and it will allow you to modify the model's `description`, `weights` and
other fields.
Example:
<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
>> New configuration:
waifu-diffusion:
config: configs/stable-diffusion/v1-inference.yaml
description: Waifu diffusion v1.4beta
weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
height: 512
width: 512
OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
### History processing
The CLI provides a series of convenient commands for reviewing previous actions,
retrieving them, modifying them, and re-running them.
#### `!history`
The invoke script keeps track of all the commands you issue during a session,
allowing you to re-run them. On Mac and Linux systems, it also writes the
command-line history out to disk, giving you access to the most recent 1000
commands issued.
The `!history` command will return a numbered list of all the commands issued
during the session (Windows), or the most recent 1000 commands (Mac|Linux). You
can then repeat a command by using the command `!NNN`, where "NNN" is the
history line number. For example:
!!! example ""
```bash
invoke> !history
...
[14] happy woman sitting under tree wearing broad hat and flowing garment
[15] beautiful woman sitting under tree wearing broad hat and flowing garment
[18] beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6
[20] watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
...
invoke> !20
invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```
#### `!fetch`
This command retrieves the generation parameters from a previously generated
image and either loads them into the command line (Linux|Mac), or prints them
out in a comment for copy-and-paste (Windows). You may provide either the name
of a file in the current output directory, or a full file path. You can also
specify a path to a folder of PNG image files together with the wildcard `*.png`,
plus the name of a text file such as `commands.txt`, to retrieve the commands
used to generate the images and save them to that file for further processing.
!!! example "load the generation command for a single png file"
```bash
invoke> !fetch 0000015.8929913.png
# the script returns the next line, ready for editing and running:
invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
```
!!! example "fetch the generation commands from a batch of files and store them into `selected.txt`"
```bash
invoke> !fetch outputs\selected-imgs\*.png selected.txt
```
#### `!replay`
This command replays a text file generated by `!fetch` or created manually.
!!! example
```bash
invoke> !replay outputs\selected-imgs\selected.txt
```
!!! note
These commands may behave unexpectedly if given a PNG file that was
not generated by InvokeAI.
#### `!search <search string>`
This is similar to `!history`, but it only returns lines that contain
`search string`. For example:
```bash
invoke> !search surreal
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```
#### `!clear`
This clears the search history from memory and disk. Be advised that this
operation is irreversible and does not issue any warnings!
## Command-line editing and completion
The command-line offers convenient history tracking, editing, and command
completion.
- To scroll through previous commands and potentially edit/reuse them, use the
++up++ and ++down++ keys.
- To edit the current command, use the ++left++ and ++right++ keys to position
the cursor, and then ++backspace++, ++delete++ or insert characters.
- To move to the very beginning of the command, type ++ctrl+a++ (or
++command+a++ on the Mac)
- To move to the end of the command, type ++ctrl+e++.
- To cut a section of the command, position the cursor where you want to start
cutting and type ++ctrl+k++
- To paste a cut section back in, position the cursor where you want to paste,
and type ++ctrl+y++
Windows users can get similar, but more limited, functionality if they launch
`invoke.py` with the `winpty` program and have the `pyreadline3` library
installed:
```batch
> winpty python scripts\invoke.py
```
On the Mac and Linux platforms, when you exit invoke.py, the last 1000 lines of
your command-line history will be saved. When you restart `invoke.py`, you can
access the saved history using the ++up++ key.
In addition, limited command-line completion is installed. In various contexts,
you can start typing your command and press ++tab++. A list of potential
completions will be presented to you. You can then type a little more, hit
++tab++ again, and eventually autocomplete what you want.
When specifying file paths using the one-letter shortcuts, the CLI will attempt
to complete pathnames for you. This is most handy for the `-I` (init image) and
`-M` (init mask) paths. To initiate completion, start the path with a slash
(`/`) or `./`. For example:
```bash
invoke> zebra with a mustache -I./test-pictures<TAB>
-I./test-pictures/Lincoln-and-Parrot.png -I./test-pictures/zebra.jpg -I./test-pictures/madonna.png
-I./test-pictures/bad-sketch.png -I./test-pictures/man_with_eagle/
```
You can then type ++z++, hit ++tab++ again, and it will autofill to `zebra.jpg`.
More text completion features (such as autocompleting seeds) are on their way.


@ -0,0 +1,310 @@
---
title: Inpainting
---
# :octicons-paintbrush-16: Inpainting
## **Creating Transparent Regions for Inpainting**
Inpainting is really cool. To do it, you start with an initial image and use a
photoeditor to make one or more regions transparent (i.e. they have a "hole" in
them). You then provide the path to this image at the `invoke>` command line using
the `-I` switch. Stable Diffusion will only paint within the transparent region.
There's a catch. In the current implementation, you have to prepare the initial
image correctly so that the underlying colors are preserved under the
transparent area. Many imaging editing applications will by default erase the
color information under the transparent pixels and replace them with white or
black, which will lead to suboptimal inpainting. It often helps to apply
incomplete transparency, such as any value between 1 and 99%.
You also must take care to export the PNG file in such a way that the color
information is preserved. There is often an option in the export dialog that
lets you specify this.
If your photoeditor is erasing the underlying color information, `invoke.py` will
give you a big fat warning. If you can't find a way to coax your photoeditor to
retain color values under transparent areas, then you can combine the `-I` and
`-M` switches to provide both the original unedited image and the masked
(partially transparent) image:
```bash
invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
```
## **Masking using Text**
You can also create a mask using a text prompt to select the part of the image
you want to alter, using the [clipseg](https://github.com/timojl/clipseg)
algorithm. This works on any image, not just ones generated by InvokeAI.
The `--text_mask` (short form `-tm`) option takes two arguments. The first
argument is a text description of the part of the image you wish to mask (paint
over). If the text description contains a space, you must surround it with
quotation marks. The optional second argument is the minimum threshold for the
mask classifier's confidence score, described in more detail below.
To see how this works in practice, here's an image of a still life painting that
I got off the web.
<figure markdown>
![still life scaled](../assets/still-life-scaled.jpg)
</figure>
You can selectively mask out the orange and replace it with a baseball in this
way:
```bash
invoke> a baseball -I /path/to/still_life.png -tm orange
```
<figure markdown>
![](../assets/still-life-inpainted.png)
</figure>
The clipseg classifier produces a confidence score for each region it
identifies. Generally regions that score above 0.5 are reliable, but if you are
getting too much or too little masking you can adjust the threshold down (to get
more mask), or up (to get less). In this example, by passing `-tm` a higher
value, we are insisting on a tighter mask. However, if you make it too high, the
orange may not be picked up at all!
```bash
invoke> a baseball -I /path/to/breakfast.png -tm orange 0.6
```
The `!mask` command may be useful for debugging problems with the text2mask
feature. The syntax is `!mask /path/to/image.png -tm <text> <threshold>`
It will generate three files:

- The image with the selected area highlighted.
    - it will be named `XXXXX.<imagename>.<prompt>.selected.png`
- The image with the un-selected area highlighted.
    - it will be named `XXXXX.<imagename>.<prompt>.deselected.png`
- The image with the selected area converted into a black and white image
  according to the threshold level
    - it will be named `XXXXX.<imagename>.<prompt>.masked.png`
The `.masked.png` file can then be directly passed to the `invoke>` prompt in
the CLI via the `-M` argument. Do not attempt this with the `selected.png` or
`deselected.png` files, as they contain some transparency throughout the image
and will not produce the desired results.
Here is an example of how `!mask` works:
```bash
invoke> !mask ./test-pictures/curly.png -tm hair 0.5
>> generating masks from ./test-pictures/curly.png
>> Initializing clipseg model for text to mask inference
Outputs:
[941.1] outputs/img-samples/000019.curly.hair.deselected.png: !mask ./test-pictures/curly.png -tm hair 0.5
[941.2] outputs/img-samples/000019.curly.hair.selected.png: !mask ./test-pictures/curly.png -tm hair 0.5
[941.3] outputs/img-samples/000019.curly.hair.masked.png: !mask ./test-pictures/curly.png -tm hair 0.5
```
<figure markdown>
![curly](../assets/outpainting/curly.png)
<figcaption>Original image "curly.png"</figcaption>
</figure>
<figure markdown>
![curly hair selected](../assets/inpainting/000019.curly.hair.selected.png)
<figcaption>000019.curly.hair.selected.png</figcaption>
</figure>
<figure markdown>
![curly hair deselected](../assets/inpainting/000019.curly.hair.deselected.png)
<figcaption>000019.curly.hair.deselected.png</figcaption>
</figure>
<figure markdown>
![curly hair masked](../assets/inpainting/000019.curly.hair.masked.png)
<figcaption>000019.curly.hair.masked.png</figcaption>
</figure>
It looks like we selected the hair pretty well at the 0.5 threshold (which is
the default, so we didn't actually have to specify it), so let's have some fun:
```bash
invoke> medusa with cobras -I ./test-pictures/curly.png -M 000019.curly.hair.masked.png -C20
>> loaded input image of size 512x512 from ./test-pictures/curly.png
...
Outputs:
[946] outputs/img-samples/000024.801380492.png: "medusa with cobras" -s 50 -S 801380492 -W 512 -H 512 -C 20.0 -I ./test-pictures/curly.png -A k_lms -f 0.75
```
<figure markdown>
![](../assets/inpainting/000024.801380492.png)
</figure>
You can also skip the `!mask` creation step and just select the masked
region directly:
```bash
invoke> medusa with cobras -I ./test-pictures/curly.png -tm hair -C20
```
## Using the RunwayML inpainting model
The
[RunwayML Inpainting Model v1.5](https://huggingface.co/runwayml/stable-diffusion-inpainting)
is a specialized version of
[Stable Diffusion v1.5](https://huggingface.co/spaces/runwayml/stable-diffusion-v1-5)
that contains extra channels specifically designed to enhance inpainting and
outpainting. While it can do regular `txt2img` and `img2img`, it really shines
when filling in missing regions. It has an almost uncanny ability to blend the
new regions with existing ones in a semantically coherent way.
To install the inpainting model, follow the
[instructions](../installation/050_INSTALLING_MODELS.md) for installing a new model.
You may use either the CLI (`invoke.py` script) or directly edit the
`configs/models.yaml` configuration file to do this. The main thing to watch out
for is that the model `config` option must be set up to use
`v1-inpainting-inference.yaml` rather than the `v1-inference.yaml` file that is
used by Stable Diffusion 1.4 and 1.5.
After installation, your `models.yaml` should contain an entry that looks like
this one:
```yml
inpainting-1.5:
weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
description: SD inpainting v1.5
config: configs/stable-diffusion/v1-inpainting-inference.yaml
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
width: 512
height: 512
```
As shown in the example, you may include a VAE fine-tuning weights file as well.
This is strongly recommended.
To use the custom inpainting model, launch `invoke.py` with the argument
`--model inpainting-1.5` or alternatively from within the script use the
`!switch inpainting-1.5` command to load and switch to the inpainting model.
You can now do inpainting and outpainting exactly as described above, but there
will (likely) be a noticeable improvement in coherence. Txt2img and Img2img will
work as well.
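For example, to launch with the inpainting model and then paint into a prepared
transparent region (file names illustrative):

```bash
invokeai --model inpainting-1.5
...
invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
```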
There are a few caveats to be aware of:
1. The inpainting model is larger than the standard model, and will use nearly 4
GB of GPU VRAM. This makes it unlikely to run on a 4 GB graphics card.
2. When operating in Img2img mode, the inpainting model is much less steerable
than the standard model. It is great for making small changes, such as
changing the pattern of a fabric, or slightly changing a subject's expression
or hair, but the model will resist making the dramatic alterations that the
standard model lets you do.
3. While the `--hires` option works fine with the inpainting model, some special
features, such as `--embiggen` are disabled.
4. Prompt weighting (`banana++ sushi`) and merging work well with the inpainting
model, but prompt swapping
(`a ("fluffy cat").swap("smiling dog") eating a hotdog`) will not have any
effect due to the way the model is set up. You may use text masking (with
`-tm thing-to-mask`) as an effective replacement.
5. The model tends to oversharpen the image if you use high step or CFG values. If
you need to do large steps, use the standard model.
6. The `--strength` (`-f`) option has no effect on the inpainting model due to
its fundamental differences with the standard model. It will always take the
full number of steps you specify.
## Troubleshooting
Here are some troubleshooting tips for inpainting and outpainting.
### Inpainting is not changing the masked region enough!
One of the things to understand about how inpainting works is that it is
equivalent to running img2img on just the masked (transparent) area. img2img
builds on top of the existing image data, and therefore will attempt to preserve
colors, shapes and textures to the best of its ability. Unfortunately this means
that if you want to make a dramatic change in the inpainted region, for example
replacing a red wall with a blue one, the algorithm will fight you.
You have a couple of options. The first is to increase the values of the
requested steps (`-sXXX`), strength (`-f0.XX`), and/or classifier-free guidance
(`-CXX.X`). If this is not working for you, a more extreme step is to provide
the `--inpaint_replace 0.X` (`-r0.X`) option. This value ranges from 0.0 to 1.0.
The higher it is, the less attention the algorithm will pay to the data
underneath the masked region. At high values this will enable you to replace
colored regions entirely, but beware that the masked region may not blend in
with the surrounding unmasked regions as well.
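For instance, a sketch of forcing a dramatic replacement (file names
illustrative):

```bash
invoke> a blue wall -I./room.png -M./room-mask.png -s80 -f0.9 -C12.0 -r0.8
```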
---
## Recipe for GIMP
[GIMP](https://www.gimp.org/) is a popular Linux photoediting tool.
1. Open image in GIMP.
2. Layer->Transparency->Add Alpha Channel
3. Use lasso tool to select region to mask
4. Choose Select -> Float to create a floating selection
5. Open the Layers toolbar (^L) and select "Floating Selection"
6. Set opacity to a value between 0% and 99%
7. Export as PNG
8. In the export dialogue, make sure the "Save colour values from transparent
pixels" checkbox is selected.
---
## Recipe for Adobe Photoshop
1. Open image in Photoshop
<figure markdown>
![step1](../assets/step1.png)
</figure>
2. Use any of the selection tools (Marquee, Lasso, or Wand) to select the area
you desire to inpaint.
<figure markdown>
![step2](../assets/step2.png)
</figure>
3. Because we'll be applying a mask over the area we want to preserve, you
should now select the inverse by using the ++shift+ctrl+i++ shortcut, or
right clicking and using the "Select Inverse" option.
4. You'll now create a mask by selecting the image layer, and Masking the
selection. Make sure that you don't delete any of the underlying image, or
your inpainting results will be dramatically impacted.
<figure markdown>
![step4](../assets/step4.png)
</figure>
5. Make sure to hide any background layers that are present. You should see the
mask applied to your image layer, and the image on your canvas should display
the checkered background.
<figure markdown>
![step5](../assets/step5.png)
</figure>
6. Save the image as a transparent PNG by using `File`-->`Save a Copy` from the
menu bar, or by using the keyboard shortcut ++alt+ctrl+s++
<figure markdown>
![step6](../assets/step6.png)
</figure>
7. After following the inpainting instructions above (either through the CLI or
the Web UI), marvel at your newfound ability to selectively invoke. Lookin'
good!
<figure markdown>
![step7](../assets/step7.png)
</figure>

@@ -1,589 +0,0 @@
---
title: Command-Line Interface
---
# :material-bash: CLI
## **Interactive Command Line Interface**
The InvokeAI command line interface (CLI) provides scriptable access
to InvokeAI's features. Some advanced features are only available
through the CLI, though they eventually find their way into the WebUI.
The CLI is accessible from the `invoke.sh`/`invoke.bat` launcher by
selecting option (1). Alternatively, it can be launched directly from
the command line by activating the InvokeAI environment and giving the
command:
```bash
invokeai
```
After some startup messages, you will be presented with the `invoke> `
prompt. Here you can type prompts to generate images and issue other
commands to load and manipulate generative models. The CLI has a large
number of command-line options that control its behavior. To get a
concise summary of the options, call `invokeai` with the `--help` argument:
```bash
invokeai --help
```
The script uses the readline library to allow for in-line editing, command
history (++up++ and ++down++), autocompletion, and more. To help keep track of
which prompts generated which images, the script writes a log file of image
names and prompts to the selected output directory.
Here is a typical session
```bash
PS1:C:\Users\fred> invokeai
* Initializing, be patient...
>> Initialization file /home/lstein/invokeai/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI, version 2.3.0-rc5
>> InvokeAI runtime directory is "/home/lstein/invokeai"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cuda
>> xformers memory-efficient attention is available and enabled
(...more initialization messages...)
* Initialization done! Awaiting your command (-h for help, 'q' to quit)
invoke> ashley judd riding a camel -n2 -s150
Outputs:
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
outputs/img-samples/00010.png: "ashley judd riding a camel" -n2 -s150 -S 1362479620
invoke> "there's a fly in my soup" -n6 -g
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
invoke> q
```
![invoke-py-demo](../assets/dream-py-demo.png)
## Arguments
The script recognizes a series of command-line switches that will
change important global defaults, such as the directory for image
outputs and the location of the model weight files.
### List of arguments recognized at the command line
These command-line arguments can be passed to `invoke.py` when you first run it
from the Windows, Mac or Linux command line. Some set defaults that can be
overridden on a per-prompt basis (see
[List of prompt arguments](#list-of-prompt-arguments)). Others set options that
apply to the entire session.
| Argument <img width="240" align="right"/> | Shortcut <img width="100" align="right"/> | Default <img width="320" align="right"/> | Description |
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| `--help` | `-h` | | Print a concise help message. |
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Location for generated images. |
| `--prompt_as_dir` | `-p` | `False` | Name output directories using the prompt text. |
| `--from_file <path>` | | `None` | Read list of prompts from a file. Use `-` to read from standard input |
| `--model <modelname>` | | `stable-diffusion-1.5` | Loads the initial model specified in configs/models.yaml. |
| `--ckpt_convert` | | `False` | If provided, both .ckpt and .safetensors files will be auto-converted into diffusers format in memory |
| `--autoconvert <path>` | | `None` | On startup, scan the indicated directory for new .ckpt/.safetensor files and automatically convert and import them |
| `--precision` | | `fp16` | Provide `fp32` for full precision mode, `fp16` for half-precision. `fp32` needed for Macintoshes and some NVidia cards. |
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| `--safety-checker` | | `False` | Activate safety checker for NSFW and other potentially disturbing imagery |
| `--patchmatch`, `--no-patchmatch` | | `--patchmatch` | Load/Don't load the PatchMatch inpainting extension |
| `--xformers`, `--no-xformers` | | `--xformers` | Load/Don't load the Xformers memory-efficient attention module (CUDA only) |
| `--web` | | `False` | Start in web server mode |
| `--host <ip addr>` | | `localhost` | Which network interface web server should listen on. Set to 0.0.0.0 to listen on any. |
| `--port <port>` | | `9090` | Which port web server should listen for requests on. |
| `--config <path>` | | `configs/models.yaml` | Configuration file for models and their weights. |
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate per prompt. |
| `--width <int>` | `-W<int>` | `512` | Width of generated image |
| `--height <int>` | `-H<int>` | `512` | Height of generated image |
| `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
| `--strength <float>` | `-f<float>` | `0.75` | For img2img: how hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
| `--fit` | `-F` | `False` | For img2img: scale the init image to fit into the specified -H and -W dimensions |
| `--grid` | `-g` | `False` | Save all image series as a grid rather than individually. |
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use `-h` to get list of available samplers. |
| `--seamless` | | `False` | Create interesting effects by tiling elements of the image. |
| `--embedding_path <path>` | | `None` | Path to pre-trained embedding manager checkpoints, for custom models |
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file. |
| `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
| `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |
!!! warning "These arguments are deprecated but still work"
<div align="center" markdown>
| Argument | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
| `--full_precision` | | `False` | Same as `--precision=fp32`|
| `--weights <path>` | | `None` | Path to weights file; use `--model stable-diffusion-1.4` instead |
| `--laion400m` | `-l` | `False` | Use older LAION400m weights; use `--model=laion400m` instead |
</div>
!!! tip
On Windows systems, you may run into
problems when passing the invoke script standard backslashed path
names because the Python interpreter treats "\" as an escape.
You can either double your slashes (ick): `C:\\path\\to\\my\\file`, or
use Linux/Mac style forward slashes (better): `C:/path/to/my/file`.
## The .invokeai initialization file
To start up invoke.py with your preferred settings, place your desired
startup options in a file in your home directory named `.invokeai`. The
file should contain the startup options as you would type them on the
command line (`--steps=10 --grid`), one argument per line, or a
mixture of both using any of the accepted command switch formats:
!!! example "my unmodified initialization file"
```bash title="~/.invokeai" linenums="1"
# InvokeAI initialization file
# This is the InvokeAI initialization file, which contains command-line default values.
# Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
# or renaming it and then running invokeai-configure again.
# The --root option below points to the folder in which InvokeAI stores its models, configs and outputs.
--root="/Users/mauwii/invokeai"
# the --outdir option controls the default location of image files.
--outdir="/Users/mauwii/invokeai/outputs"
# You may place other frequently-used startup commands here, one or more per line.
# Examples:
# --web --host=0.0.0.0
# --steps=20
# -Ak_euler_a -C10.0
```
!!! note
The initialization file only accepts the command line arguments.
There are additional arguments that you can provide on the `invoke>` command
line (such as `-n` or `--iterations`) that cannot be entered into this file.
Also be alert for blank lines at the end of the file, which will cause
an argument error at startup time.
## List of prompt arguments
After the invoke.py script initializes, it will present you with an `invoke>`
prompt. Here you can enter information to generate images from text
([txt2img](#txt2img)), to embellish an existing image or sketch
([img2img](#img2img)), or to selectively alter chosen regions of the image
([inpainting](#inpainting)).
### txt2img
!!! example ""
```bash
invoke> waterfall and rainbow -W640 -H480
```
This will create the requested image with the dimensions 640 (width)
and 480 (height).
Here are the `invoke>` commands that apply to txt2img:
| Argument <img width="680" align="right"/> | Shortcut <img width="420" align="right"/> | Default <img width="480" align="right"/> | Description |
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| "my prompt" | | | Text prompt to use. The quotation marks are optional. |
| `--width <int>` | `-W<int>` | `512` | Width of generated image |
| `--height <int>` | `-H<int>` | `512` | Height of generated image |
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate from this prompt |
| `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
| `--cfg_scale <float>` | `-C<float>` | `7.5` | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
| `--seed <int>` | `-S<int>` | `None` | Set the random seed for the next series of images. This can be used to recreate an image generated previously. |
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use -h to get list of available samplers. |
| `--karras_max <int>` | | `29` | When using k\_\* samplers, set the maximum number of steps before shifting from using the Karras noise schedule (good for low step counts) to the LatentDiffusion noise schedule (good for high step counts) This value is sticky. [29] |
| `--hires_fix` | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| `--grid` | `-g` | `False` | Turn on grid mode to return a single image combining all the images generated by this prompt |
| `--individual` | `-i` | `True` | Turn off grid mode (deprecated; leave off --grid instead) |
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Temporarily change the location of these images |
| `--seamless` | | `False` | Activate seamless tiling for interesting effects |
| `--seamless_axes` | | `x,y` | Specify which axes to use circular convolution on. |
| `--log_tokenization` | `-t` | `False` | Display a color-coded list of the parsed tokens derived from the prompt |
| `--skip_normalization` | `-x` | `False` | Weighted subprompts will not be normalized. See [Weighted Prompts](./OTHER.md#weighted-prompts) |
| `--upscale <int> <float>` | `-U <int> <float>` | `-U 1 0.75` | Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
| `--facetool_strength <float>` | `-G <float> ` | `-G0` | Fix faces (defaults to using the GFPGAN algorithm); argument indicates how hard the algorithm should try (0.0-1.0) |
| `--facetool <name>` | `-ft <name>` | `-ft gfpgan` | Select face restoration algorithm to use: gfpgan, codeformer |
| `--codeformer_fidelity` | `-cf <float>` | `0.75` | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
| `--save_original` | `-save_orig` | `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series a riffs on a starting image. See [Variations](./VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
| `--h_symmetry_time_pct <float>` | | `None` | Create symmetry along the X axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
| `--v_symmetry_time_pct <float>` | | `None` | Create symmetry along the Y axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
!!! note
the width and height of the image must be multiples of 64. You can
provide different values, but they will be rounded down to the nearest multiple
of 64.
!!! example "This is a example of img2img"
```bash
invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
```
This will modify the indicated vacation photograph by making it more like the
prompt. Results will vary greatly depending on what is in the image. We also ask
it to `--fit` the image into a box no bigger than 640x480. Otherwise the image
size will be identical to the provided photo and you may run out of memory if it
is large.
In addition to the command-line options recognized by txt2img, img2img accepts
additional options:
| Argument <img width="160" align="right"/> | Shortcut | Default | Description |
| ----------------------------------------- | ----------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| `--init_img <path>` | `-I<path>` | `None` | Path to the initialization image |
| `--fit` | `-F` | `False` | Scale the image to fit into the specified -H and -W dimensions |
| `--strength <float>` | `-f<float>` | `0.75` | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
### inpainting
!!! example ""
```bash
invoke> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
```
This will do the same thing as img2img, but image alterations will
only occur within transparent areas defined by the mask file specified
by `-M`. You may also supply just a single initial image with the areas
to overpaint made transparent, but you must be careful not to destroy
the pixels underneath when you create the transparent areas. See
[Inpainting](./INPAINTING.md) for details.
inpainting accepts all the arguments used for txt2img and img2img, as well as
the --mask (-M) and --text_mask (-tm) arguments:
| Argument <img width="100" align="right"/> | Shortcut | Default | Description |
| ----------------------------------------- | ------------------------ | ------- | ------------------------------------------------------------------------------------------------ |
| `--init_mask <path>` | `-M<path>` | `None` | Path to an image the same size as the initial_image, with areas for inpainting made transparent. |
| `--invert_mask` | | `False` | If true, invert the mask so that transparent areas are opaque and vice versa. |
| `--text_mask <prompt> [<float>]` | `-tm <prompt> [<float>]` | <none> | Create a mask from a text prompt describing part of the image |
The mask may either be an image with transparent areas, in which case the
inpainting will occur in the transparent areas only, or a black and white image,
in which case all black areas will be painted into.
`--text_mask` (short form `-tm`) is a way to generate a mask using a text
description of the part of the image to replace. For example, if you have an
image of a breakfast plate with a bagel, toast and scrambled eggs, you can
selectively mask the bagel and replace it with a piece of cake this way:
```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel
```
The algorithm uses <a
href="https://github.com/timojl/clipseg">clipseg</a> to classify different
regions of the image. The classifier puts out a confidence score for each region
it identifies. Generally regions that score above 0.5 are reliable, but if you
are getting too much or too little masking you can adjust the threshold down (to
get more mask), or up (to get less). In this example, by passing `-tm` a higher
value, we are insisting on a more stringent classification.
```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6
```
### Custom Styles and Subjects
You can load and use hundreds of community-contributed Textual
Inversion models just by typing the appropriate trigger phrase. Please
see [Concepts Library](CONCEPTS.md) for more details.
## Other Commands
The CLI offers a number of commands that begin with "!".
### Postprocessing images
To postprocess a file using face restoration or upscaling, use the `!fix`
command.
#### `!fix`
This command runs a post-processor on a previously-generated image. It takes a
PNG filename or path and applies your choice of the `-U`, `-G`, or `--embiggen`
switches in order to fix faces or upscale. If you provide a filename, the script
will look for it in the current output directory. Otherwise you can provide a
full or partial path to the desired file.
Some examples:
!!! example "Upscale to 4X its original size and fix faces using codeformer"
```bash
invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
```
!!! example "Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen"
```bash
invoke> !fix 0000045.4829112.png -G0.8 -ft gfpgan
>> fixing outputs/img-samples/0000045.4829112.png
>> retrieved seed 4829112 and prompt "boy enjoying a banana split"
>> GFPGAN - Restoring Faces for image seed:4829112
Outputs:
[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
```
#### `!mask`
This command takes an image, a text prompt, and uses the `clipseg` algorithm to
automatically generate a mask of the area that matches the text prompt. It is
useful for debugging the text masking process prior to inpainting with the
`--text_mask` argument. See [INPAINTING.md] for details.
### Model selection and importation
The CLI allows you to add new models on the fly, as well as to switch
among them rapidly without leaving the script. There are several
different model formats, each described in the [Model Installation
Guide](../installation/050_INSTALLING_MODELS.md).
#### `!models`
This prints out a list of the models defined in `config/models.yaml`. The active
model is bold-faced.
Example:
<pre>
inpainting-1.5 not loaded Stable Diffusion inpainting model
<b>stable-diffusion-1.5 active Stable Diffusion v1.5</b>
waifu-diffusion not loaded Waifu Diffusion v1.4
</pre>
#### `!switch <model>`
This quickly switches from one model to another without leaving the CLI script.
`invoke.py` uses a memory caching system; once a model has been loaded,
switching back and forth is quick. The following example shows this in action.
Note how the second column of the `!models` table changes to `cached` after a
model is first loaded, and that the long initialization step is not needed when
loading a cached model.
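Here is an illustrative reconstruction of such a session (the exact models and loading messages will vary with your installation):
<pre>
invoke> <b>!switch waifu-diffusion</b>
>> Caching model stable-diffusion-1.5 in system RAM
>> Loading waifu-diffusion...
...
invoke> <b>!models</b>
inpainting-1.5       not loaded  Stable Diffusion inpainting model
stable-diffusion-1.5 cached      Stable Diffusion v1.5
<b>waifu-diffusion      active      Waifu Diffusion v1.4</b>
</pre>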
#### `!import_model <hugging_face_repo_ID>`
This imports and installs a `diffusers`-style model that is stored on
the [HuggingFace Web Site](https://huggingface.co). You can look up
any [Stable Diffusion diffusers
model](https://huggingface.co/models?library=diffusers) and install it
with a command like the following:
```bash
!import_model prompthero/openjourney
```
#### `!import_model <path/to/diffusers/directory>`
If you have a copy of a `diffusers`-style model saved to disk, you can
import it by passing the path to model's top-level directory.
#### `!import_model <url>`
For a `.ckpt` or `.safetensors` file, if you have a direct download
URL for the file, you can provide it to `!import_model` and the file
will be downloaded and installed for you.
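For example (the URL below is a placeholder for a real direct-download link):
```bash
invoke> !import_model https://example.com/models/my-model.safetensors
```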
#### `!import_model <path/to/model/weights.ckpt>`
This command imports a new model weights file into InvokeAI, makes it available
for image generation within the script, and writes out the configuration for the
model into `config/models.yaml` for use in subsequent sessions.
Provide `!import_model` with the path to a weights file ending in `.ckpt`. If
you type a partial path and press tab, the CLI will autocomplete. Although it
will also autocomplete to `.vae` files, these are not currently supported (but
will be soon).
When you hit return, the CLI will prompt you to fill in additional information
about the model, including the short name you wish to use for it with the
`!switch` command, a brief description of the model, the default image width and
height to use with this model, and the model's configuration file. The latter
three fields are automatically filled with reasonable defaults. In the example
below, the bold-faced text shows what the user typed in with the exception of
the width, height and configuration file paths, which were filled in
automatically.
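An illustrative session of this kind (the model name, description and file path are hypothetical; bold-faced text is what the user typed) looks like this:
<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:

Name for this model: <b>arabian-nights</b>
Description of this model: <b>Arabian Nights Fine Tune v1.0</b>
Configuration file for this model: configs/stable-diffusion/v1-inference.yaml
Default width of generated images: 512
Default height of generated images: 512
>> New configuration:
arabian-nights:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Arabian Nights Fine Tune v1.0
  weights: models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
  width: 512
  height: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.5 in system RAM
>> Loading arabian-nights from models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
</pre>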
#### `!import_model <path/to/directory_of_models>`
If you provide the path of a directory that contains one or more
`.ckpt` or `.safetensors` files, the CLI will scan the directory and
interactively offer to import the models it finds there. Also see the
`--autoconvert` command-line option.
#### `!edit_model <name_of_model>`
The `!edit_model` command can be used to modify a model that is already defined
in `config/models.yaml`. Call it with the short name of the model you wish to
modify, and it will allow you to modify the model's `description`, `weights` and
other fields.
Example:
<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
>> New configuration:
waifu-diffusion:
config: configs/stable-diffusion/v1-inference.yaml
description: Waifu diffusion v1.4beta
weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
height: 512
width: 512
OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
### History processing
The CLI provides a series of convenient commands for reviewing previous actions,
retrieving them, modifying them, and re-running them.
#### `!history`
The invoke script keeps track of all the commands you issue during a session,
allowing you to re-run them. On Mac and Linux systems, it also writes the
command-line history out to disk, giving you access to the most recent 1000
commands issued.
The `!history` command will return a numbered list of all the commands issued
during the session (Windows), or the most recent 1000 commands (Mac|Linux). You
can then repeat a command by using the command `!NNN`, where "NNN" is the
history line number. For example:
!!! example ""
```bash
invoke> !history
...
[14] happy woman sitting under tree wearing broad hat and flowing garment
[15] beautiful woman sitting under tree wearing broad hat and flowing garment
[18] beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6
[20] watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
...
invoke> !20
invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```
#### `!fetch`
This command retrieves the generation parameters from a previously generated
image and either loads them into the command line (Linux|Mac), or prints them
out in a comment for copy-and-paste (Windows). You may provide either the name
of a file in the current output directory, or a full file path. Alternatively,
specify a folder of PNG images with the wildcard `*.png`, followed by the name
of an output file; the generation command for each matching image will be
retrieved and saved to that file (e.g. `commands.txt`) for further processing.
!!! example "load the generation command for a single png file"
```bash
invoke> !fetch 0000015.8929913.png
# the script returns the next line, ready for editing and running:
invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
```
!!! example "fetch the generation commands from a batch of files and store them into `selected.txt`"
```bash
invoke> !fetch outputs\selected-imgs\*.png selected.txt
```
#### `!replay`
This command replays a text file generated by `!fetch` or created manually.
!!! example
```bash
invoke> !replay outputs\selected-imgs\selected.txt
```
!!! note
These commands may behave unexpectedly if given a PNG file that was
not generated by InvokeAI.
#### `!search <search string>`
This is similar to !history but it only returns lines that contain
`search string`. For example:
```bash
invoke> !search surreal
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```
#### `!clear`
This clears the search history from memory and disk. Be advised that this
operation is irreversible and does not issue any warnings!
## Command-line editing and completion
The command-line offers convenient history tracking, editing, and command
completion.
- To scroll through previous commands and potentially edit/reuse them, use the
++up++ and ++down++ keys.
- To edit the current command, use the ++left++ and ++right++ keys to position
the cursor, and then ++backspace++, ++delete++ or insert characters.
- To move to the very beginning of the command, type ++ctrl+a++ (or
++command+a++ on the Mac)
- To move to the end of the command, type ++ctrl+e++.
- To cut a section of the command, position the cursor where you want to start
cutting and type ++ctrl+k++
- To paste a cut section back in, position the cursor where you want to paste,
and type ++ctrl+y++
Windows users can get similar, but more limited, functionality if they launch
`invoke.py` with the `winpty` program and have the `pyreadline3` library
installed:
```batch
> winpty python scripts\invoke.py
```
On the Mac and Linux platforms, when you exit invoke.py, the last 1000 lines of
your command-line history will be saved. When you restart `invoke.py`, you can
access the saved history using the ++up++ key.
In addition, limited command-line completion is installed. In various contexts,
you can start typing your command and press ++tab++. A list of potential
completions will be presented to you. You can then type a little more, hit
++tab++ again, and eventually autocomplete what you want.
When specifying file paths using the one-letter shortcuts, the CLI will attempt
to complete pathnames for you. This is most handy for the `-I` (init image) and
`-M` (init mask) paths. To initiate completion, start the path with a slash
(`/`) or `./`. For example:
```bash
invoke> zebra with a mustache -I./test-pictures<TAB>
-I./test-pictures/Lincoln-and-Parrot.png -I./test-pictures/zebra.jpg -I./test-pictures/madonna.png
-I./test-pictures/bad-sketch.png -I./test-pictures/man_with_eagle/
```
You can then type ++z++, hit ++tab++ again, and it will autofill to `zebra.jpg`.
More text completion features (such as autocompleting seeds) are on their way.

@@ -1,8 +1,11 @@
---
title: Styles and Subjects
title: Textual Inversion Embeddings and LoRAs
---
# :material-library-shelves: The Hugging Face Concepts Library and Importing Textual Inversion files
# :material-library-shelves: Textual Inversions and LoRAs
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
## Using Textual Inversion Files
@@ -12,27 +15,21 @@ and artistic styles. They are also known as "embeds" in the machine learning
world.
Each TI file introduces one or more vocabulary terms to the SD model. These are
known in InvokeAI as "triggers." Triggers are often, but not always, denoted
using angle brackets as in "&lt;trigger-phrase&gt;". The two most common type of
known in InvokeAI as "triggers." Triggers are denoted using angle brackets
as in "&lt;trigger-phrase&gt;". The two most common type of
TI files that you'll encounter are `.pt` and `.bin` files, which are produced by
different TI training packages. InvokeAI supports both formats, but its
[built-in TI training system](TEXTUAL_INVERSION.md) produces `.pt`.
[built-in TI training system](TRAINING.md) produces `.pt`.
The [Hugging Face company](https://huggingface.co/sd-concepts-library) has
amassed a large library of &gt;800 community-contributed TI files covering a
broad range of subjects and styles. InvokeAI has built-in support for this
library which downloads and merges TI files automatically upon request. You can
also install your own or others' TI files by placing them in a designated
directory.
You may also be interested in using [LoRA Models](LORAS.md) to
generate images with specialized styles and subjects.
broad range of subjects and styles. You can also install your own or others' TI files
by placing them in the designated directory for the compatible model type
### An Example
Here are a few examples to illustrate how Textual Inversion works. All
these images were generated using the command-line client and the
Stable Diffusion 1.5 model:
Here are a few examples to illustrate how it works. All these images were
generated using the command-line client and the Stable Diffusion 1.5 model:
| Japanese gardener | Japanese gardener &lt;ghibli-face&gt; | Japanese gardener &lt;hoi4-leaders&gt; | Japanese gardener &lt;cartoona-animals&gt; |
| :--------------------------------: | :-----------------------------------: | :------------------------------------: | :----------------------------------------: |
@@ -45,120 +42,47 @@ You can also combine styles and concepts:
| :--------------------------------------------------------: |
| ![](../assets/concepts/image5.png) |
</figure>
## Using a Hugging Face Concept
!!! warning "Authenticating to HuggingFace"
Some concepts require valid authentication to HuggingFace. Without it, they will not be downloaded
and will be silently ignored.
If you used an installer to install InvokeAI, you may have already set a HuggingFace token.
If you skipped this step, you can:
- run the InvokeAI configuration script again (if you used a manual installer): `invokeai-configure`
- set one of the `HUGGINGFACE_TOKEN` or `HUGGING_FACE_HUB_TOKEN` environment variables to contain your token
Finally, if you already used any HuggingFace library on your computer, you might already have a token
in your local cache. Check for a hidden `.huggingface` directory in your home folder. If it
contains a `token` file, then you are all set.
Hugging Face TI concepts are downloaded and installed automatically as you
require them. This requires your machine to be connected to the Internet. To
find out what each concept is for, you can browse the
[Hugging Face concepts library](https://huggingface.co/sd-concepts-library) and
look at examples of what each concept produces.
When you have an idea of a concept you wish to try, go to the command-line
client (CLI) and type a `<` character and the beginning of the Hugging Face
concept name you wish to load. Press ++tab++, and the CLI will show you all
matching concepts. You can also type `<` and hit ++tab++ to get a listing of all
~800 concepts, but be prepared to scroll up to see them all! If there is more
than one match you can continue to type and ++tab++ until the concept is
completed.
!!! example
if you type in `<x` and hit ++tab++, you'll be prompted with the completions:
```py
<xatu2> <xatu> <xbh> <xi> <xidiversity> <xioboma> <xuna> <xyz>
```
Now type `id` and press ++tab++. It will be autocompleted to `<xidiversity>`
because this is a unique match.
Finish your prompt and generate as usual. You may include multiple concept terms
in the prompt.
If you have never used this concept before, you will see a message that the TI
model is being downloaded and installed. After this, the concept will be saved
locally (in the `models/sd-concepts-library` directory) for future use.
Several steps happen during downloading and installation, including a scan of
the file for malicious code. Should any errors occur, you will be warned and the
concept will fail to load. Generation will then continue treating the trigger
term as a normal string of characters (e.g. as literal `<ghibli-face>`).
You can also use `<concept-names>` in the WebGUI's prompt textbox. There is no
autocompletion at this time.
## Installing your Own TI Files
You may install any number of `.pt` and `.bin` files simply by copying them into
the `embeddings` directory of the InvokeAI runtime directory (usually `invokeai`
in your home directory). You may create subdirectories in order to organize the
files in any way you wish. Be careful not to overwrite one file with another.
the `embedding` directory of the corresponding InvokeAI models directory (usually `invokeai`
in your home directory). For example, you can simply move a Stable Diffusion 1.5 embedding file to
the `sd-1/embedding` folder. Be careful not to overwrite one file with another.
For example, TI files generated by the Hugging Face toolkit share the name
`learned_embedding.bin`. You can use subdirectories to keep them distinct.
`learned_embedding.bin`. You can rename these, or use subdirectories to keep them distinct.
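A minimal sketch, assuming the default runtime directory layout described above (the source file name and the `ghibli-face` trigger name are illustrative):
```bash
# install a Hugging Face-style embedding under a distinctive name
cp downloads/learned_embeds.bin ~/invokeai/models/sd-1/embedding/ghibli-face.bin
```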
At startup time, InvokeAI will scan the `embeddings` directory and load any TI
files it finds there. At startup you will see messages similar to these:
At startup time, InvokeAI will scan the various `embedding` directories and load any TI
files it finds there for compatible models. At startup you will see a message similar to this one:
```bash
>> Loading embeddings from /data/lstein/invokeai-2.3/embeddings
| Loading v1 embedding file: style-hamunaptra
| Loading v4 embedding file: embeddings/learned_embeds-steps-500.bin
| Loading v2 embedding file: lfa
| Loading v3 embedding file: easynegative
| Loading v1 embedding file: rem_rezero
| Loading v2 embedding file: midj-strong
| Loading v4 embedding file: anime-background-style-v2/learned_embeds.bin
| Loading v4 embedding file: kamon-style/learned_embeds.bin
** Notice: kamon-style/learned_embeds.bin was trained on a model with an incompatible token dimension: 768 vs 1024.
>> Textual inversion triggers: <anime-background-style-v2>, <easynegative>, <lfa>, <midj-strong>, <milo>, Rem3-2600, Style-Hamunaptra
>> Current embedding manager terms: <HOI4-Leader>, <princess-knight>
```
To use these when generating, simply type the `<` key in your prompt to open the Textual Inversion WebUI and
select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.
Textual Inversion embeddings trained on version 1.X stable diffusion
models are incompatible with version 2.X models and vice-versa.
## Using LoRAs
After the embeddings load, InvokeAI will print out a list of all the
recognized trigger terms. To trigger the term, include it in the
prompt exactly as written, including angle brackets if any and
respecting the capitalization.
LoRA files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
There are at least four different embedding file formats, and each uses
a different convention for the trigger terms. In some cases, the
trigger term is specified in the file contents and may or may not be
surrounded by angle brackets. In the example above, `Rem3-2600`,
`Style-Hamunaptra`, and `<midj-strong>` were specified this way and
there is no easy way to change the term.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
In other cases the trigger term is not contained within the embedding
file. In this case, InvokeAI constructs a trigger term consisting of
the base name of the file (without the file extension) surrounded by
angle brackets. In the example above `<easynegative>` is such a file
(the filename was `easynegative.safetensors`). In such cases, you can
change the trigger term simply by renaming the file.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the InvokeAI runtime
directory (usually `invokeai` in your home directory).
## Training your own Textual Inversion models
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of 0.75-1.
InvokeAI provides a script that lets you train your own Textual
Inversion embeddings using a small number (about a half-dozen) images
of your desired style or subject. Please see [Textual
Inversion](TEXTUAL_INVERSION.md) for details.
## Further Reading
Please see [the repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.

@@ -0,0 +1,282 @@
---
title: Configuration
---
# :material-tune-variant: InvokeAI Configuration
## Intro
InvokeAI has numerous runtime settings which can be used to adjust
many aspects of its operations, including the location of files and
directories, memory usage, and performance. These settings can be
viewed and customized in several ways:
1. By editing settings in the `invokeai.yaml` file.
2. By setting environment variables.
3. On the command-line, when InvokeAI is launched.
In addition, the most commonly changed settings are accessible
graphically via the `invokeai-configure` script.
### How the Configuration System Works
When InvokeAI is launched, the very first thing it needs to do is to
find its "root" directory, which contains its configuration files,
installed models, its database of images, and the folder(s) of
generated images themselves. In this document, the root directory will
be referred to as ROOT.
#### Finding the Root Directory
To find its root directory, InvokeAI uses the following recipe:
1. It first looks for the argument `--root <path>` on the command line
it was launched from, and uses the indicated path if present.
2. Next it looks for the environment variable INVOKEAI_ROOT, and uses
the directory path found there if present.
3. If neither of these are present, then InvokeAI looks for the
folder containing the `.venv` Python virtual environment directory for
the currently active environment. This directory is checked for files
expected inside the InvokeAI root before it is used.
4. Finally, InvokeAI looks for a directory in the current user's home
directory named `invokeai`.
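For example, here are two equivalent ways of pointing InvokeAI at a non-default root (the path is illustrative):
```bash
# 1. pass the root on the command line...
invokeai-web --root /opt/shared/invokeai

# 2. ...or export it as an environment variable beforehand
export INVOKEAI_ROOT=/opt/shared/invokeai
invokeai-web
```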
#### Reading the InvokeAI Configuration File
Once the root directory has been located, InvokeAI looks for a file
named `ROOT/invokeai.yaml`, and if present reads configuration values
from it. The top of this file looks like this:
```
InvokeAI:
Web Server:
host: localhost
port: 9090
allow_origins: []
allow_credentials: true
allow_methods:
- '*'
allow_headers:
- '*'
Features:
esrgan: true
internet_available: true
log_tokenization: false
patchmatch: true
restore: true
...
```
The lines in this file are used to establish default values for
Invoke's settings. In the above fragment, the Web Server's listening
port is set to 9090 by the `port` setting.
You can edit this file with a text editor such as "Notepad" (do not
use Word or any other word processor). When editing, be careful to
maintain the indentation, and do not add extraneous text, as syntax
errors will prevent InvokeAI from launching. A basic guide to the
format of YAML files can be found
[here](https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/).
You can fix a broken `invokeai.yaml` by deleting it and running the
configuration script again -- option [7] in the launcher, "Re-run the
configure script".
#### Reading Environment Variables
Next InvokeAI looks for defined environment variables in the format
`INVOKEAI_<setting_name>`, for example `INVOKEAI_port`. Environment
variable values take precedence over configuration file variables. On
a Macintosh system, for example, you could change the port that the
web server listens on by setting the environment variable this way:
```
export INVOKEAI_port=8000
invokeai-web
```
Please check out these
[Macintosh](https://phoenixnap.com/kb/set-environment-variable-mac)
and
[Windows](https://phoenixnap.com/kb/windows-set-environment-variable)
guides for setting temporary and permanent environment variables.
#### Reading the Command Line
Lastly, InvokeAI takes settings from the command line, which override
everything else. The command-line settings have the same name as the
corresponding configuration file settings, preceded by a `--`, for
example `--port 8000`.
If you are using the launcher (`invoke.sh` or `invoke.bat`) to launch
InvokeAI, then just pass the command-line arguments to the launcher:
```
invoke.bat --port 8000 --host 0.0.0.0
```
The arguments will be applied when you select the web server option
(and the other options as well).
If, on the other hand, you prefer to launch InvokeAI directly from the
command line, you would first activate the virtual environment (known
as the "developer's console" in the launcher), and run `invokeai-web`:
```
> C:\Users\Fred\invokeai\.venv\scripts\activate
(.venv) > invokeai-web --port 8000 --host 0.0.0.0
```
You can get a listing and brief instructions for each of the
command-line options by giving the `--help` argument:
```
(.venv) > invokeai-web --help
usage: InvokeAI [-h] [--host HOST] [--port PORT] [--allow_origins [ALLOW_ORIGINS ...]] [--allow_credentials | --no-allow_credentials] [--allow_methods [ALLOW_METHODS ...]]
[--allow_headers [ALLOW_HEADERS ...]] [--esrgan | --no-esrgan] [--internet_available | --no-internet_available] [--log_tokenization | --no-log_tokenization]
[--patchmatch | --no-patchmatch] [--restore | --no-restore]
[--always_use_cpu | --no-always_use_cpu] [--free_gpu_mem | --no-free_gpu_mem] [--max_loaded_models MAX_LOADED_MODELS] [--max_cache_size MAX_CACHE_SIZE]
[--max_vram_cache_size MAX_VRAM_CACHE_SIZE] [--gpu_mem_reserved GPU_MEM_RESERVED] [--precision {auto,float16,float32,autocast}]
[--sequential_guidance | --no-sequential_guidance] [--xformers_enabled | --no-xformers_enabled] [--tiled_decode | --no-tiled_decode] [--root ROOT]
[--autoimport_dir AUTOIMPORT_DIR] [--lora_dir LORA_DIR] [--embedding_dir EMBEDDING_DIR] [--controlnet_dir CONTROLNET_DIR] [--conf_path CONF_PATH]
[--models_dir MODELS_DIR] [--legacy_conf_dir LEGACY_CONF_DIR] [--db_dir DB_DIR] [--outdir OUTDIR] [--from_file FROM_FILE]
[--use_memory_db | --no-use_memory_db] [--model MODEL] [--log_handlers [LOG_HANDLERS ...]] [--log_format {plain,color,syslog,legacy}]
[--log_level {debug,info,warning,error,critical}] [--version | --no-version]
```
## The Configuration Settings
The configuration settings are divided into several distinct
groups in `invokeai.yaml`:
### Web Server
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
The documentation for InvokeAI's API can be accessed by browsing to the following URL: <http://localhost:9090/docs>.
### Features
These configuration settings allow you to enable and disable various InvokeAI features:
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `esrgan` | `true` | Activate the ESRGAN upscaling options|
| `internet_available` | `true` | When a resource is not available locally, try to fetch it via the internet |
| `log_tokenization` | `false` | Before each text2image generation, print a color-coded representation of the prompt to the console; this can help understand why a prompt is not working as expected |
| `patchmatch` | `true` | Activate the "patchmatch" algorithm for improved inpainting |
| `restore` | `true` | Activate the facial restoration features (DEPRECATED; restoration features will be removed in 3.0.0) |
### Memory/Performance
These options tune InvokeAI's memory and performance characteristics.
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `always_use_cpu` | `false` | Use the CPU to generate images, even if a GPU is available |
| `free_gpu_mem` | `false` | Aggressively free up GPU memory after each operation; this will allow you to run in low-VRAM environments with some performance penalties |
| `max_cache_size` | `6` | Amount of CPU RAM (in GB) to reserve for caching models in memory; more cache allows you to keep models in memory and switch among them quickly |
| `max_vram_cache_size` | `2.75` | Amount of GPU VRAM (in GB) to reserve for caching models in VRAM; more cache speeds up generation but reduces the size of the images that can be generated. This can be set to zero to maximize the amount of memory available for generation. |
| `precision` | `auto` | Floating point precision. One of `auto`, `float16` or `float32`. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system |
| `sequential_guidance` | `false` | Calculate guidance in serial rather than in parallel, lowering memory requirements at the cost of some performance loss |
| `xformers_enabled` | `true` | If the x-formers memory-efficient attention module is installed, activate it for better memory usage and generation speed|
| `tiled_decode` | `false` | If true, then during the VAE decoding phase the image will be decoded a section at a time, reducing memory consumption at the cost of a performance hit |
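Because each of these settings can also be supplied on the command line (as described above), a quick way to experiment with values before committing them to `invokeai.yaml` is:
```bash
# larger RAM model cache, half precision, and aggressive GPU memory freeing
invokeai-web --max_cache_size 8 --precision float16 --free_gpu_mem
```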
### Paths
These options set the paths of various directories and files used by
InvokeAI. Relative paths are interpreted relative to INVOKEAI_ROOT, so
if INVOKEAI_ROOT is `/home/fred/invokeai` and the path is
`autoimport/main`, then the corresponding directory will be located at
`/home/fred/invokeai/autoimport/main`.
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `autoimport_dir` | `autoimport/main` | At startup time, read and import any main model files found in this directory |
| `lora_dir` | `autoimport/lora` | At startup time, read and import any LoRA/LyCORIS models found in this directory |
| `embedding_dir` | `autoimport/embedding` | At startup time, read and import any textual inversion (embedding) models found in this directory |
| `controlnet_dir` | `autoimport/controlnet` | At startup time, read and import any ControlNet models found in this directory |
| `conf_path` | `configs/models.yaml` | Location of the `models.yaml` model configuration file |
| `models_dir` | `models` | Location of the directory containing models installed by InvokeAI's model manager |
| `legacy_conf_dir` | `configs/stable-diffusion` | Location of the directory containing the .yaml configuration files for legacy checkpoint models |
| `db_dir` | `databases` | Location of the directory containing InvokeAI's image, schema and session database |
| `outdir` | `outputs` | Location of the directory in which the gallery of generated and uploaded images will be stored |
| `use_memory_db` | `false` | Keep database information in memory rather than on disk; this will not preserve image gallery information across restarts |
Note that the autoimport directories will be searched recursively,
allowing you to organize the models into folders and subfolders in any
way you wish. In addition, while we have split up autoimport
directories by the type of model they contain, this isn't
necessary. You can combine different model types in the same folder
and InvokeAI will figure out what they are. So you can easily use just
one autoimport directory by commenting out the unneeded paths:
```
Paths:
autoimport_dir: autoimport
# lora_dir: null
# embedding_dir: null
# controlnet_dir: null
```
### Logging
These settings control the information, warning, and debugging
messages printed to the console log while InvokeAI is running:
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `log_handlers` | `console` | This controls where log messages are sent, and can be a list of one or more destinations. Values include `console`, `file`, `syslog` and `http`. These are described in more detail below |
| `log_format` | `color` | This controls the formatting of the log messages. Values are `plain`, `color`, `legacy` and `syslog` |
| `log_level` | `debug` | This filters messages according to the level of severity and can be one of `debug`, `info`, `warning`, `error` and `critical`. For example, setting to `warning` will display all messages at the warning level or higher, but won't display "debug" or "info" messages |
Several different log handler destinations are available, and multiple destinations are supported by providing a list:
```
log_handlers:
- console
- syslog=localhost
- file=/var/log/invokeai.log
```
* `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
* `syslog` is only available on Linux and Macintosh systems. It uses
the operating system's "syslog" facility to write log file entries
locally or to a remote logging machine. `syslog` offers a variety
of configuration options:
```
syslog=/dev/log - log to the /dev/log device
syslog=localhost - log to the network logger running on the local machine
syslog=localhost:512 - same as above, but using a non-standard port
syslog=fredserver,facility=LOG_USER,socktype=SOCK_DGRAM
- log to LAN-connected server "fredserver" using the facility LOG_USER and datagram packets
```
* `http` can be used to log to a remote web server. The server must be
properly configured to receive and act on log messages. The option
accepts the URL to the web server, and a `method` argument
indicating whether the message should be submitted using the GET or
POST method.
```
http=http://my.server/path/to/logger,method=POST
```
The `log_format` option provides several alternative formats:
* `color` - default format providing time, date and a message, using text colors to distinguish different log severities
* `plain` - same as above, but monochrome text only
* `syslog` - the log level and error message only, allowing the syslog system to attach the time and date
* `legacy` - a format similar to the one used by the legacy 2.3 InvokeAI releases.

docs/features/CONTROLNET.md Normal file

@@ -0,0 +1,136 @@
---
title: ControlNet
---
# :material-loupe: ControlNet
## ControlNet
ControlNet is a powerful set of features developed by the open-source
community (notably, Stanford researcher
[**@lllyasviel**](https://github.com/lllyasviel)) that allows you to
apply a secondary neural network model to your image generation
process in Invoke.
With ControlNet, you can get more control over the output of your
image generation, providing you with a way to direct the network
towards generating images that better fit your desired style or
outcome.
### How it works
ControlNet works by analyzing an input image, pre-processing that
image to identify relevant information that can be interpreted by each
specific ControlNet model, and then inserting that control information
into the generation process. This can be used to adjust the style,
composition, or other aspects of the image to better achieve a
specific result.
### Models
InvokeAI provides access to a series of ControlNet models that provide
different effects or styles in your generated images. Currently
InvokeAI only supports "diffuser" style ControlNet models. These are
folders that contain the files `config.json` and/or
`diffusion_pytorch_model.safetensors` and
`diffusion_pytorch_model.fp16.safetensors`. The name of the folder is
the name of the model.
***InvokeAI does not currently support checkpoint-format
ControlNets. These come in the form of a single file with the
extension `.safetensors`.***
Diffuser-style ControlNet models are available at HuggingFace
(http://huggingface.co) and accessed via their repo IDs (identifiers
in the format "author/modelname"). The easiest way to install them is
to use the InvokeAI model installer application. Use the
`invoke.sh`/`invoke.bat` launcher to select item [5] and then navigate
to the CONTROLNETS section. Select the models you wish to install and
press "APPLY CHANGES". You may also enter additional HuggingFace
repo_ids in the "Additional models" textbox:
![Model Installer -
ControlNet](../assets/installing-models/model-installer-controlnet.png){:width="640px"}
Command-line users can launch the model installer using the command
`invokeai-model-install`.
_Be aware that some ControlNet models require additional code
functionality in order to work properly, so just installing a
third-party ControlNet model may not have the desired effect._ Please
read and follow the documentation for installing a third party model
not currently included among InvokeAI's default list.
The models currently supported include:
**Canny**:
When the Canny model is used in ControlNet, Invoke will attempt to generate images that match the edges detected.
Canny edge detection works by detecting the edges in an image by looking for abrupt changes in intensity. It is known for its ability to detect edges accurately while reducing noise and false edges, and the preprocessor can identify more information by decreasing the thresholds.
**M-LSD**:
M-LSD is another edge detection algorithm used in ControlNet. It stands for Multi-Scale Line Segment Detector.
It detects straight line segments in an image by analyzing the local structure of the image at multiple scales. It can be useful for architectural imagery, or anything where straight-line structural information is needed for the resulting output.
**Lineart**:
The Lineart model in ControlNet generates line drawings from an input image. The resulting pre-processed image is a simplified version of the original, with only the outlines of objects visible. The Lineart model in ControlNet is known for its ability to accurately capture the contours of the objects in an input sketch.
**Lineart Anime**:
A variant of the Lineart model that generates line drawings with a distinct style inspired by anime and manga art styles.
**Depth**:
A model that generates depth maps of images, allowing you to create more realistic 3D models or to simulate depth effects in post-processing.
**Normal Map (BAE):**
A model that generates normal maps from input images, allowing for more realistic lighting effects in 3D rendering.
**Image Segmentation**:
A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)
**Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
**Mediapipe Face**:
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.
**Tile (experimental)**:
The Tile model fills out details in the image to match the existing image content, rather than the prompt. The Tile model is a versatile tool that offers a range of functionalities. Its primary capabilities can be boiled down to two main behaviors:
- It can reinterpret specific details within an image and create fresh, new elements.
- It has the ability to disregard global instructions if there's a discrepancy between them and the local context or specific parts of the image. In such cases, it uses the local context to guide the process.
The Tile Model can be a powerful tool in your arsenal for enhancing image quality and details. If there are undesirable elements in your images, such as blurriness caused by resizing, this model can effectively eliminate these issues, resulting in cleaner, crisper images. Moreover, it can generate and add refined details to your images, improving their overall quality and appeal.
**Pix2Pix (experimental)**
With Pix2Pix, you can input an image into the controlnet, and then "instruct" the model to change it using your prompt. For example, you can say "Make it winter" to add more wintry elements to a scene.
**Inpaint**: Coming Soon - Currently this model is available but not functional on the Canvas. An upcoming release will provide additional capabilities for using this model when inpainting.
Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
## Using ControlNet
To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
Each ControlNet has two settings that are applied:

- **Weight** - Strength of the ControlNet model applied to the generation for the section, defined by Start/End.
- **Start/End** - 0 represents the start of the generation, 1 represents the end. The Start/End setting controls which steps during the generation process have the ControlNet applied.

Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it when you Invoke.


@ -4,86 +4,13 @@ title: Image-to-Image
# :material-image-multiple: Image-to-Image
InvokeAI provides an "img2img" feature that lets you seed your
creations with an initial drawing or photo. This is a really cool
feature that tells stable diffusion to build the prompt on top of the
image you provide, preserving the original's basic shape and layout.
See the [WebUI Guide](WEB.md) for a walkthrough of the img2img feature
in the InvokeAI web server. This document describes how to use img2img
in the command-line tool.
## Basic Usage
Launch the command-line client by launching `invoke.sh`/`invoke.bat`
and choosing option (1). Alternatively, activate the InvokeAI
environment and issue the command `invokeai`.
Once the `invoke> ` prompt appears, you can start an img2img render by
pointing to a seed file with the `-I` option as shown here:
!!! example ""
```commandline
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
```
<figure markdown>
| original image | generated image |
| :------------: | :-------------: |
| ![original-image](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png){ width=320 } | ![generated-image](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png){ width=320 } |
</figure>
The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
(`-f`) controls how much the original will be modified, ranging from `0.0` (keep
the original intact) to `1.0` (ignore the original completely). The default is
`0.75`, and values in the range `0.25-0.90` give interesting results. Other relevant
options include `-C` (classifier-free guidance scale) and `-s` (steps).
Unlike `txt2img`, adding steps will continuously change the resulting image and
it will not converge.
You may also pass a `-v<variation_amount>` option to generate `-n<iterations>`
variants of the original image. This is done by passing the first
generated image back into img2img the requested number of times. It generates
interesting variants.
Note that the prompt makes a big difference. For example, this slight variation
on the prompt produces a very different image:
<figure markdown>
![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png){ width=320 }
<caption markdown>photograph of a tree on a hill with a river</caption>
</figure>
!!! tip
When designing prompts, think about how the images scraped from the internet were
captioned. Very few photographs will be labeled "photograph" or "photorealistic."
They will, however, be captioned with the publication, photographer, camera model,
or film settings.
If the initial image contains transparent regions, then Stable Diffusion will
only draw within the transparent regions, a process called
[`inpainting`](./INPAINTING.md#creating-transparent-regions-for-inpainting).
However, for this to work correctly, the color information underneath the
transparent regions needs to be preserved, not erased.
!!! warning "**IMPORTANT ISSUE** "
`img2img` does not work properly on initial images smaller
than 512x512. Please scale your image to at least 512x512 before using it.
Larger images are not a problem, but may run out of VRAM on your GPU card. To
fix this, use the `--fit` option, which downscales the initial image to fit within
the box specified by width x height:
```
tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
```
For a walkthrough of using Image-to-Image in the Web UI, see [InvokeAI
Web Server](./WEB.md#image-to-image).

## How does it actually work, though?
The main difference between `img2img` and `prompt2img` is the starting point.
While `prompt2img` always starts with pure gaussian noise and progressively
@ -99,10 +26,6 @@ seed `1592514025` develops something like this:
!!! example ""
```bash
invoke> "fire" -s10 -W384 -H384 -S1592514025
```
<figure markdown>
![latent steps](../assets/img2img/000019.steps.png){ width=720 }
</figure>
@ -157,17 +80,8 @@ Diffusion has less chance to refine itself, so the result ends up inheriting all
the problems of my bad drawing.
If you want to try this out yourself, all of these are using a seed of
`1592514025` with a width/height of `384`, step count `10`, the default sampler
(`k_lms`), and the single-word prompt `"fire"`:
```bash
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
The code for rendering intermediates is on my (damian0815's) branch
[document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) -
run `invoke.py` and check your `outputs/img-samples/intermediates` folder while
generating an image.
### Compensating for the reduced step count
@ -180,10 +94,6 @@ give each generation 20 steps.
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD
does `20` steps from my image):
```bash
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```
<figure markdown>
![000035.1592514025](../assets/img2img/000035.1592514025.png)
</figure>
@ -191,10 +101,6 @@ invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
and here is strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to
make sure SD does `20` steps from my image):
```commandline
invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
```
<figure markdown>
![000046.1592514025](../assets/img2img/000046.1592514025.png)
</figure>


@ -1,306 +0,0 @@
---
title: Inpainting
---
# :octicons-paintbrush-16: Inpainting
## **Creating Transparent Regions for Inpainting**
Inpainting is really cool. To do it, you start with an initial image and use a
photoeditor to make one or more regions transparent (i.e. they have a "hole" in
them). You then provide the path to this image at the `invoke>` command line using
the `-I` switch. Stable Diffusion will only paint within the transparent region.
There's a catch. In the current implementation, you have to prepare the initial
image correctly so that the underlying colors are preserved under the
transparent area. Many imaging editing applications will by default erase the
color information under the transparent pixels and replace them with white or
black, which will lead to suboptimal inpainting. It often helps to apply
incomplete transparency, such as any opacity value between 1% and 99%.
You also must take care to export the PNG file in such a way that the color
information is preserved. There is often an option in the export dialog that
lets you specify this.
If your photoeditor is erasing the underlying color information, `invoke.py` will
give you a big fat warning. If you can't find a way to coax your photoeditor to
retain color values under transparent areas, then you can combine the `-I` and
`-M` switches to provide both the original unedited image and the masked
(partially transparent) image:
```bash
invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
```
## **Masking using Text**
You can also create a mask using a text prompt to select the part of the image
you want to alter, using the [clipseg](https://github.com/timojl/clipseg)
algorithm. This works on any image, not just ones generated by InvokeAI.
The `--text_mask` (short form `-tm`) option takes two arguments. The first
argument is a text description of the part of the image you wish to mask (paint
over). If the text description contains a space, you must surround it with
quotation marks. The optional second argument is the minimum threshold for the
mask classifier's confidence score, described in more detail below.
To see how this works in practice, here's an image of a still life painting that
I got off the web.
<figure markdown>
![still life scaled](../assets/still-life-scaled.jpg)
</figure>
You can selectively mask out the orange and replace it with a baseball in this
way:
```bash
invoke> a baseball -I /path/to/still_life.png -tm orange
```
<figure markdown>
![](../assets/still-life-inpainted.png)
</figure>
The clipseg classifier produces a confidence score for each region it
identifies. Generally regions that score above 0.5 are reliable, but if you are
getting too much or too little masking you can adjust the threshold down (to get
more mask), or up (to get less). In this example, by passing `-tm` a higher
value, we are insisting on a tigher mask. However, if you make it too high, the
orange may not be picked up at all!
```bash
invoke> a baseball -I /path/to/breakfast.png -tm orange 0.6
```
The `!mask` command may be useful for debugging problems with the text2mask
feature. The syntax is `!mask /path/to/image.png -tm <text> <threshold>`
It will generate three files:

- The image with the selected area highlighted.
    - it will be named XXXXX.<imagename>.<prompt>.selected.png
- The image with the un-selected area highlighted.
    - it will be named XXXXX.<imagename>.<prompt>.deselected.png
- The image with the selected area converted into a black and white image
  according to the threshold level.
    - it will be named XXXXX.<imagename>.<prompt>.masked.png
The `.masked.png` file can then be directly passed to the `invoke>` prompt in
the CLI via the `-M` argument. Do not attempt this with the `selected.png` or
`deselected.png` files, as they contain some transparency throughout the image
and will not produce the desired results.
Here is an example of how `!mask` works:
```bash
invoke> !mask ./test-pictures/curly.png -tm hair 0.5
>> generating masks from ./test-pictures/curly.png
>> Initializing clipseg model for text to mask inference
Outputs:
[941.1] outputs/img-samples/000019.curly.hair.deselected.png: !mask ./test-pictures/curly.png -tm hair 0.5
[941.2] outputs/img-samples/000019.curly.hair.selected.png: !mask ./test-pictures/curly.png -tm hair 0.5
[941.3] outputs/img-samples/000019.curly.hair.masked.png: !mask ./test-pictures/curly.png -tm hair 0.5
```
<figure markdown>
![curly](../assets/outpainting/curly.png)
<figcaption>Original image "curly.png"</figcaption>
</figure>
<figure markdown>
![curly hair selected](../assets/inpainting/000019.curly.hair.selected.png)
<figcaption>000019.curly.hair.selected.png</figcaption>
</figure>
<figure markdown>
![curly hair deselected](../assets/inpainting/000019.curly.hair.deselected.png)
<figcaption>000019.curly.hair.deselected.png</figcaption>
</figure>
<figure markdown>
![curly hair masked](../assets/inpainting/000019.curly.hair.masked.png)
<figcaption>000019.curly.hair.masked.png</figcaption>
</figure>
It looks like we selected the hair pretty well at the 0.5 threshold (which is
the default, so we didn't actually have to specify it), so let's have some fun:
```bash
invoke> medusa with cobras -I ./test-pictures/curly.png -M 000019.curly.hair.masked.png -C20
>> loaded input image of size 512x512 from ./test-pictures/curly.png
...
Outputs:
[946] outputs/img-samples/000024.801380492.png: "medusa with cobras" -s 50 -S 801380492 -W 512 -H 512 -C 20.0 -I ./test-pictures/curly.png -A k_lms -f 0.75
```
<figure markdown>
![](../assets/inpainting/000024.801380492.png)
</figure>
You can also skip the `!mask` creation step and just select the masked
region directly:
```bash
invoke> medusa with cobras -I ./test-pictures/curly.png -tm hair -C20
```
## Using the RunwayML inpainting model
The
[RunwayML Inpainting Model v1.5](https://huggingface.co/runwayml/stable-diffusion-inpainting)
is a specialized version of
[Stable Diffusion v1.5](https://huggingface.co/spaces/runwayml/stable-diffusion-v1-5)
that contains extra channels specifically designed to enhance inpainting and
outpainting. While it can do regular `txt2img` and `img2img`, it really shines
when filling in missing regions. It has an almost uncanny ability to blend the
new regions with existing ones in a semantically coherent way.
To install the inpainting model, follow the
[instructions](../installation/050_INSTALLING_MODELS.md) for installing a new model.
You may use either the CLI (`invoke.py` script) or directly edit the
`configs/models.yaml` configuration file to do this. The main thing to watch out
for is that the model `config` option must be set up to use
`v1-inpainting-inference.yaml` rather than the `v1-inference.yaml` file that is
used by Stable Diffusion 1.4 and 1.5.
After installation, your `models.yaml` should contain an entry that looks like
this one:
```
inpainting-1.5:
  weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
  description: SD inpainting v1.5
  config: configs/stable-diffusion/v1-inpainting-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
```
As shown in the example, you may include a VAE fine-tuning weights file as well.
This is strongly recommended.
To use the custom inpainting model, launch `invoke.py` with the argument
`--model inpainting-1.5` or alternatively from within the script use the
`!switch inpainting-1.5` command to load and switch to the inpainting model.
You can now do inpainting and outpainting exactly as described above, but there
will (likely) be a noticeable improvement in coherence. Txt2img and Img2img will
work as well.
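For instance, a typical session might start like this (the `scripts/` path is an assumption; launch `invoke.py` however you normally do):

```bash
# Start the CLI with the inpainting model preloaded:
python scripts/invoke.py --model inpainting-1.5

# ...or switch to it from within a running session:
invoke> !switch inpainting-1.5
```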
There are a few caveats to be aware of:
1. The inpainting model is larger than the standard model, and will use nearly 4
GB of GPU VRAM. This makes it unlikely to run on a 4 GB graphics card.
2. When operating in Img2img mode, the inpainting model is much less steerable
than the standard model. It is great for making small changes, such as
changing the pattern of a fabric, or slightly changing a subject's expression
or hair, but the model will resist making the dramatic alterations that the
standard model lets you do.
3. While the `--hires` option works fine with the inpainting model, some special
features, such as `--embiggen`, are disabled.
4. Prompt weighting (`banana++ sushi`) and merging work well with the inpainting
model, but prompt swapping
(`a ("fluffy cat").swap("smiling dog") eating a hotdog`) will not have any
effect due to the way the model is set up. You may use text masking (with
`-tm thing-to-mask`) as an effective replacement.
5. The model tends to oversharpen the image if you use high step or CFG values. If
you need to do large steps, use the standard model.
6. The `--strength` (`-f`) option has no effect on the inpainting model due to
its fundamental differences with the standard model. It will always take the
full number of steps you specify.
## Troubleshooting
Here are some troubleshooting tips for inpainting and outpainting.
### Inpainting is not changing the masked region enough!
One of the things to understand about how inpainting works is that it is
equivalent to running img2img on just the masked (transparent) area. img2img
builds on top of the existing image data, and therefore will attempt to preserve
colors, shapes and textures to the best of its ability. Unfortunately this means
that if you want to make a dramatic change in the inpainted region, for example
replacing a red wall with a blue one, the algorithm will fight you.
You have a couple of options. The first is to increase the values of the
requested steps (`-sXXX`), strength (`-f0.XX`), and/or classifier-free guidance
(`-CXX.X`). If this is not working for you, a more extreme step is to provide
the `--inpaint_replace 0.X` (`-r0.X`) option. This value ranges from 0.0 to 1.0.
The higher it is the less attention the algorithm will pay to the data
underneath the masked region. At high values this will enable you to replace
colored regions entirely, but beware that the masked region may not blend in
with the surrounding unmasked regions as well.
---
## Recipe for GIMP
[GIMP](https://www.gimp.org/) is a popular Linux photoediting tool.
1. Open image in GIMP.
2. Layer->Transparency->Add Alpha Channel
3. Use lasso tool to select region to mask
4. Choose Select -> Float to create a floating selection
5. Open the Layers toolbar (^L) and select "Floating Selection"
6. Set opacity to a value between 0% and 99%
7. Export as PNG
8. In the export dialogue, make sure the "Save colour values from transparent
pixels" checkbox is selected.
---
## Recipe for Adobe Photoshop
1. Open image in Photoshop
<figure markdown>
![step1](../assets/step1.png)
</figure>
2. Use any of the selection tools (Marquee, Lasso, or Wand) to select the area
you desire to inpaint.
<figure markdown>
![step2](../assets/step2.png)
</figure>
3. Because we'll be applying a mask over the area we want to preserve, you
should now select the inverse by using the ++shift+ctrl+i++ shortcut, or
right clicking and using the "Select Inverse" option.
4. You'll now create a mask by selecting the image layer, and Masking the
selection. Make sure that you don't delete any of the underlying image, or
your inpainting results will be dramatically impacted.
<figure markdown>
![step4](../assets/step4.png)
</figure>
5. Make sure to hide any background layers that are present. You should see the
mask applied to your image layer, and the image on your canvas should display
the checkered background.
<figure markdown>
![step5](../assets/step5.png)
</figure>
6. Save the image as a transparent PNG by using `File`-->`Save a Copy` from the
menu bar, or by using the keyboard shortcut ++alt+ctrl+s++
<figure markdown>
![step6](../assets/step6.png)
</figure>
7. After following the inpainting instructions above (either through the CLI or
the Web UI), marvel at your newfound ability to selectively invoke. Lookin'
good!
<figure markdown>
![step7](../assets/step7.png)
</figure>

docs/features/LOGGING.md (new file, 171 lines)

@ -0,0 +1,171 @@
---
title: Controlling Logging
---
# :material-image-off: Controlling Logging
## Controlling How InvokeAI Logs Status Messages
InvokeAI logs status messages using a configurable logging system. You
can log to the terminal window, to a designated file on the local
machine, to the syslog facility on a Linux or Mac, or to a properly
configured web server. You can configure several logs at the same
time, and control the level of message logged and the logging format
(to a limited extent).
Three command-line options control logging:
### `--log_handlers <handler1> <handler2> ...`
This option activates one or more log handlers. Options are "console",
"file", "syslog" and "http". To specify more than one, separate them
by spaces:
```bash
invokeai-web --log_handlers console syslog=/dev/log file=C:\Users\fred\invokeai.log
```
The format of these options is described below.
### `--log_format {plain|color|legacy|syslog}`
This controls the format of log messages written to the console. Only
the "console" log handler is currently affected by this setting.
* "plain" provides formatted messages like this:
```bash
[2023-05-24 23:18:50,352]::[InvokeAI]::DEBUG --> this is a debug message
[2023-05-24 23:18:50,352]::[InvokeAI]::INFO --> this is an informational message
[2023-05-24 23:18:50,352]::[InvokeAI]::WARNING --> this is a warning
[2023-05-24 23:18:50,352]::[InvokeAI]::ERROR --> this is an error
[2023-05-24 23:18:50,352]::[InvokeAI]::CRITICAL --> this is a critical error
```
* "color" produces similar output, but the text will be color coded to
indicate the severity of the message.
* "legacy" produces output similar to InvokeAI versions 2.3 and earlier:
```bash
### this is a critical error
*** this is an error
** this is a warning
>> this is an informational message
| this is a debug message
```
* "syslog" produces messages suitable for syslog entries:
```bash
InvokeAI [2691178] <CRITICAL> this is a critical error
InvokeAI [2691178] <ERROR> this is an error
InvokeAI [2691178] <WARNING> this is a warning
InvokeAI [2691178] <INFO> this is an informational message
InvokeAI [2691178] <DEBUG> this is a debug message
```
(note that the date, time and hostname will be added by the syslog
system)
### `--log_level {debug|info|warning|error|critical}`
Providing this command-line option will cause only messages at the
specified level or above to be emitted.
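For example, to suppress everything below warning severity:

```bash
invokeai-web --log_level warning
```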
## Console logging
When "console" is provided to `--log_handlers`, messages will be
written to the command line window in which InvokeAI was launched. By
default, the color formatter will be used unless overridden by
`--log_format`.
## File logging
When "file" is provided to `--log_handlers`, entries will be written
to the file indicated in the path argument. By default, the "plain"
format will be used:
```bash
invokeai-web --log_handlers file=/var/log/invokeai.log
```
## Syslog logging
When "syslog" is requested, entries will be sent to the syslog
system. There are a variety of ways to control where the log message
is sent:
* Send to the local machine using the `/dev/log` socket:
```
invokeai-web --log_handlers syslog=/dev/log
```
* Send to the local machine using a UDP message:
```
invokeai-web --log_handlers syslog=localhost
```
* Send to the local machine using a UDP message on a nonstandard
port:
```
invokeai-web --log_handlers syslog=localhost:512
```
* Send to a remote machine named "loghost" on the local LAN using
facility LOG_USER and UDP packets:
```
invokeai-web --log_handlers syslog=loghost,facility=LOG_USER,socktype=SOCK_DGRAM
```
This can be abbreviated `syslog=loghost`, as LOG_USER and SOCK_DGRAM
are defaults.
* Send to a remote machine named "loghost" using the facility LOCAL0
and using a TCP socket:
```
invokeai-web --log_handlers syslog=loghost,facility=LOG_LOCAL0,socktype=SOCK_STREAM
```
If no arguments are specified (just a bare "syslog"), then the logging
system will look for a UNIX socket named `/dev/log`, and if not found
try to send a UDP message to `localhost`. The Macintosh OS used to
support logging to a socket named `/var/run/syslog`, but this feature
has since been disabled.
## Web logging
If you have access to a web server that is configured to log messages
when a particular URL is requested, you can log using the "http"
method:
```
invokeai-web --log_handlers http=http://my.server/path/to/logger,method=POST
```
The optional [,method=] part can be used to specify whether the URL
accepts GET (default) or POST messages.
Currently password authentication and SSL are not supported.
## Using the configuration file
You can set and forget logging options by adding a "Logging" section
to `invokeai.yaml`:
```
InvokeAI:
[... other settings...]
Logging:
log_handlers:
- console
- syslog=/dev/log
log_level: info
log_format: color
```


@ -1,100 +0,0 @@
---
title: Low-Rank Adaptation (LoRA) Models
---
# :material-library-shelves: Using Low-Rank Adaptation (LoRA) Models
## Introduction
LoRA is a technique for fine-tuning Stable Diffusion models using much
less time and memory than traditional training techniques. The
resulting model files are much smaller than full model files, and can
be used to generate specialized styles and subjects.
LoRAs are built on top of Stable Diffusion v1.x or 2.x checkpoint or
diffusers models. To load a LoRA, you include its name in the text
prompt using a simple syntax described below. While you will generally
get the best results when you use the same model the LoRA was trained
on, they will work to a greater or lesser extent with other models.
The major caveat is that a LoRA built on top of a SD v1.x model cannot
be used with a v2.x model, and vice-versa. If you try, you will get an
error! You may refer to multiple LoRAs in your prompt.
When you apply a LoRA in a prompt you can specify a weight. The higher
the weight, the more influence it will have on the image. Useful
ranges for weights are usually in the 0.0 to 1.0 range (with ranges
between 0.5 and 1.0 being most typical). However you can specify a
higher weight if you wish. Like models, each LoRA has a slightly
different useful weight range and will interact with other generation
parameters such as the CFG, step count and sampler. The author of the
LoRA will often provide guidance on the best settings, but feel free
to experiment. Be aware that it often helps to reduce the CFG value
when using LoRAs.
## Installing LoRAs
This is very easy! Download a LoRA model file from your favorite site
(e.g. [CIVITAI](https://civitai.com)) and place it in the `loras`
folder in the InvokeAI root directory (usually `~/invokeai/loras` on
Linux/Macintosh machines, and `C:\Users\your-name\invokeai\loras` on
Windows systems). If the `loras` folder does not already exist, just
create it. The vast majority of LoRA models use the Kohya file format,
which is a type of `.safetensors` file.
You may change where InvokeAI looks for the `loras` folder by passing the
`--lora_directory` option to the `invoke.sh`/`invoke.bat` launcher, or
by placing the option in `invokeai.init`. For example:
```
invoke.bat --lora_directory=C:\Users\your-name\SDModels\lora
```
## Using a LoRA in your prompt
To activate a LoRA use the syntax `withLora(my-lora-name,weight)`
somewhere in the text of the prompt. The position doesn't matter; use
whatever is most comfortable for you.
For example, if you have a LoRA named `parchment_people.safetensors`
in your `loras` directory, you can load it with a weight of 0.9 with a
prompt like this one:
```
family sitting at dinner table withLora(parchment_people,0.9)
```
Add additional `withLora()` phrases to load more LoRAs.
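For example, two LoRAs with different weights might be combined like this (the second LoRA name here is purely hypothetical):

```
family sitting at dinner table withLora(parchment_people,0.9) withLora(film_grain,0.5)
```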
You may omit the weight entirely to default to a weight of 1.0:
```
family sitting at dinner table withLora(parchment_people)
```
If you watch the console as your prompt executes, you will see
messages relating to the loading and execution of the LoRA. If things
don't work as expected, note down the console messages and report them
on the InvokeAI Issues pages or Discord channel.
That's pretty much all you need to know!
## Training Kohya Models
InvokeAI cannot currently train LoRA models, but it can load and use
existing LoRA ones to generate images. While there are several LoRA
model file formats, the predominant one is ["Kohya"
format](https://github.com/kohya-ss/sd-scripts), written by [Kohya
S.](https://github.com/kohya-ss). InvokeAI provides support for this
format. For creating your own Kohya models, we recommend the Windows
GUI written by former InvokeAI-team member
[bmaltais](https://github.com/bmaltais), which can be found at
[kohya_ss](https://github.com/bmaltais/kohya_ss).
We can also recommend the [HuggingFace DreamBooth Training
UI](https://huggingface.co/spaces/lora-library/LoRA-DreamBooth-Training-UI),
a paid service that supports both Textual Inversion and LoRA training.
You may also be interested in [Textual
Inversion](TEXTUAL_INVERSION.md) training, which is supported by
InvokeAI as a text console and command-line tool.


@ -71,6 +71,3 @@ under the selected name and register it with InvokeAI.
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".
## Caveats
This is a new script and may contain bugs.

docs/features/NODES.md (new file, 208 lines)

@ -0,0 +1,208 @@
# Nodes Editor (Experimental)
🚨
*The node editor is experimental. We've made it accessible because we use it to develop the application, but we have not addressed the many known rough edges. It's very easy to shoot yourself in the foot, and we cannot offer support for it until it sees full release (ETA v3.1). Everything is subject to change without warning.*
🚨
The nodes editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. The node processing flow is usually done from left (inputs) to right (outputs), though linearity can become abstracted the more complex the node graph becomes. Node inputs and outputs are connected by dragging connectors from node to node.
To better understand how nodes are used, think of how an electric power bar works. It takes in one input (electricity from a wall outlet) and passes it to multiple devices through multiple outputs. Similarly, a node could have multiple inputs and outputs functioning at the same (or different) time, but all node outputs pass information onward like a power bar passes electricity. Not all outputs are compatible with all inputs, however - Each node has different constraints on how it is expecting to input/output information. In general, node outputs are colour-coded to match compatible inputs of other nodes.
## Anatomy of a Node
Individual nodes are made up of the following:
- Inputs: Edge points on the left side of the node window where you connect outputs from other nodes.
- Outputs: Edge points on the right side of the node window where you connect to inputs on other nodes.
- Options: Various options which are either manually configured, or overridden by connecting an output from another node to the input.
## Diffusion Overview
Taking the time to understand the diffusion process will help you to understand how to set up your nodes in the nodes editor.
There are two main spaces Stable Diffusion works in: image space and latent space.
Image space represents images in pixel form that you look at. Latent space represents compressed inputs. It's in latent space that Stable Diffusion processes images. A VAE (Variational Auto Encoder) is responsible for compressing and encoding inputs into latent space, as well as decoding outputs back into image space.
When you generate an image using text-to-image, multiple steps occur in latent space:
1. Random noise is generated at the chosen height and width. The noise's characteristics are dictated by the chosen (or not chosen) seed. This noise tensor is passed into latent space. We'll call this noise A.
1. Using a model's U-Net, a noise predictor examines noise A, and the words tokenized by CLIP from your prompt (conditioning). It generates its own noise tensor to predict what the final image might look like in latent space. We'll call this noise B.
1. Noise B is subtracted from noise A in an attempt to create a final latent image indicative of the inputs. This step is repeated for the number of sampler steps chosen.
1. The VAE decodes the final latent image from latent space into image space.
image-to-image is a similar process, with only step 1 being different:
1. The input image is encoded from image space into latent space by the VAE. Noise is then added to the input latent image. Denoising Strength dictates how much noise is added, 0 being none, and 1 being all-encompassing. We'll call this noise A. The process is then the same as steps 2-4 in the text-to-image explanation above.
Furthermore, a model provides the CLIP prompt tokenizer, the VAE, and a U-Net (where noise prediction occurs given a prompt and initial noise tensor).
A noise scheduler (e.g. DPM++ 2M Karras) schedules the subtraction of noise from the latent image across the sampler steps chosen (step 3 above). Less noise is usually subtracted at higher sampler steps.
## Node Types (Base Nodes)
| Node <img width=160 align="right"> | Function |
| ---------------------------------- | --------------------------------------------------------------------------------------|
| Add | Adds two numbers |
| CannyImageProcessor | Canny edge detection for ControlNet |
| ClipSkip | Skip layers in clip text_encoder model |
| Collect | Collects values into a collection |
| Prompt (Compel) | Parses a prompt into conditioning using the compel package |
| ContentShuffleImageProcessor | Applies content shuffle processing to image |
| ControlNet | Collects ControlNet info to pass to other nodes |
| CvInpaint | Simple inpaint using opencv |
| Divide | Divides two numbers |
| DynamicPrompt | Parses a prompt using adieyal/dynamic prompt's random or combinatorial generator |
| FloatLinearRange | Creates a range |
| HedImageProcessor | Applies HED edge detection to image |
| ImageBlur | Blurs an image |
| ImageChannel | Gets a channel from an image |
| ImageCollection | Load a collection of images and provide it as output |
| ImageConvert | Converts an image to a different mode |
| ImageCrop | Crops an image to a specified box. The box can be outside of the image. |
| ImageInverseLerp | Inverse linear interpolation of all pixels of an image |
| ImageLerp | Linear interpolation of all pixels of an image |
| ImageMultiply | Multiplies two images together using `PIL.ImageChops.Multiply()` |
| ImageNSFWBlurInvocation | Detects and blurs images that may contain sexually explicit content |
| ImagePaste | Pastes an image into another image |
| ImageProcessor | Base class for invocations that reprocess images for ControlNet |
| ImageResize | Resizes an image to specific dimensions |
| ImageScale | Scales an image by a factor |
| ImageToLatents | Encodes an image into latents |
| ImageWatermarkInvocation | Adds an invisible watermark to images |
| InfillColor | Infills transparent areas of an image with a solid color |
| InfillPatchMatch | Infills transparent areas of an image using the PatchMatch algorithm |
| InfillTile | Infills transparent areas of an image with tiles of the image |
| Inpaint | Generates an image using inpaint |
| Iterate | Iterates over a list of items |
| LatentsToImage | Generates an image from latents |
| LatentsToLatents | Generates latents using latents as base image |
| LeresImageProcessor | Applies leres processing to image |
| LineartAnimeImageProcessor | Applies line art anime processing to image |
| LineartImageProcessor | Applies line art processing to image |
| LoadImage | Load an image and provide it as output |
| Lora Loader | Apply selected lora to unet and text_encoder |
| Model Loader | Loads a main model, outputting its submodels |
| MaskFromAlpha | Extracts the alpha channel of an image as a mask |
| MediapipeFaceProcessor | Applies mediapipe face processing to image |
| MidasDepthImageProcessor | Applies Midas depth processing to image |
| MlsdImageProcessor | Applies MLSD processing to image |
| Multiply | Multiplies two numbers |
| Noise | Generates latent noise |
| NormalbaeImageProcessor | Applies NormalBAE processing to image |
| OpenposeImageProcessor | Applies Openpose processing to image |
| ParamFloat | A float parameter |
| ParamInt | An integer parameter |
| PidiImageProcessor | Applies PIDI processing to an image |
| Progress Image | Displays the progress image in the Node Editor |
| RandomInt | Outputs a single random integer |
| RandomRange | Creates a collection of random numbers |
| Range | Creates a range of numbers from start to stop with step |
| RangeOfSize | Creates a range from start to start + size with step |
| ResizeLatents | Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8. |
| RestoreFace | Restores faces in the image |
| ScaleLatents | Scales latents by a given factor |
| SegmentAnythingProcessor | Applies segment anything processing to image |
| ShowImage | Displays a provided image, and passes it forward in the pipeline |
| StepParamEasing | Experimental per-step parameter for easing for denoising steps |
| Subtract | Subtracts two numbers |
| TextToLatents | Generates latents from conditionings |
| TileResampleProcessor | Base class for invocations that preprocess images for ControlNet |
| Upscale | Upscales an image |
| VAE Loader | Loads a VAE model, outputting a VaeLoaderOutput |
| ZoeDepthImageProcessor | Applies Zoe depth processing to image |
## Node Grouping Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
### Noise
As described, an initial noise tensor is necessary for the latent diffusion process. As a result, all non-image *ToLatents nodes require a noise node input.
![groupsnoise](../assets/nodes/groupsnoise.png)
### Conditioning
As described, conditioning is necessary for the latent diffusion process, whether empty or not. As a result, all non-image *ToLatents nodes require positive and negative conditioning inputs. Conditioning is reliant on a CLIP tokenizer provided by the Model Loader node.
![groupsconditioning](../assets/nodes/groupsconditioning.png)
### Image Space & VAE
The ImageToLatents node doesn't require a noise node input, but requires a VAE input to convert the image from image space into latent space. In reverse, the LatentsToImage node requires a VAE input to convert from latent space back into image space.
![groupsimgvae](../assets/nodes/groupsimgvae.png)
### Defined & Random Seeds
It is common to want to use both the same seed (for continuity) and random seeds (for variance). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.
![groupsrandseed](../assets/nodes/groupsrandseed.png)
### Control
Control means to guide the diffusion process to adhere to a defined input or structure. Control can be provided as input to non-image *ToLatents nodes from ControlNet nodes. ControlNet nodes usually require an image processor which converts an input image for use with ControlNet.
![groupscontrol](../assets/nodes/groupscontrol.png)
### LoRA
The Lora Loader node lets you load a LoRA (say that ten times fast) and pass it as output to both the Prompt (Compel) and non-image *ToLatents nodes. A model's CLIP tokenizer is passed through the LoRA into Prompt (Compel), where it affects conditioning. A model's U-Net is also passed through the LoRA into a non-image *ToLatents node, where it affects noise prediction.
![groupslora](../assets/nodes/groupslora.png)
### Scaling
Use the ImageScale, ScaleLatents, and Upscale nodes to upscale images and/or latent images. The chosen method differs across contexts. However, be aware that latents are already noisy and compressed at their original resolution; scaling an image could produce more detailed results.
![groupsallscale](../assets/nodes/groupsallscale.png)
### Iteration + Multiple Images as Input
Iteration is a common concept in any processing, and means to repeat a process with given input. In nodes, you're able to use the Iterate node to iterate through collections usually gathered by the Collect node. The Iterate node has many potential uses, from processing a collection of images one after another, to varying seeds across multiple image generations and more. This screenshot demonstrates how to collect several images and pass them out one at a time.
![groupsiterate](../assets/nodes/groupsiterate.png)
### Multiple Image Generation + Random Seeds
Multiple image generation in the node editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection.
To control seeds across generations takes some care. The first row in the screenshot will generate multiple images with different seeds, but using the same RandomRange parameters across invocations will result in the same group of random seeds being used across the images, producing repeatable results. In the second row, adding the RandomInt node as input to RandomRange's 'Seed' edge point will ensure that seeds are varied across all images across invocations, producing varied results.
![groupsmultigenseeding](../assets/nodes/groupsmultigenseeding.png)
## Examples
With our knowledge of node grouping and the diffusion process, let's break down some basic graphs in the nodes editor. Note that a node's options can be overridden by inputs from other nodes. These examples aren't strict rules to follow and only demonstrate some basic configurations.
### Basic text-to-image Node Graph
![nodest2i](../assets/nodes/nodest2i.png)
- Model Loader: A necessity to generating images (as we've read above). We choose our model from the dropdown. It outputs a U-Net, CLIP tokenizer, and VAE.
- Prompt (Compel): Another necessity. Two prompt nodes are created. One will output positive conditioning (what you want, "dog"), one will output negative (what you don't want, "cat"). They both input the CLIP tokenizer that the Model Loader node outputs.
- Noise: Consider this noise A from step one of the text-to-image explanation above. Choose a seed number, width, and height.
- TextToLatents: This node takes many inputs for converting and processing text & noise from image space into latent space, hence the name TextTo**Latents**. In this setup, it inputs positive and negative conditioning from the prompt nodes for processing (step 2 above). It inputs noise from the noise node for processing (steps 2 & 3 above). Lastly, it inputs a U-Net from the Model Loader node for processing (step 2 above). It outputs latents for use in the next LatentsToImage node. Choose number of sampler steps, CFG scale, and scheduler.
- LatentsToImage: This node takes in processed latents from the TextToLatents node, and the model's VAE from the Model Loader node which is responsible for decoding latents back into the image space, hence the name LatentsTo**Image**. This node is the last stop, and once the image is decoded, it is saved to the gallery.
### Basic image-to-image Node Graph
![nodesi2i](../assets/nodes/nodesi2i.png)
- Model Loader: Choose a model from the dropdown.
- Prompt (Compel): Two prompt nodes. One positive ("dog"), one negative ("cat"). Same CLIP inputs from the Model Loader node as before.
- ImageToLatents: Upload a source image directly in the node window, via drag'n'drop from the gallery, or passed in as input. The ImageToLatents node inputs the VAE from the Model Loader node to encode the chosen image from image space into latent space, hence the name ImageTo**Latents**. It outputs latents for use in the next LatentsToLatents node. It also outputs the source image's width and height for use in the next Noise node if the final image is to be the same dimensions as the source image.
- Noise: A noise tensor is created with the width and height of the source image, and connected to the next LatentsToLatents node. Notice the width and height fields are overridden by the input from the ImageToLatents width and height outputs.
- LatentsToLatents: The inputs and options are nearly identical to TextToLatents, except that LatentsToLatents also takes latents as an input. Considering our source image is already converted to latents in the last ImageToLatents node, and text + noise are no longer the only inputs to process, we use the LatentsToLatents node.
- LatentsToImage: Like previously, the LatentsToImage node will use the VAE from the Model Loader as input to decode the latents from LatentsToLatents into image space, and save it to the gallery.
### Basic ControlNet Node Graph
![nodescontrol](../assets/nodes/nodescontrol.png)
- Model Loader
- Prompt (Compel)
- Noise: Width and height of the CannyImageProcessor ControlNet image is passed in to set the dimensions of the noise passed to TextToLatents.
- CannyImageProcessor: The CannyImageProcessor node is used to process the source image being used as a ControlNet. Each ControlNet processor node applies control in different ways, and has some different options to configure. Width and height are passed to noise, as mentioned. The processed ControlNet image is output to the ControlNet node.
- ControlNet: Select the type of control model. In this case, canny is chosen as the CannyImageProcessor was used to generate the ControlNet image. Configure the control node options, and pass the control output to TextToLatents.
- TextToLatents: Similar to the basic text-to-image example, except ControlNet is passed to the control input edge point.
- LatentsToImage


@ -1,89 +0,0 @@
---
title: The NSFW Checker
---
# :material-image-off: NSFW Checker
## The NSFW ("Safety") Checker
The Stable Diffusion image generation models will produce sexual
imagery if deliberately prompted, and will occasionally produce such
images when this is not intended. Such images are colloquially known
as "Not Safe for Work" (NSFW). This behavior is due to the nature of
the training set that Stable Diffusion was trained on, which culled
millions of "aesthetic" images from the Internet.
You may not wish to be exposed to these images, and in some
jurisdictions it may be illegal to publicly distribute such imagery,
including mounting a publicly-available server that provides
unfiltered images to the public. Furthermore, the [Stable Diffusion
weights
License](https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE-ModelWeights.txt)
forbids the model from being used to "exploit any of the
vulnerabilities of a specific group of persons."
For these reasons Stable Diffusion offers a "safety checker," a
machine learning model trained to recognize potentially disturbing
imagery. When a potentially NSFW image is detected, the checker will
blur the image and paste a warning icon on top. The checker can be
turned on and off on the command line using `--nsfw_checker` and
`--no-nsfw_checker`.
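For a single session, the stored default can be overridden on the command line (this sketch assumes you launch the CLI script directly; adapt it to however you normally start InvokeAI):

```bash
# Run once with the checker off, regardless of the stored default:
invoke.py --no-nsfw_checker
```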
At installation time, InvokeAI will ask whether the checker should be
activated by default (neither argument given on the command line). The
response is stored in the InvokeAI initialization file (usually
`.invokeai` in your home directory). You can change the default at any
time by opening this file in a text editor and commenting or
uncommenting the line `--nsfw_checker`.
## Caveats
There are a number of caveats that you need to be aware of.
### Accuracy
The checker is [not perfect](https://arxiv.org/abs/2210.04610). It will
occasionally flag innocuous images (false positives), and will
frequently miss violent and gory imagery (false negatives). It rarely
fails to flag sexual imagery, but this has been known to happen. For
these reasons, the InvokeAI team prefers to refer to the software as a
"NSFW Checker" rather than "safety checker."
### Memory Usage and Performance
The NSFW checker consumes an additional 1.2G of GPU VRAM on top of the
3.4G of VRAM used by Stable Diffusion v1.5 (this is with
half-precision arithmetic). This means that the checker will not run
successfully on GPU cards with less than 6GB VRAM, and will reduce the
size of the images that you can produce.
The checker also introduces a slight performance penalty. Images will
take ~1 second longer to generate when the checker is
activated. Generally this is not noticeable.
### Intermediate Images in the Web UI
The checker only operates on the final image produced by the Stable
Diffusion algorithm. If you are using the Web UI and have enabled the
display of intermediate images, you will briefly be exposed to a
low-resolution (mosaicized) version of the final image before it is
flagged by the checker and replaced by a fully blurred version. You
are encouraged to turn **off** intermediate image rendering when you
are using the checker. Future versions of InvokeAI will apply
additional blurring to intermediate images when the checker is active.
### Watermarking
InvokeAI does not apply any sort of watermark to images it
generates. However, it does write metadata into the PNG data area,
including the prompt used to generate the image and relevant parameter
settings. These fields can be examined using the `sd-metadata.py`
script that comes with the InvokeAI package.
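A minimal sketch of examining that metadata (the `scripts/` path and the image filename here are assumptions for illustration):

```bash
python scripts/sd-metadata.py outputs/img-samples/000001.123456789.png
```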
Note that several other Stable Diffusion distributions offer
wavelet-based "invisible" watermarking. We have experimented with the
library used to generate these watermarks and have reached the
conclusion that while the watermarking library may be adding
watermarks to PNG images, the currently available version is unable to
retrieve them successfully. If and when a functioning version of the
library becomes available, we will offer this feature as well.


@ -16,48 +16,24 @@ Output Example:
---
## **Seamless Tiling**

The seamless tiling mode causes generated images to seamlessly tile with themselves. To use it, add the `--seamless` option when starting the script, which will cause all generated images to tile, or add it to an individual `invoke>` prompt as shown here:

```bash
invoke> "pond garden with lotus by claude monet" --seamless -s100 -n4
```

By default this will tile on both the X and Y axes. However, you can also specify specific axes to tile on with `--seamless_axes`.
Possible values are `x`, `y`, and `x,y`:

```bash
invoke> "pond garden with lotus by claude monet" --seamless --seamless_axes=x -s100 -n4
```

## **Invisible Watermark**

In keeping with the principles for responsible AI generation, and to
help AI researchers avoid synthetic images contaminating their
training sets, InvokeAI adds an invisible watermark to each of the
final images it generates. The watermark consists of the text
"InvokeAI" and can be viewed using the
[invisible-watermarks](https://github.com/ShieldMnt/invisible-watermark)
tool.

Watermarking is controlled using the `invisible-watermark` setting in
`invokeai.yaml`. To turn it off, add the following line under the `Features`
category.

```
invisible_watermark: false
```
---
## **Shortcuts: Reusing Seeds**
Since it is so common to reuse seeds while refining a prompt, there is now a shortcut as of version
1.11. Provide a `-S` (or `--seed`) switch of `-1` to use the seed of the most recent image
generated. If you produced multiple images with the `-n` switch, then you can go back further
using `-2`, `-3`, etc. up to the first image generated by the previous command. Sorry, but you can't go
back further than one command.
Here's an example of using this to do a quick refinement. It also illustrates using the new `-G`
switch to turn on upscaling and face enhancement (see previous section):
```bash
invoke> a cute child playing hopscotch -G0.5
[...]
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304
# I wonder what it will look like if I bump up the steps and set facial enhancement to full strength?
invoke> a cute child playing hopscotch -G1.0 -s100 -S -1
reusing previous seed 3498014304
[...]
outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.0 -s100 -W512 -H512 -C7.5 -mk_lms -S3498014304
```
---
## **Weighted Prompts**
@ -66,73 +42,10 @@ priority to them, by adding `:<percent>` to the end of the section you wish to u
example consider this prompt:
```bash
(tabby cat):0.25 (white duck):0.75 hybrid
```
This will tell the sampler to invest 25% of its effort on the tabby cat aspect of the image and 75%
on the white duck aspect (surprisingly, this example actually works). The prompt weights can use any
combination of integers and floating point numbers, and they do not need to add up to 1.
---
## **Filename Format**
The argument `--fnformat` allows you to specify the filename of the
image. Supported wildcards are all arguments that can be set, such as
`perlin`, `seed`, `threshold`, `height`, `width`, `gfpgan_strength`,
`sampler_name`, `steps`, `model`, `upscale`, `prompt`, `cfg_scale`,
`prefix`.
The following prompt
```bash
invoke> a red car --steps 25 -C 9.8 --perlin 0.1 --fnformat {prompt}_steps.{steps}_cfg.{cfg_scale}_perlin.{perlin}.png
```
generates a file with the name: `outputs/img-samples/a red car_steps.25_cfg.9.8_perlin.0.1.png`
---
## **Thresholding and Perlin Noise Initialization Options**
Two new options are the thresholding (`--threshold`) and the perlin noise initialization (`--perlin`) options. Thresholding limits the range of the latent values during optimization, which helps combat oversaturation with higher CFG scale values. Perlin noise initialization starts with a percentage (a value ranging from 0 to 1) of perlin noise mixed into the initial noise. Both features allow for more variations and options in the course of generating images.
For better intuition into what these options do in practice:
![here is a graphic demonstrating them both](../assets/truncation_comparison.jpg)
In generating this graphic, perlin noise at initialization was programmatically varied going across the diagram through the values 0.0, 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0; and the threshold was varied going down through
0, 1, 2, 3, 4, 5, 10, 20, 100. The other options are fixed, so the initial prompt is as follows (no thresholding or perlin noise):
```bash
invoke> "a portrait of a beautiful young lady" -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 0 --perlin 0
```
Here's an example of another prompt used when setting the threshold to 5 and perlin noise to 0.2:
```bash
invoke> "a portrait of a beautiful young lady" -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 5 --perlin 0.2
```
!!! note

    Currently the thresholding feature is only implemented for the k-diffusion style samplers, and empirically appears to work best with `k_euler_a` and `k_dpm_2_a`. Using `0` disables thresholding. Using `0` for perlin noise disables using perlin noise for initialization. Finally, using `1` for perlin noise uses only perlin noise for initialization.
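To make the two mechanisms concrete, here is a conceptual sketch of the behavior described above. It is an assumption for illustration, not the actual sampler code, and `perlin_noise` is a hypothetical stand-in for a real Perlin generator:

```python
import torch

def make_initial_noise(shape, p: float, perlin_noise) -> torch.Tensor:
    """Mix a fraction `p` (the --perlin value) of perlin noise into the
    Gaussian noise used to initialize generation."""
    gaussian = torch.randn(shape)
    if p == 0.0:
        return gaussian                       # perlin initialization disabled
    if p == 1.0:
        return perlin_noise(shape)            # pure perlin initialization
    return (1 - p) * gaussian + p * perlin_noise(shape)

def threshold_latents(latents: torch.Tensor, t: float) -> torch.Tensor:
    """Clamp latent values to [-t, t] (the --threshold value) to combat
    oversaturation at high CFG scales; 0 disables thresholding."""
    if t == 0.0:
        return latents
    return latents.clamp(-t, t)
```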
---
## **Simplified API**
For programmers who wish to incorporate stable-diffusion into other products, this repository
includes a simplified API for text to image generation, which lets you create images from a prompt
in just three lines of code:
```python
from ldm.generate import Generate

# Create a generator object, then render a prompt straight to disk.
g = Generate()
outputs = g.txt2img("a unicorn in manhattan")
```
Outputs is a list of lists in the format `[[filename1, seed1], [filename2, seed2], ...]`.
Please see the documentation in ldm/generate.py for more information.
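As a small usage sketch, assuming the list-of-`[filename, seed]` pairs structure described above, you could iterate over the results like this:

```python
# Each entry pairs the saved image's path with the seed that produced it.
for filename, seed in outputs:
    print(f"{filename} was generated with seed {seed}")
```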
---

@@ -4,160 +4,38 @@ title: Postprocessing
# :material-image-edit: Postprocessing
## Intro

This section details the ability to restore faces and upscale images.

Face restoration and upscaling can be applied at the time you generate the
images, or at any later time against a previously-generated PNG file, using the
[!fix](#fixing-previously-generated-images) command.
[Outpainting and outcropping](OUTPAINTING.md) can only be applied after the
fact.
## Face Fixing
The default face restoration module is GFPGAN. The default upscale is
Real-ESRGAN. For an alternative face restoration module, see
[CodeFormer Support](#codeformer-support) below.
As of InvokeAI 3.0, the easiest way to improve faces created during image generation is through the Inpainting functionality of the Unified Canvas. Simply add the image containing the faces that you would like to improve to the canvas, mask the face to be improved, and run the invocation. For best results, make sure to use an inpainting-specific model; these are usually identified by the "-inpainting" term in the model name.
As of version 1.14, environment.yaml will install the Real-ESRGAN package into
the standard install location for python packages, and will put GFPGAN into a
subdirectory of "src" in the InvokeAI directory. Upscaling with Real-ESRGAN
should "just work" without further intervention. Simply pass the `--upscale`
(`-U`) option on the `invoke>` command line, or indicate the desired scale on
the popup in the Web GUI.
**GFPGAN** requires a series of downloadable model files to work. These are
loaded when you run `invokeai-configure`. If GFPGAN is failing with an
error, please run the following from the InvokeAI directory:

```bash
invokeai-configure
```

If you do not run this script in advance, the GFPGAN module will attempt to
download the model files the first time you try to perform facial
reconstruction.

## Upscaling

Open the upscaling dialog by clicking on the "expand" icon located
above the image display area in the Web UI:

<figure markdown>
![upscale1](../assets/features/upscale-dialog.png)
</figure>
The default upscaling option is Real-ESRGAN x2 Plus, which will scale your image by a factor of two. This means upscaling a 512x512 image will result in a new 1024x1024 image. Other options are the x4 upscalers, which will scale your image by a factor of 4.

### Upscaling
`-U : <upscaling_factor> <upscaling_strength>`

The upscaling prompt argument takes two values. The first value is a scaling
factor and should be set to either `2` or `4` only. This will scale the
image 2x or 4x respectively, using different models.

You can set the scaling strength between `0` and `1.0` to control the intensity of
the scaling. This is handy because AI upscalers generally tend to smooth
out texture details. If you wish to retain some of those for natural-looking
results, we recommend using values between `0.5` and `0.8`.

If you do not explicitly specify an upscaling_strength, it will default to 0.75.
### Face Restoration

`-G : <facetool_strength>`

This prompt argument controls the strength of the face restoration that is being
applied. Similar to upscaling, values between `0.5` and `0.8` are recommended.

You can use either one or both without any conflicts. In cases where you use
both, the image will first be upscaled and then the face restoration process
will be executed to ensure you get the highest quality facial features.
`--save_orig`

When you use either `-U` or `-G`, the final result you get is upscaled or face
modified. If you want to save the original Stable Diffusion generation, you can
use the `--save_orig` prompt argument to save the original unaffected version
too.
### Example Usage
```bash
invoke> "superman dancing with a panda bear" -U 2 0.6 -G 0.4
```
This also works with img2img:
```bash
invoke> "a man wearing a pineapple hat" -I path/to/your/file.png -U 2 0.5 -G 0.6
```
!!! note

    GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid crashes and memory overloads
    during the Stable Diffusion process, these effects are applied after Stable Diffusion has completed
    its work.

    In single image generations, you will see the output right away, but when you are using multiple
    iterations, the images will first be generated and then upscaled and face restored after that
    process is complete. While the image generation is taking place, you will still be able to preview
    the base images.
If you wish to stop during the image generation but want to upscale or face
restore a particular generated image, pass it again with the same prompt and
generated seed along with the `-U` and `-G` prompt arguments to perform those
actions.
## CodeFormer Support
This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).
In order to set up CodeFormer, you need to download the models just as with
GFPGAN. You can do this either by running `invokeai-configure` or by manually
downloading the
[model file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to the `ldm/invoke/restoration/codeformer/weights` folder.

You can use the `-ft` prompt argument to swap between CodeFormer and the default
GFPGAN. The above-mentioned `-G` prompt argument allows you to control the
strength of the restoration effect.
### CodeFormer Usage
The following command will perform face restoration with CodeFormer instead of
the default GFPGAN.
`<prompt> -G 0.8 -ft codeformer`
### Other Options
- `-cf` - CodeFormer Fidelity takes values between `0` and `1`. `0` produces
  high-quality results but low accuracy, and `1` produces lower-quality results but
  higher accuracy to your original face.
The following command will perform face restoration with CodeFormer. CodeFormer
will output a result that closely matches the input face.
`<prompt> -G 1.0 -ft codeformer -cf 0.9`
The following command will perform face restoration with CodeFormer. CodeFormer
will output a result that is the best restoration possible. This may deviate
slightly from the original face. This is an excellent option to use in
situations when there is very little facial data to work with.
`<prompt> -G 1.0 -ft codeformer -cf 0.1`
## Fixing Previously-Generated Images
It is easy to apply face restoration and/or upscaling to any
previously-generated file. Just use the syntax
`!fix path/to/file.png <options>`. For example, to apply GFPGAN at strength 0.8
and upscale 2X for a file named `./outputs/img-samples/000044.2945021133.png`,
just run:
```bash
invoke> !fix ./outputs/img-samples/000044.2945021133.png -G 0.8 -U 2
```
A new file named `000044.2945021133.fixed.png` will be created in the output
directory. Note that the `!fix` command does not replace the original file,
unlike the behavior at generate time.
## How to disable
If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the invoke.py command line with the `--no_restore` and
`--no_upscale` options, respectively.

Some files were not shown because too many files have changed in this diff.