Commit Graph

7024 Commits

Author SHA1 Message Date
psychedelicious
2514af79a0 feat(ui): crude node outputs display
Resets on invoke. Nothing fancy for the UI yet, just simple text (for numbers and strings) or an image. For other output types, the output is shown as JSON.
2023-08-21 19:17:36 +10:00
psychedelicious
f952f8f685 feat(ui): add typegen customisation for invocation outputs
The `type` property is required on all of them, but because it is defined in pydantic as a Literal, it is not marked as required in the OpenAPI schema. It's easier to fix this by changing the generated types than by fiddling around with pydantic.
2023-08-21 19:17:36 +10:00
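For context on the Literal behaviour the commit above describes, here is a minimal pydantic v1 sketch (hypothetical model and field names, not InvokeAI's actual output class): a `type` field declared as a Literal with a default is present on every instance, yet the generated schema does not list it as required.

```python
from typing import Literal

from pydantic import BaseModel  # assuming pydantic v1, as used at the time


class ImageOutput(BaseModel):
    """Hypothetical invocation output model."""

    # Present on every instance, but the default means the schema
    # generator does not mark it as required.
    type: Literal["image_output"] = "image_output"
    image_name: str


print(ImageOutput.schema()["required"])  # -> ['image_name'] (no 'type')
```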
psychedelicious
484b572023 feat(nodes): primitives have `value` instead of `a` as field names 2023-08-21 19:17:36 +10:00
psychedelicious
cd9baf8092 fix(stats): fix InvocationStatsService types
- move docstrings to ABC
- `start_time: int` -> `start_time: float`
- remove class attribute assignments in `StatsContext`
- add `update_mem_stats()` to ABC
- add class attributes to ABC, because they are referenced in instances of the class. If they should not be on the ABC, some restructuring may be needed
2023-08-21 19:17:36 +10:00
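A minimal sketch of the ABC shape described in the commit above; the class name, attributes and signatures here are hypothetical, not the exact InvokeAI API.

```python
from abc import ABC, abstractmethod


class InvocationStatsServiceBase(ABC):
    """Docstrings live on the ABC; concrete services only implement behaviour."""

    # Class attributes are declared here because instances reference them.
    _stats: dict = {}

    @abstractmethod
    def update_mem_stats(self, ram_used: float, ram_changed: float) -> None:
        """Record RAM usage for the current invocation."""
        ...

    @abstractmethod
    def log_stats(self) -> None:
        """Log collected statistics; timestamps such as start_time are floats."""
        ...
```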
psychedelicious
81385d7d35 fix(stats): fix fail case when previous graph is invalid
When retrieving a graph, it is parsed through pydantic. It is possible that this graph is invalid, and an error is thrown.

Handle this by deleting the failed graph from the stats if this occurs.
2023-08-21 19:17:36 +10:00
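A minimal sketch of the guard described above, with hypothetical names for the graph store and stats container:

```python
from pydantic import ValidationError


def prune_stats_if_graph_invalid(stats: dict, graph_store, graph_id: str) -> None:
    """If the stored graph no longer parses through pydantic, drop its
    stats entry instead of letting the error propagate."""
    try:
        graph_store.get(graph_id)  # retrieval parses the graph via pydantic
    except ValidationError:
        stats.pop(graph_id, None)  # delete the failed graph's stats
```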
psychedelicious
519bcb38c1 feat(ui): node delete, copy, paste 2023-08-21 19:17:36 +10:00
psychedelicious
567d46b646 feat(ui): delete key works on workflow editor 2023-08-21 19:17:36 +10:00
psychedelicious
030802295b feat(ui): reset only specific nodes/cnet that use images
Previously, if an image was used in nodes and you deleted it, the entire node editor was reset. Same for controlnet.

Now it only resets the specific nodes or controlnets that used that image.
2023-08-21 19:17:36 +10:00
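The actual change lives in the TypeScript UI state, but the idea is easy to sketch in Python (hypothetical data shapes and field names): only the nodes and controlnets that reference the deleted image are touched.

```python
def reset_usages_of_deleted_image(nodes: dict, controlnets: dict, image_name: str) -> None:
    """Clear only the fields that point at the deleted image."""
    for node in nodes.values():
        inputs = node.get("inputs", {})
        for field, value in inputs.items():
            if value == image_name:
                inputs[field] = None  # reset just this field, not the whole editor
    for cnet in controlnets.values():
        if cnet.get("control_image") == image_name:
            cnet["control_image"] = None  # reset just this controlnet
```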
psychedelicious
a495c8c156 feat(ui): misc cleanups 2023-08-21 19:17:36 +10:00
psychedelicious
ae6db67068 feat(ui): add width to mantine selects 2023-08-21 19:17:36 +10:00
psychedelicious
3d84e7756a fix(nodes): fix field names 2023-08-21 19:17:36 +10:00
psychedelicious
98431b3de4 feat: add Scheduler as field type
- update node schemas
- add `UIType.Scheduler`
- add field type to schema parser, input components
2023-08-21 19:17:36 +10:00
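A rough sketch of what a dedicated Scheduler field type implies for the schema parser; the enum and mapping below are hypothetical, the real definitions live in the UI's type system.

```python
from enum import Enum


class UIType(str, Enum):
    """Hypothetical subset of the UI field-type enum."""

    Scheduler = "Scheduler"
    # ... other field types elided ...


# Hypothetical mapping used by a schema parser to pick an input component.
FIELD_COMPONENTS = {
    UIType.Scheduler: "SchedulerInputField",
}
```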
psychedelicious
210a3f9aa7 feat(ui): make mantine single selects *exactly* the same size as chakra ones 2023-08-21 19:17:36 +10:00
psychedelicious
9332ce639c fix(ui): fix node mouse interactions
Add "nodrag", "nowheel" and "nopan" class names in interactable elements, as neeeded. This fixes the mouse interactions and also makes the node draggable from anywhere without needing shift.

Also fixes ctrl/cmd multi-select to support deselecting.
2023-08-21 19:17:36 +10:00
psychedelicious
84cf8bdc08 feat(ui): field context menu, add/remove from linear ui 2023-08-21 19:17:36 +10:00
psychedelicious
64a6aa0293 fix(ui): move BoardContextMenu to use IAIContextMenu 2023-08-21 19:17:36 +10:00
psychedelicious
5ae14bffba fix(ui): clear exposedFields when resetting graph 2023-08-21 19:17:36 +10:00
psychedelicious
0909812c84 chore: black 2023-08-21 19:17:15 +10:00
psychedelicious
66c0aea9e7 fix(nodes): removed duplicate node 2023-08-21 19:17:15 +10:00
Damian Stewart
2bcded78e1 add BlendInvocation 2023-08-21 19:17:15 +10:00
Sergey Borisov
beb3e5aeb7 Report correctly to compel if we want to get pooled in future (affects blend computation) 2023-08-21 19:05:40 +10:00
Kevin Turner
0f1b975d0e
dep(diffusers): upgrade diffusers to 0.20 (#4311) 2023-08-18 18:22:11 -07:00
Kevin Turner
2fef478497 fix(convert_ckpt): Removed is_safetensors_available as safetensors is now a required dependency. 2023-08-18 11:05:59 -07:00
Kevin Turner
6df6abf6f6
Merge branch 'main' into dep/diffusers020 2023-08-18 11:02:52 -07:00
Martin Kristiansen
c96ae4c331 Reverting late imports to fix tests 2023-08-18 15:52:04 +10:00
Martin Kristiansen
ce465acf04 Fixed OnnxRuntimeModel import 2023-08-18 15:52:04 +10:00
Martin Kristiansen
33ee418d8c Fixing class level import 2023-08-18 15:52:04 +10:00
Martin Kristiansen
4f1008f31f Installing Flake8-pyproject in GHA workflow 2023-08-18 15:52:04 +10:00
Martin Kristiansen
6cc629e19d Adding flake8 to GHA and pre-commit. Fixing missing flake8 2023-08-18 15:52:04 +10:00
Martin Kristiansen
537ae2f901 Resolving merge conflicts for flake8 2023-08-18 15:52:04 +10:00
psychedelicious
f6db9da06c chore(ui): rename file to not cause madge to fail 2023-08-18 13:20:29 +10:00
psychedelicious
a17dbd7df6 feat(ui): improve error toast messages 2023-08-18 13:20:29 +10:00
Kevin Turner
98a4cc20a9
Merge branch 'main' into dep/diffusers020 2023-08-17 20:04:11 -07:00
Lincoln Stein
498d2ecc2b
allow symbolic links to be followed during autoimport (#4268)
## What type of PR is this? (check all applicable)

- [X] Feature
- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [X] Yes

## Description

Follow symbolic links when auto-importing from a directory. Previously,
links to files worked, but links to directories weren't traversed during
the scanning/import process.
2023-08-17 20:31:00 -04:00
Lincoln Stein
4ebe839d54
Merge branch 'main' into bugfix/enable-links-in-autoimport 2023-08-17 18:55:45 -04:00
Lincoln Stein
bc16b50302 add followlinks to all os.walk() calls 2023-08-17 18:54:18 -04:00
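The change is essentially the `followlinks` flag on `os.walk()`, which defaults to False and therefore skips symlinked directories; a minimal sketch (placeholder path):

```python
import os

autoimport_dir = "/path/to/autoimport"  # placeholder

# followlinks=True makes the scan descend into symlinked directories,
# so linked folders are picked up as well as linked files.
for root, _dirs, files in os.walk(autoimport_dir, followlinks=True):
    for name in files:
        print(os.path.join(root, name))
```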
Kevin Turner
4267132926 dep(diffusers): upgrade diffusers to 0.20
Removed `is_safetensors_available` as safetensors is now a required dependency of diffusers.
2023-08-17 13:42:29 -07:00
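Since safetensors is now a required dependency, the availability check can simply go away and the import becomes unconditional; a minimal sketch (hypothetical helper, not the actual convert_ckpt code):

```python
import safetensors.torch  # no is_safetensors_available() gate needed any more


def load_checkpoint_tensors(path: str) -> dict:
    """Load tensors from a .safetensors checkpoint."""
    return safetensors.torch.load_file(path)
```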
Lincoln Stein
832335998f
Update 'monkeypatched' controlnet class (#4269)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## [optional] Are there any post deployment tasks we need to perform?
Should be removed when added in diffusers
https://github.com/huggingface/diffusers/pull/4599
2023-08-17 15:49:54 -04:00
Lincoln Stein
1102c12084
Merge branch 'main' into fix/sdxl_controlnet 2023-08-17 15:40:51 -04:00
Lincoln Stein
b5cee7d20c blackify chore 2023-08-17 15:40:15 -04:00
Lincoln Stein
842eb4bb0a
Merge branch 'main' into bugfix/enable-links-in-autoimport 2023-08-17 07:20:26 -04:00
blessedcoolant
89b82b3dc4
(feat): Add Seam Painting to Canvas (1.x, 2.x & SDXL w/ Refiner) (#4292)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes
      
## Description

PR to add Seam Painting back to the Canvas.

## TODO Later

While the graph works as intended, it has become extremely large and
complex. I don't know if there's a simpler way to do this. Maybe there
is, but there are so many connections that visualizing the graph in my
head is extremely difficult. We might need to create some kind of
tooling for this, because it's only going to get crazier.

But it works for now.
2023-08-17 21:24:39 +12:00
blessedcoolant
8923201fdf
Merge branch 'main' into seam-painting 2023-08-17 21:21:44 +12:00
mickr777
226409107b Fix for Image Deletion issue 2023-08-17 17:18:11 +10:00
blessedcoolant
ae986bf873
Report RAM usage and RAM cache statistics after each generation (#4287)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes

     
## Have you updated all relevant documentation?
- [X] Yes


## Description

This PR enhances the logging of performance statistics to include RAM
and model cache information. After each generation, the following will
be logged. The new information follows TOTAL GRAPH EXECUTION TIME.

```
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> Graph stats: 2408dbec-50d0-44a3-bbc4-427037e3f7d4
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> Node                 Calls    Seconds VRAM Used
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> main_model_loader        1     0.004s     0.000G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> clip_skip                1     0.002s     0.000G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> compel                   2     2.706s     0.246G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> rand_int                 1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> range_of_size            1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> iterate                  1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> metadata_accumulator     1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> noise                    1     0.003s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> denoise_latents          1     2.429s     2.022G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> l2i                      1     1.020s     1.858G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> TOTAL GRAPH EXECUTION TIME:    6.171s
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM used by InvokeAI process: 4.50G (delta=0.10G)
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM used to load models: 1.99G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> VRAM in use: 0.303G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM cache statistics:
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Model cache hits: 2
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Model cache misses: 5
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Models cached: 5
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Models cleared from cache: 0
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Cache high water mark: 1.99/7.50G    
```

There may be a memory leak in InvokeAI. I'm seeing the process memory
usage increasing by about 100 MB with each generation as shown in the
example above.
2023-08-17 16:10:18 +12:00
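The new RAM numbers in the log above can be gathered with something along these lines (a minimal sketch using psutil; function and variable names are hypothetical, not the PR's actual code):

```python
import psutil


def process_ram_gb() -> float:
    """Resident set size of the current process, in GB."""
    return psutil.Process().memory_info().rss / 1e9


ram_before = process_ram_gb()
# ... run a generation here ...
ram_after = process_ram_gb()
print(f"RAM used by InvokeAI process: {ram_after:.2f}G (delta={ram_after - ram_before:.2f}G)")
```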
Lincoln Stein
daf75a1361 blackify 2023-08-16 21:47:29 -04:00
Lincoln Stein
fe4b2d53ed Merge branch 'feat/collect-more-stats' of github.com:invoke-ai/InvokeAI into feat/collect-more-stats 2023-08-16 21:39:29 -04:00
Lincoln Stein
c39f8b478b fix misplaced ram_used and ram_changed attributes 2023-08-16 21:39:18 -04:00
Lincoln Stein
1f82d8013e
Merge branch 'main' into feat/collect-more-stats 2023-08-16 18:51:17 -04:00
Lincoln Stein
e373bfca54 fix several broken links in the installation index 2023-08-16 17:54:39 -04:00