Commit Graph

7490 Commits

Author SHA1 Message Date
842eb4bb0a Merge branch 'main' into bugfix/enable-links-in-autoimport 2023-08-17 07:20:26 -04:00
89b82b3dc4 (feat): Add Seam Painting to Canvas (1.x, 2.x & SDXL w/ Refiner) (#4292)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes
      
## Description

PR to add Seam Painting back to the Canvas.

## TODO Later

While the graph works as intended, it has become extremely large and
complex. I don't know if there's a simpler way to do this. Maybe there
is, but there are so many connections that visualizing the graph in my
head is extremely difficult. We might need to create some kind of tooling
for this, because it's only going to get crazier.

But it works for now.
2023-08-17 21:24:39 +12:00
8923201fdf Merge branch 'main' into seam-painting 2023-08-17 21:21:44 +12:00
226409107b Fix for Image Deletion issue 2023-08-17 17:18:11 +10:00
ae986bf873 Report RAM usage and RAM cache statistics after each generation (#4287)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes

     
## Have you updated all relevant documentation?
- [X] Yes


## Description

This PR enhances the logging of performance statistics to include RAM
and model cache information. After each generation, the following will
be logged. The new information follows TOTAL GRAPH EXECUTION TIME.

```
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> Graph stats: 2408dbec-50d0-44a3-bbc4-427037e3f7d4
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> Node                 Calls    Seconds VRAM Used
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> main_model_loader        1     0.004s     0.000G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> clip_skip                1     0.002s     0.000G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> compel                   2     2.706s     0.246G
[2023-08-15 21:55:39,010]::[InvokeAI]::INFO --> rand_int                 1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> range_of_size            1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> iterate                  1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> metadata_accumulator     1     0.002s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> noise                    1     0.003s     0.244G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> denoise_latents          1     2.429s     2.022G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> l2i                      1     1.020s     1.858G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> TOTAL GRAPH EXECUTION TIME:    6.171s
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM used by InvokeAI process: 4.50G (delta=0.10G)
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM used to load models: 1.99G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> VRAM in use: 0.303G
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO --> RAM cache statistics:
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Model cache hits: 2
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Model cache misses: 5
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Models cached: 5
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Models cleared from cache: 0
[2023-08-15 21:55:39,011]::[InvokeAI]::INFO -->    Cache high water mark: 1.99/7.50G    
```

There may be a memory leak in InvokeAI. I'm seeing the process memory
usage increasing by about 100 MB with each generation as shown in the
example above.
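A minimal sketch, assuming `psutil` is available, of how the per-process RAM figure and per-generation delta shown above could be measured; the class and method names are illustrative, not the PR's actual implementation:

```
# Illustrative sketch only -- not InvokeAI's actual code.
import psutil

GB = 2**30  # bytes per gibibyte


class RAMUsageReporter:
    """Tracks process RSS so a delta can be reported after each generation."""

    def __init__(self) -> None:
        self._process = psutil.Process()
        self._last_ram_gb = self._process.memory_info().rss / GB

    def report(self, logger) -> None:
        ram_gb = self._process.memory_info().rss / GB
        delta_gb = ram_gb - self._last_ram_gb
        logger.info(f"RAM used by InvokeAI process: {ram_gb:4.2f}G (delta={delta_gb:+4.2f}G)")
        self._last_ram_gb = ram_gb
```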
2023-08-17 16:10:18 +12:00
59bc9ed399 fix(backend): handle BatchSessionNotFoundException in BatchManager._process()
The internal `BatchProcessStorage.get_session()` method throws when it finds nothing, but we were not catching any exceptions.

This caused an exception when the batch manager handled a `graph_execution_state_complete` event that did not originate from a batch.

Fixed by handling the exception.
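A self-contained sketch of the pattern described above; the storage class and exception here are stand-ins for the real InvokeAI types:

```
# Sketch only; the real classes live in InvokeAI's batch manager code.
class BatchSessionNotFoundException(Exception):
    """Raised by the storage layer when no session matches the query."""


class BatchProcessStorage:
    def get_session(self, graph_execution_state_id: str) -> dict:
        raise BatchSessionNotFoundException(graph_execution_state_id)


def _process(storage: BatchProcessStorage, graph_execution_state_id: str) -> None:
    try:
        session = storage.get_session(graph_execution_state_id)
    except BatchSessionNotFoundException:
        # The completed graph did not originate from a batch; nothing to do.
        return
    print(f"updating batch session {session}")  # placeholder for the real handling
```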
2023-08-17 13:58:11 +10:00
e62d5478fd fix(backend): fix sqlite cannot commit - no transaction is active
The `commit()` was called even if we hadn't executed anything
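One way to guard against it, sketched with an illustrative table and statement (not necessarily the PR's exact change): commit only when a write has actually happened.

```
# Illustrative sketch; table and SQL are made up.
import sqlite3
from typing import Optional


def save_session_state(conn: sqlite3.Connection, session_id: str, state: Optional[str]) -> None:
    if state is None:
        return  # nothing to execute, so nothing to commit
    conn.execute(
        "UPDATE batch_session SET state = ? WHERE session_id = ?",
        (state, session_id),
    )
    if conn.in_transaction:
        conn.commit()
```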
2023-08-17 13:55:38 +10:00
2cf0d61b3e Merge branch 'main' into feat/batch-graphs 2023-08-17 13:33:17 +10:00
cc3c2756bd feat(backend): rename batch changes variable
`updateSession` -> `changes`
2023-08-17 13:32:32 +10:00
503e3bca54 revise config but need to migrate old format to new 2023-08-16 23:30:00 -04:00
67cf594bb3 feat(backend): add missing types to batch_manager_storage.py 2023-08-17 13:29:19 +10:00
c5b963f1a6 fix(backend): typo
`relavent` -> `relevant`
2023-08-17 12:47:58 +10:00
4d2dd6bb10 feat(backend): rename BatchManager.process to _process
Just to make it clear that this is not a method on the ABC.
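A small sketch of the naming intent (class and method names assumed): the leading underscore marks `_process` as an internal helper rather than part of the abstract interface.

```
# Illustrative only; the real ABC lives in InvokeAI's batch manager module.
from abc import ABC, abstractmethod


class BatchManagerBase(ABC):
    @abstractmethod
    def run_batch_process(self, batch_id: str) -> None: ...


class BatchManager(BatchManagerBase):
    def run_batch_process(self, batch_id: str) -> None:
        self._process(batch_id)

    def _process(self, batch_id: str) -> None:
        # Leading underscore: an implementation detail, not a method on the ABC.
        pass
```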
2023-08-17 12:47:05 +10:00
7e4beab4ff feat(backend): surface BatchSessionNodeFoundException
Catch this exception in the router and return an appropriate `HTTPException`.
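A FastAPI-style sketch of what surfacing the exception could look like; the route path, service stub, and status code are assumptions rather than InvokeAI's actual API:

```
# Illustrative sketch; names and route are assumed.
from fastapi import APIRouter, HTTPException


class BatchSessionNotFoundException(Exception):
    """Stand-in for the exception raised by the batch storage layer."""


class FakeBatchManager:
    """Stand-in for the real batch manager service."""

    def get_session(self, batch_id: str, session_id: str) -> dict:
        raise BatchSessionNotFoundException(session_id)


batch_manager = FakeBatchManager()
router = APIRouter()


@router.get("/batch/{batch_id}/session/{session_id}")
async def get_batch_session(batch_id: str, session_id: str) -> dict:
    try:
        return batch_manager.get_session(batch_id, session_id)
    except BatchSessionNotFoundException:
        # Surface the missing session as a 404 instead of an unhandled 500.
        raise HTTPException(status_code=404, detail="Batch session not found")
```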
2023-08-17 12:45:32 +10:00
e16b5f7cdc feat(backend): deserialize batch session directly
If the values from the `session_dict` are invalid, the model instantiation will fail, or if we end up with an invalid `batch_id`, the app will not run. So I think just parsing the dict directly is equivalent.

Also the LSP analyser is pleased now - no red squigglies.
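A pydantic v1-style sketch of parsing the dict directly (field names assumed); invalid values fail loudly at instantiation either way:

```
# Illustrative model; the real BatchSession has more fields.
from pydantic import BaseModel


class BatchSession(BaseModel):
    batch_id: str
    session_id: str
    state: str


session_dict = {"batch_id": "b-123", "session_id": "s-456", "state": "created"}

# Parse the stored dict straight into the model instead of rebuilding it
# field by field; a bad or missing value raises a ValidationError here.
session = BatchSession.parse_obj(session_dict)
```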
2023-08-17 12:37:03 +10:00
1f355d5810 feat(backend): update batch_manager_storage.py docstrings 2023-08-17 12:31:51 +10:00
df7370f9d9 chore(backend): remove unused code 2023-08-17 12:16:34 +10:00
5bec64d65b fix(backend): fix typings in batch_manager.py
- `batch_indicies` is `tuple[int]` not `list[int]`
- explicit `None` return values
2023-08-17 12:07:20 +10:00
8cf9bd47b2 chore(backend): remove unnecessary batch validation function
The `Batch` model is fully validated by pydantic on instantiation; we do not need any validation logic for it.
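A quick illustration (field names assumed) of the point above: pydantic rejects invalid data at instantiation, so a separate validation helper adds nothing.

```
from pydantic import BaseModel, ValidationError


class Batch(BaseModel):
    batch_id: str
    data: list[dict]


try:
    Batch(batch_id="b-123", data="not-a-list")  # wrong type for `data`
except ValidationError as err:
    print(err)  # pydantic reports the bad field; no extra validation code needed
```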
2023-08-17 11:59:47 +10:00
c91621b46c fix(backend): BatchProcess.batch_id is required
Providing a `default_factory` is enough for pydantic to know to create the attribute on instantiation if it's not already provided. We can then make the typing just `str`.
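A sketch of the typing change (the factory shown is an assumption): with a `default_factory`, the field can be annotated plainly as `str` and pydantic fills it in whenever the caller omits it.

```
from uuid import uuid4

from pydantic import BaseModel, Field


class BatchProcess(BaseModel):
    # Always present after instantiation, so the annotation can be plain `str`.
    batch_id: str = Field(default_factory=lambda: uuid4().hex)


print(BatchProcess().batch_id)                   # generated by the factory
print(BatchProcess(batch_id="my-id").batch_id)   # caller-supplied value kept
```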
2023-08-17 11:58:29 +10:00
f246b236dd fix(api): fix start_batch route responses 2023-08-17 11:51:14 +10:00
daf75a1361 blackify 2023-08-16 21:47:29 -04:00
fe4b2d53ed Merge branch 'feat/collect-more-stats' of github.com:invoke-ai/InvokeAI into feat/collect-more-stats 2023-08-16 21:39:29 -04:00
c39f8b478b fix misplaced ram_used and ram_changed attributes 2023-08-16 21:39:18 -04:00
1f82d8013e Merge branch 'main' into feat/collect-more-stats 2023-08-16 18:51:17 -04:00
e373bfca54 fix several broken links in the installation index 2023-08-16 17:54:39 -04:00
2ca8611723 add +/- sign in front of RAM delta 2023-08-16 15:53:01 -04:00
f7277a8b21 Run python black 2023-08-16 15:44:52 -04:00
796ee1246b Add a batch validation test 2023-08-16 15:42:45 -04:00
29fceb960d Fix batch_manager test 2023-08-16 15:33:15 -04:00
796ff34c8a Testing out Spencer's batch data structure 2023-08-16 15:21:11 -04:00
d6a5c2dbe3 Fix tests 2023-08-16 14:35:49 -04:00
ef8dc2e8c5 Merge branch 'main' into feat/batch-graphs 2023-08-16 14:03:34 -04:00
5aa7bfebd4 Fix masked generation with inpaint models 2023-08-16 20:28:33 +03:00
b12cf315a8 Merge branch 'main' into feat/collect-more-stats 2023-08-16 09:19:33 -04:00
975586bb40 Merge branch 'main' into seam-painting 2023-08-17 01:05:42 +12:00
a7ba142ad9 feat(ui): set min zoom on nodes to 0.1 2023-08-16 23:04:36 +10:00
0d36bab6cc fix(ui): do not rerender top panel buttons 2023-08-16 23:04:36 +10:00
c2e7f62701 fix(ui): do not rerender edges 2023-08-16 23:04:36 +10:00
1f194e3688 chore(ui): lint 2023-08-16 23:04:36 +10:00
f9b8b5cff2 fix(ui): improve node rendering performance
Previously the editor was prop-drilling node data and templates to get values deep into nodes. This ended up causing very noticeable performance degradation. For example, any text entry fields were super laggy.

Refactor the whole thing to use memoized selectors via hooks. The hooks are mostly very narrow, returning only the data needed.

Data objects are never passed down, only node id and field name - sometimes the field kind ('input' or 'output').

The end result is a *much* smoother node editor with very minimal rerenders.
2023-08-16 23:04:36 +10:00
f7c92e1eff fix(ui): disable awkward resize animation for <Flow /> 2023-08-16 23:04:36 +10:00
70b8c3dfea fix(ui): fix context menu on workflow editor
There is a tricky mouse event interaction between chakra's `useOutsideClick()` hook (used by chakra `<Menu />`) and reactflow. The hook doesn't work when you click the main reactflow area.

To get around this, I've used a dirty hack, copy-pasting the simple context menu component we use, and extending it slightly to respond to a global `contextMenusClosed` redux action.
2023-08-16 23:04:36 +10:00
43b30355e4 feat: make primitive node titles consistent 2023-08-16 23:04:36 +10:00
a93bd01353 fix bad merge 2023-08-16 08:53:07 -04:00
bb1b8ceaa8 Update invokeai/backend/model_management/model_cache.py
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
2023-08-16 08:48:44 -04:00
be8edaf3fd Merge branch 'main' into feat/collect-more-stats 2023-08-16 08:48:14 -04:00
9cbaefaa81 feat: Add Seam Painting to SDXL 2023-08-16 19:46:48 +12:00
cc7c6e5d41 feat: Add Seam Painting with Scale Before 2023-08-16 19:35:03 +12:00
f2ee8a3da8 wip: Basic Seam Painting (only normal models) (no scale) 2023-08-16 17:26:23 +12:00