Commit Graph

6846 Commits

Author SHA1 Message Date
psychedelicious
b8b589c150 fix(nodes): fix hsl nodes rebase conflict 2023-08-06 09:57:49 +10:00
Kent Keirsey
d93900a8de Added HSL Nodes 2023-08-06 09:57:49 +10:00
Kevin Turner
7f4c387080 test(model_management): factor out name strings 2023-08-05 15:46:46 -07:00
Kevin Turner
80876bbbd1 Merge remote-tracking branch 'origin/refactor/model_manager_instantiate' into refactor/model_manager_instantiate 2023-08-05 15:25:05 -07:00
Kevin Turner
7a4ff4c089 Merge branch 'main' into refactor/model_manager_instantiate 2023-08-05 15:23:38 -07:00
Kevin Turner
44bf308192 test(model_management): add a couple tests for _get_model_path 2023-08-05 15:22:23 -07:00
Lincoln Stein
12e51c84ae blackified 2023-08-05 14:26:16 -07:00
Lincoln Stein
b2eb83deff add docs 2023-08-05 14:26:16 -07:00
Lincoln Stein
0ccc3b509e add techjedi's import script, with some filecompletion tweaks 2023-08-05 14:26:16 -07:00
Lincoln Stein
4043a4c21c blackified 2023-08-05 12:44:58 -04:00
Lincoln Stein
c8ceb96091 add docs 2023-08-05 12:26:52 -04:00
Lincoln Stein
83f75750a9 add techjedi's import script, with some filecompletion tweaks 2023-08-05 12:19:24 -04:00
Jonathan
dc96a3e79d Fix random number generator
Passing in seed=0 is not equivalent to seed=None. The latter will get a new seed from entropy in the OS, and that's what we should be using.
2023-08-06 00:29:08 +10:00
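The distinction matters because 0 is falsy in Python. A minimal sketch of the pitfall, using NumPy's Generator API as a stand-in for the generator being fixed:

```python
import numpy as np

def make_rng(seed=None):
    # seed=0 is a valid, fixed seed: every run repeats the same sequence.
    # seed=None pulls fresh entropy from the OS: every run differs.
    # The bug pattern is treating 0 as "no seed given" (e.g. a `seed or None`
    # style default), which silently changes reproducibility.
    return np.random.default_rng(seed)

make_rng(0)     # reproducible
make_rng(None)  # fresh entropy each run
```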
Lincoln Stein
c076f1397e rebuild frontend 2023-08-05 14:40:42 +10:00
Lincoln Stein
2568aafc0b bump version number so that pip updates work 2023-08-05 14:40:42 +10:00
Kevin Turner
65ed224bfc Merge branch 'main' into refactor/model_manager_instantiate 2023-08-04 21:34:38 -07:00
psychedelicious
b6e369c745 chore: black 2023-08-05 12:28:35 +10:00
gogurtenjoyer
ecabfc252b devices.py - Update MPS FP16 check to account for upcoming macOS Sonoma
float16 doesn't seem to work on macOS Sonoma due to further changes in Metal. This will default back to float32 for Sonoma users.
2023-08-05 12:28:35 +10:00
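A sketch of the kind of version gate this commit describes, assuming `platform.mac_ver()` and a torch device; the helper name and exact threshold are illustrative, not the actual devices.py code:

```python
import platform

import torch

def choose_precision(device: torch.device) -> torch.dtype:
    # Hypothetical gate: Sonoma is macOS 14, where the commit reports
    # float16 misbehaving on MPS, so fall back to float32 there.
    if device.type == "mps":
        major = platform.mac_ver()[0].split(".")[0]
        if major and int(major) >= 14:
            return torch.float32
        return torch.float16
    return torch.float16 if device.type == "cuda" else torch.float32
```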
psychedelicious
da96a41103 Merge branch 'main' into feat/select-vram-in-config 2023-08-05 12:11:50 +10:00
Lincoln Stein
d162b78767 fix broken civitai example link 2023-08-05 12:10:52 +10:00
psychedelicious
eb6c317f04 chore: black 2023-08-05 12:05:24 +10:00
psychedelicious
6d7223238f fix: fix typo in message 2023-08-05 12:05:24 +10:00
Damian Stewart
8607d124c5 improve message about the consequences of the --ignore_missing_core_models flag 2023-08-05 12:05:24 +10:00
Damian Stewart
23497bf759 add --ignore_missing_core_models CLI flag to bypass checking for missing core models 2023-08-05 12:05:24 +10:00
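A minimal sketch of how such a flag might be wired with argparse; the parser and the check it bypasses are illustrative, not InvokeAI's actual CLI code:

```python
import argparse

parser = argparse.ArgumentParser(description="launch the app")
parser.add_argument(
    "--ignore_missing_core_models",
    action="store_true",
    help="skip the startup check for missing core models",
)

args = parser.parse_args(["--ignore_missing_core_models"])
print(args.ignore_missing_core_models)  # True -> the startup check is bypassed
```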
Kevin Turner
b10cf20eb1 Merge branch 'main' into refactor/model_manager_instantiate
# Conflicts:
#	invokeai/backend/model_management/model_manager.py
2023-08-04 18:28:18 -07:00
StAlKeR7779
3d93851dba Installer should download fp16 models if user has specified 'auto' in config (#4129)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

At install time, when the user's config specified "auto" precision, the
installer was downloading the fp32 models even when an fp16 model would
be appropriate for the OS.


## Related Tickets & Documents

- Closes #4127
2023-08-05 01:56:25 +03:00
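A hedged sketch of the fix's logic: resolve "auto" to a concrete precision first, then pick the download variant, instead of unconditionally fetching fp32. The function names are assumptions; only the `variant="fp16"` convention used by diffusers is real:

```python
import torch

def resolve_precision(setting: str, device: torch.device) -> str:
    # "auto" resolves to fp16 where the hardware supports it, fp32 otherwise.
    if setting != "auto":
        return setting
    return "float16" if device.type == "cuda" else "float32"

precision = resolve_precision("auto", torch.device("cuda"))
variant = "fp16" if precision == "float16" else None
# variant can then be passed to diffusers' from_pretrained(..., variant=variant)
# so fp16 weights are fetched when appropriate instead of always fp32.
```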
StAlKeR7779
9bacd77a79 Merge branch 'main' into bugfix/fp16-models 2023-08-05 01:42:43 +03:00
Lincoln Stein
1b158f62c4 resolve vae overrides correctly 2023-08-04 18:24:47 -04:00
Lincoln Stein
6ad565d84c folded in changes from 4099 2023-08-04 18:24:47 -04:00
Sergey Borisov
04229082d6 Provide ti name from model manager, not from ti itself 2023-08-04 18:24:47 -04:00
Lincoln Stein
03c27412f7 [WIP] Add sdxl lora support (#4097)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Add LoRA loading for SDXL.
NOT TESTED - I have only run 2 LoRAs; please check more (including LyCORIS, if those already exist).

## QA Instructions, Screenshots, Recordings
https://civitai.com/models/118536/voxel-xl

![image](https://github.com/invoke-ai/InvokeAI/assets/7768370/76a6abff-cb0a-43b4-b779-a0b0e5b46e56)


## Added/updated tests?

- [ ] Yes
- [x] No
2023-08-04 16:12:22 -04:00
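For context, the patch a LoRA loader applies is a low-rank delta added to each targeted base weight. This is a generic sketch, not InvokeAI's loader; the full/diff layers handled in the commits below carry a direct delta instead of an up/down factorization:

```python
import torch

def apply_lora(base: torch.Tensor, down: torch.Tensor, up: torch.Tensor,
               alpha: float, scale: float = 1.0) -> torch.Tensor:
    # Classic LoRA: W' = W + scale * (alpha / rank) * (up @ down),
    # where rank is the inner dimension of the factorization.
    rank = down.shape[0]
    return base + scale * (alpha / rank) * (up @ down)

# full/diff layers skip the factorization and carry a direct weight (or
# weight difference), so the patch reduces to base + scale * delta.
```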
Sergey Borisov
f0613bb0ef Fix merge conflict resolve - restore full/diff layer support 2023-08-04 19:53:27 +03:00
StAlKeR7779
0e9f92b868 Merge branch 'main' into feat/sdxl_lora 2023-08-04 19:22:13 +03:00
psychedelicious
7d0cc6ec3f chore: black 2023-08-05 02:04:22 +10:00
Sergey Borisov
2f8b928486 Add support for diff/full lora layers 2023-08-05 02:04:22 +10:00
StAlKeR7779
0d3c27f46c Fix typo
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-08-04 11:44:56 -04:00
Sergey Borisov
cff91f06d3 Add lora apply in sdxl l2l node 2023-08-04 11:44:56 -04:00
Lincoln Stein
1d5d187ba1 model probe detects sdxl lora models 2023-08-04 11:44:56 -04:00
Sergey Borisov
1ac14a1e43 add sdxl lora support 2023-08-04 11:44:56 -04:00
Mary Hipp
cfc3a20565 autoAddBoardId should always be defined 2023-08-04 22:19:11 +10:00
Lincoln Stein
05ae4e283c Stop checking for unet/model.onnx when a model_index.json is detected (#4132)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Related Tickets & Documents

- Related Issue #
- Closes #

## Added/updated tests?

- [ ] Yes
- [ ] No

2023-08-03 22:10:37 -04:00
Lincoln Stein
f06fee4581 Merge branch 'main' into remove-onnx-model-check-from-pipeline-download 2023-08-03 22:02:05 -04:00
Lincoln Stein
9091e19de8 Add execution stat reporting after each invocation (#4125)
## What type of PR is this? (check all applicable)

- [X] Feature


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No

## Description

This PR adds execution time and VRAM usage reporting to each graph
invocation. The log output will look like this:

```
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> Graph stats: c7764585-9c68-4d9d-a199-55e8186790f3
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> Node                 Calls  Seconds  VRAM Used
[2023-08-02 18:03:04,507]::[InvokeAI]::INFO --> main_model_loader        1   0.005s     0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> clip_skip                1   0.004s     0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> compel                   2   0.512s     0.26G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> rand_int                 1   0.001s     0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> range_of_size            1   0.001s     0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> iterate                  1   0.001s     0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> metadata_accumulator     1   0.002s     0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> noise                    1   0.002s     0.01G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> t2l                      1   3.541s     1.93G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> l2i                      1   0.679s     0.58G
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> TOTAL GRAPH EXECUTION TIME:  4.749s
[2023-08-02 18:03:04,508]::[InvokeAI]::INFO --> Current VRAM utilization 0.01G
```
On systems without CUDA, the VRAM stats are not printed.

The current implementation keeps track of graph ids separately, so it will
not be confused when several graphs are executing in parallel. It
handles exceptions, and it is integrated into the app framework by
defining an abstract base class and storing an implementation instance
in `InvocationServices`.
2023-08-03 20:05:21 -04:00
Lincoln Stein
0a0b7141af Merge branch 'main' into feat/execution-stats 2023-08-03 19:49:00 -04:00
Lincoln Stein
1deca89fde Merge branch 'main' into feat/select-vram-in-config 2023-08-03 19:27:58 -04:00
Lincoln Stein
446fb4a438 blackify 2023-08-03 19:24:23 -04:00
Lincoln Stein
ab5d938a1d use variant instead of revision 2023-08-03 19:23:52 -04:00
Brandon
9942af756a Merge branch 'main' into remove-onnx-model-check-from-pipeline-download 2023-08-03 10:10:51 -04:00
Lincoln Stein
06742faca7 Merge branch 'feat/execution-stats' of github.com:invoke-ai/InvokeAI into feat/execution-stats 2023-08-03 08:48:05 -04:00
Lincoln Stein
d2bddf7f91 tweak formatting to accommodate longer runtimes 2023-08-03 08:47:56 -04:00