Compare commits


211 Commits

Author SHA1 Message Date
b79740d61d back out torch.no_grad() 2023-07-02 23:03:24 -04:00
8c93c8dda8 add web dist files to enable network pip install 2023-07-02 22:02:53 -04:00
176504a475 add .js, .woff2 and .css files to web/dist/assets 2023-07-02 21:50:29 -04:00
fa8ccd2a94 add no_grad() to compel node invoke() method 2023-07-02 18:20:16 -04:00
6935858ef3 add debugging messages to aid in memory leak tracking 2023-07-02 13:34:53 -04:00
2b67509061 bump version; rebuild frontend 2023-07-02 13:02:31 -04:00
fa1f9939cc adjust invokeai-configure TUI vertical height to show NEXT button on Mac 2023-07-02 09:44:16 -04:00
2d314d2b3d another fix to repo_id loading 2023-07-02 09:18:11 -04:00
b2775d6b4c Merge branch 'lstein/recognize-legacy-sampler-names' into release/invokeai-3-0-alpha 2023-07-01 21:45:39 -04:00
06694d465d add missing k-* legacy sampler names to init file migrate list 2023-07-01 21:45:14 -04:00
3c2ce51f10 Merge branch 'lstein/remove-hardcoded-cuda-device' into release/invokeai-3-0-alpha 2023-07-01 21:15:58 -04:00
0f02915012 remove hardcoded cuda device in model manager init 2023-07-01 21:15:42 -04:00
0016236889 Merge branch 'lstein/fix-imported-model-names' into release/invokeai-3-0-alpha 2023-07-01 21:09:29 -04:00
f4bd5bb986 when migrating models, changes / to _ in model names to avoid breaking model name keys 2023-07-01 21:08:59 -04:00
1cf61feead print GPU device at startup 2023-07-01 20:47:11 -04:00
5de820f2dc fix updater and model installer 2023-07-01 20:13:28 -04:00
f1fb1c9a60 Merge branch 'lstein/fix-update-script' into release/invokeai-3-0-alpha 2023-07-01 20:13:04 -04:00
9724143ab7 rolled back changes to package.json 2023-07-01 20:05:00 -04:00
ecc5b6eec5 change single to double quotes so that pip install works on windows 2023-07-01 19:56:18 -04:00
4ac9be115e rebuild frontend 2023-07-01 14:48:23 -04:00
7d64a5849f merge draft docs 2023-07-01 14:45:00 -04:00
054b5f484a resolve conflicts with main 2023-07-01 14:42:48 -04:00
3458f45a2b Merge branch 'lstein/improve-model-install-stability' into release/invokeai-3-0-alpha 2023-07-01 14:35:35 -04:00
6c80620c25 Merge branch 'main' into release/invokeai-3-0-alpha 2023-07-01 14:34:38 -04:00
f1928d2588 prevent crashes on malformed models 2023-07-01 14:32:58 -04:00
96212bb35f feat(ui): gallery minSize tweak (#3618)
- Set min size for floating gallery panel
- Correct the default pinned width (it cannot be less than the min width and this was sometimes happening during window resize)
2023-07-01 22:37:08 +12:00
f46c50f69a feat(ui): gallery minSize tweak
- Set min size for floating gallery panel
- Correct the default pinned width (it cannot be less than the min width and this was sometimes happening during window resize)
2023-07-01 20:27:52 +10:00
3aa6a7e7df feat(ui): minimum gallery size
Add `useMinimumPanelSize()` hook to provide minimum resizable panel sizes (in pixels).

The library we are using for the gallery panel uses percentages only. To provide a minimum size in pixels, we need to do some math to calculate the percentage of window size that corresponds to the desired min width in pixels.
2023-07-01 18:29:55 +10:00
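
The percentage-only constraint described above means the hook must recompute the minimum whenever the window resizes. The sketch below illustrates just the pixel-to-percentage arithmetic; the real implementation is a React hook in TypeScript, and the function name here is an illustrative assumption.

```python
def min_panel_size_pct(min_width_px: float, window_width_px: float) -> float:
    """Convert a minimum panel width in pixels into the percentage of the
    window width that a percentage-only panel library expects."""
    if window_width_px <= 0:
        raise ValueError("window width must be positive")
    # Clamp to 100% in case the window is narrower than the minimum.
    return min(100.0, (min_width_px / window_width_px) * 100.0)

# e.g. a 320 px minimum inside a 1280 px window -> 25.0 (%)
print(min_panel_size_pct(320, 1280))
```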
d9ac36df1d fix incorrect VAE config file path during conversion of ckpts (#3616)
This fixes a "config file not found" error when loading VAE checkpoints.
2023-07-01 11:26:36 +12:00
c74bb5cdbf Merge branch 'main' into lstein/fix-vae-convert 2023-07-01 11:18:21 +12:00
1347fc2f00 fix incorrect VAE config file path during conversion of ckpts 2023-06-30 19:14:06 -04:00
d0834cfa19 export new ColorModeButton component (#3614)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-06-30 09:07:36 -04:00
2b6c9c93e0 fix(ui): fix canvas crash by rolling back swagger-parser (#3611)
The node polyfills needed to run the `swagger-parser` library (used to
dereference the OpenAPI schema) cause the canvas tab to immediately
crash when the package build is used in another react application.

I'm sure this is fixable but it's not clear what is causing the issue
and troubleshooting is very time consuming.

Selectively rolling back the implementation of `swagger-parser`.
2023-06-30 23:34:06 +12:00
9a123ed662 Merge branch 'main' into fix/ui/fix-canvas-crash 2023-06-30 23:31:42 +12:00
a9bc45b8af feat(ui): tweak light mode colors, buttons pop (#3612)
the light mode button colors were way off, much improved
2023-06-30 23:31:30 +12:00
d6cfbe982f feat(ui): tweak light mode colors, buttons pop 2023-06-30 13:15:58 +10:00
30464f4fe1 fix(ui): fix canvas crash by rolling back swagger-parser
The node polyfills needed to run the `swagger-parser` library (used to dereference the OpenAPI schema) cause the canvas tab to immediately crash when the package build is used in another react application.

I'm sure this is fixable but it's not clear what is causing the issue and troubleshooting is very time consuming.

Selectively rolling back the implementation of `swagger-parser`.
2023-06-30 12:24:28 +10:00
877483093a ui: support dark mode (#3592)
[feat(ui): remove themes, add hand-crafted dark and light modes](032c7e68d0)

Themes are very fun but due to the differences in perceived saturation and lightness across the color spectrum, it's impossible to have multiple themes that look great without hand-crafting *every* shade for *every* theme. We've ended up with 4 OK themes (well, 3, because the light theme was pretty bad).

I've removed the themes and added color mode support. There is now a single dark and light mode, each with their own color palette and the classic grey / purple / yellow invoke colors that @blessedcoolant first designed.

I've re-styled almost everything except the model manager and lightbox, which I keep forgetting to work on.

One new concept is the Chakra `layerStyle`. This lets us define "layers" - think body, first layer, second layer, etc - that can be applied on various components. By defining layers, we can be more consistent about the z-axis and its relationship to color and lightness.
2023-06-30 06:13:43 +12:00
295444c730 cleanup: Minor theme related cleanup 2023-06-30 06:09:14 +12:00
fb015332f2 feat: Add tooltips to color mode switcher 2023-06-30 06:05:08 +12:00
6e917dcbb0 chore: More colors to own files + small color tweaks 2023-06-30 06:04:42 +12:00
032c7e68d0 feat(ui): remove themes, add hand-crafted dark and light modes
Themes are very fun but due to the differences in perceived saturation and lightness across the color spectrum, it's impossible to have multiple themes that look great without hand-crafting *every* shade for *every* theme. We've ended up with 4 OK themes (well, 3, because the light theme was pretty bad).

I've removed the themes and added color mode support. There is now a single dark and light mode, each with their own color palette and the classic grey / purple / yellow invoke colors that @blessedcoolant first designed.

I've re-styled almost everything except the model manager and lightbox, which I keep forgetting to work on.

One new concept is the Chakra `layerStyle`. This lets us define "layers" - think body, first layer, second layer, etc - that can be applied on various components. By defining layers, we can be more consistent about the z-axis and its relationship to color and lightness.
2023-06-30 03:24:36 +10:00
28d78a8fb4 Add image board support to invokeai-node-cli (#3594)
This PR corrects a crash during startup of `invokeai-node-cli` due to
failure to initialize the image board service.
2023-06-29 08:20:07 -04:00
2c5b050d82 add image board support to invokeai-node-cli 2023-06-29 22:12:34 +10:00
723d68e496 add image usage for board images and listener to handle actual deletion 2023-06-29 21:14:53 +10:00
ba67e57a7e (wip) delete images along with board 2023-06-29 21:14:53 +10:00
45935caf1d fix query 2023-06-29 21:14:53 +10:00
73f2092ec5 (api) add option to board delete route and logic to services 2023-06-29 21:14:53 +10:00
8297b7e1ae Fix duplicate model key addition when root directory is a relative path (#3607)
This fixes model directory scanning so that it works properly when the
root is a relative path (e.g. ".").
2023-06-29 18:01:22 +12:00
5be1e71d1b Merge branch 'main' into lstein/fix-model-scan-on-rel-root 2023-06-29 17:54:12 +12:00
e65e635944 Fix Typo in migrate_to_3.py (#3610)
This caused the VAE entry in models.yaml to point to the wrong folder.
2023-06-29 17:53:50 +12:00
30a917f70c Fix Typo in migrate_to_3.py 2023-06-29 14:45:55 +10:00
4308d593c3 fix(ui): improve IDE TS performance by not resolving JSON
The TS Language Server slows down immensely with our translation JSON, which is used to provide kinda-type-safe translation keys. I say "kinda", because you don't get autocomplete - you only get red squigglies when the key is incorrect.

To improve the performance, we can opt out of this process entirely, at the cost of no red squigglies for translation keys. Hopefully we can resolve this in the future.

It's not clear why this became an issue only recently (like the past couple of weeks). We've tried rolling back the app dependencies, VSCode extensions, VSCode itself, and the TS version to before the time when the issue started, but nothing seems to improve the performance.

1. Disable `resolveJsonModule` in `tsconfig.json`
2. Ignore TS in `i18n.ts` when importing the JSON
3. Comment out the custom types in `i18.d.ts` entirely

It's possible that only `3` is needed to fix the issue.

I've tested building the app and running the build - it works fine, and translation works fine.
2023-06-28 23:55:44 -04:00
8f6b3660c5 Set use-credentials on commercial deployment if authToken is set on canvas image calls, comment out the UpdateImageUrls on connect listener 2023-06-29 13:55:03 +10:00
fe5e0b103f update README; change default root directory to invokeai-3 2023-06-28 17:47:04 -04:00
218eb8522f tweak launcher option wording 2023-06-28 17:10:07 -04:00
1e97ba3628 merge with fix needed to run installer 2023-06-28 17:04:44 -04:00
ace4f6d586 fix duplicate model key addition when root directory is a relative path 2023-06-28 17:02:03 -04:00
261ca823c0 bump version number 2023-06-28 17:00:38 -04:00
8a90e51408 Apply lora by model patching (#3583)
Rewrite LoRA to be applied by model patching, which gives us two benefits:
1) During model execution, the result is computed only against the patched model weights, while with hooks we need to compute against the model and each LoRA separately.
2) As the LoRA is now patched into the model weights, there is no need to keep the LoRA in VRAM.

Results:
Speed:
| loras count | hook | patch |
| --- | --- | --- |
| 0 | ~4.92 it/s | ~4.92 it/s |
| 1 | ~3.51 it/s | ~4.89 it/s |
| 2 | ~2.76 it/s | ~4.92 it/s |

VRAM:
| loras count | hook | patch |
| --- | --- | --- |
| 0 | ~3.6 gb | ~3.6 gb |
| 1 | ~4.0 gb | ~3.6 gb |
| 2 | ~4.4 gb | ~3.7 gb |

Based on #3547; waiting on that to merge.
2023-06-28 15:48:57 -04:00
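
To make the hook-vs-patch trade-off concrete, here is a minimal sketch (not InvokeAI's actual code; tensor names, shapes, and the `scale` parameter are illustrative) of both approaches for a single linear layer with LoRA factors `lora_up` and `lora_down`:

```python
import torch

def forward_with_hook(x, weight, lora_down, lora_up, scale=1.0):
    # Hook approach: the base layer and every LoRA are computed separately
    # on each forward pass, so each LoRA adds compute and must stay in VRAM.
    return x @ weight.T + scale * ((x @ lora_down.T) @ lora_up.T)

def patch_weights(weight, lora_down, lora_up, scale=1.0):
    # Patch approach: fold the LoRA delta into the weights once, up front.
    # Execution then runs at base-model speed regardless of LoRA count.
    original = weight.clone()  # keep a copy so the patch can be reverted
    weight += scale * (lora_up @ lora_down)
    return original  # restore later with weight.copy_(original)
```

This matches the tables above: patched execution stays near base-model speed and VRAM regardless of LoRA count, because the extra work happens once at patch time rather than on every step.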
ac46b129bf Merge branch 'main' into feat/lora_model_patch 2023-06-28 22:43:58 +03:00
ff2ae683d1 Update 060_INSTALL_PATCHMATCH.md (#3591)
Installing the 'blas' package is needed on Arch Linux; otherwise patchmatch fails to initialize with a "libblas.so.3 missing" error.
2023-06-28 15:40:45 -04:00
2714138af2 Merge branch 'main' into patch-1 2023-06-28 15:40:22 -04:00
2d85f9a123 Configuration and model installer for new model layout (#3547)
# Restore invokeai-configure and invokeai-model-install

This PR updates invokeai-configure and invokeai-model-install to work
with the new model manager file layout. It addresses a naming issue for
`ModelType.Main` (was `ModelType.Pipeline`) requested by
@blessedcoolant, and adds back the feature that allows users to dump
models into an `autoimport` directory for discovery at startup time.
2023-06-28 15:31:46 -04:00
79fc708580 warn but do not crash when model scan finds random cruft in models directory 2023-06-28 15:26:42 -04:00
72209d0cc3 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-28 14:49:37 -04:00
fffeb6f7f5 nodes: default to CPU noise (#3598)
This provides reproducible results across platforms. We can expose this in the app.
2023-06-28 18:24:47 +12:00
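
A minimal sketch of the idea (names are assumptions, not the actual node code): noise is generated with a seeded CPU generator, which yields identical tensors on every platform, and is only then moved to the execution device.

```python
import torch

def make_noise(seed: int, shape, device: torch.device) -> torch.Tensor:
    # Generate on the CPU for cross-platform reproducibility; CUDA RNG
    # output can differ between devices and driver versions.
    generator = torch.Generator(device="cpu").manual_seed(seed)
    return torch.randn(shape, generator=generator).to(device)
```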
75614bbba3 Merge branch 'main' into feat/nodes/cpu-noise 2023-06-28 18:22:08 +12:00
201b8430e4 Feat/controlnet extras (#3596)
Trying to get a few ControlNet extras in before the 3.0 release:

- SegmentAnything ControlNet preprocessor node
- LeResDepth ControlNet preprocessor node (but commented out till controlnet_aux v0.0.6 is released & required by InvokeAI)
- TileResampler ControlNet preprocessor node (should be equivalent to the Mikubill/sd-webui-controlnet extension tile_resampler)
- fix for Midas ControlNet preprocessor error with images that have an alpha channel

Example usage of SegmentAnything preprocessor node:
![Screenshot from 2023-06-26 16-53-44](https://github.com/invoke-ai/InvokeAI/assets/303100/c6278f9a-5f6b-44bd-98b1-fcaf77251a76)
2023-06-28 17:56:24 +12:00
32883adf6e Merge branch 'main' into feat/controlnet_extras 2023-06-28 17:36:21 +12:00
00c78b1cbc feat(ui): use max prompts for combinatorial, iterations for non-combinatorial (#3600)
2023-06-28 17:35:45 +12:00
1ea3160594 Merge branch 'main' into feat/ui/dynamic-prompts-ux 2023-06-28 17:34:36 +12:00
fc322aa9f7 Update controlnet-aux to 0.0.6 and add LeReS 2023-06-27 23:45:47 -04:00
e12dbef18f fix(nodes): use context for logger in param_easing (#3529) 2023-06-27 23:36:01 -04:00
73f63853ba fix(nodes): use context for logger in param_easing 2023-06-27 23:30:10 -04:00
e8ed0fad6c autoimport from embedding/controlnet/lora folders designated in startup file 2023-06-27 12:30:53 -04:00
1f3e5582f4 feat(ui): add type extraction helpers 2023-06-28 01:17:34 +10:00
642db657c2 feat(ui): use max prompts for combinatorial, iterations for non-combinatorial 2023-06-27 20:29:41 +10:00
246298d1d6 chore(ui): regen types 2023-06-27 13:57:41 +10:00
2e14528e4c feat(nodes): default to CPU noise 2023-06-27 13:57:31 +10:00
f15d28d141 improved wording of v2 selection prompt 2023-06-26 20:30:08 -04:00
862bfa2c36 Merge branch 'main' of github.com:invoke-ai/InvokeAI into feat/controlnet_extras 2023-06-26 16:39:31 -07:00
044fe6bb20 remove dangling debug statement 2023-06-26 17:48:06 -04:00
8c74f49a18 Merge branch 'lstein/installer-for-new-model-layout' of github.com:invoke-ai/InvokeAI into lstein/installer-for-new-model-layout 2023-06-26 16:31:00 -04:00
823e098b7c prompt user for prediction type when autoimporting a v2 model without .yaml file
don't ask the user for the prediction type if a config.yaml is provided
2023-06-26 16:30:34 -04:00
b7e9d09537 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-26 16:22:23 -04:00
3c30368c62 Configure and model install TUI tweaks (#3519)
The installer TUI requires a minimum window width and height to provide a satisfactory user experience. If, after trying and exhausting all means of enlarging the window (on Linux, Mac and Windows), the window is still too small, this PR generates a message telling the user to enlarge the window and pauses until they do so. If the user fails to enlarge the window the program will proceed, and either issue an error message that it can't continue (on Windows), or show a clipped display that the user can remedy by enlarging the window.
2023-06-26 16:08:56 -04:00
ea15d037f9 Merge branch 'main' into lstein/tweak-installer-ui 2023-06-26 15:05:16 -04:00
f67dec7f0c Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-26 15:03:22 -04:00
10d2d85c83 Started to add ControlNet resize_crop and resize_fill options, but commented out, not ready to deploy yet. 2023-06-26 12:03:05 -07:00
4208766e19 Merge branch 'main' into patch-1 2023-06-26 15:00:50 -04:00
bf1f2eb128 Bypass failing tests (#3593)
"Fixes" the test suite generally so it doesn't fail CI, but some tests
needed to be skipped/xfailed due to recent refactor.

- ignore three test suites that broke following the model manager
refactor
- move `InvocationServices` fixture to `conftest.py`
- add `boards` items to the `InvocationServices`  fixture

This PR makes the unit tests work, but end-to-end tests are temporarily
commented out due to `invokeai-configure` being broken in `main` -
pending #3547

Looks like a lot of the tests need to be rewritten as they reference
`TextToImageInvocation` / `ImageToImageInvocation`
2023-06-26 14:41:56 -04:00
16829682c8 Merge branch 'main' into ebr/make-tests-pass 2023-06-26 14:27:31 -04:00
011adfc958 merge with main 2023-06-26 13:53:59 -04:00
befd95eb19 rename root_dir to root_path attributes to emphasize return of a Path 2023-06-26 13:52:25 -04:00
a2ddb3823b fix add_model() logic 2023-06-26 13:33:38 -04:00
cc400c9fa5 (ci) temporarily comment out end-to-end tests 2023-06-26 13:08:43 -04:00
4eb7a5fc60 (ci) clean up pip tests 2023-06-26 13:08:43 -04:00
587203d589 (tests) make fixture reusable; support boards
fixes the test suite generally, but some tests needed to be skipped/xfailed due to the recent refactor

- ignore three test suites that broke following the model manager refactor
- move InvocationServices fixture to conftest.py
- add `boards` InvocationServices to the fixture
2023-06-26 13:08:34 -04:00
e3f136cdda Update 060_INSTALL_PATCHMATCH.md
Installing the 'blas' package is needed on Arch Linux; otherwise patchmatch fails to initialize with a "libblas.so.3 missing" error.
2023-06-26 14:23:10 +02:00
af566adf56 For MediapipeFace ControlNet preprocessor, if input image is RGBA format then convert to RGB (otherwise MediapipeFace image processing throws an error) 2023-06-26 04:29:43 -07:00
873c18bc4b Added TileResampler ControlNet preprocessor node.
Also fixes to SegmentAnything ControlNet preprocessor node.
2023-06-26 04:27:26 -07:00
d905d0e42a feat(ui): only show canvas image fallback on loading error (#3589) 2023-06-26 21:40:10 +12:00
6ccf62a863 feat(ui): only show canvas image fallback on loading error 2023-06-26 19:20:05 +10:00
6390af229d feat(ui): add dynamic prompts to t2i tab
- add param accordion for dynamic prompts
- update graphs
2023-06-26 19:15:54 +10:00
47e651225d query for 'main' model type when populating UI lists
to support renaming of 'pipeline' models to 'main'
2023-06-26 01:39:46 -04:00
9cfac4175f feat(ui): improved node parsing (#3584)
- use `swagger-parser` to dereference openapi schema
- tidy vite plugins
- use mantine select for node add menu
2023-06-26 17:38:23 +12:00
3a19be1606 fix: Add missing IAIMantineSelect disabled styles 2023-06-26 17:37:47 +12:00
b51ab056f2 Merge branch 'main' into feat/ui/update-node-parsing 2023-06-26 17:32:44 +12:00
e206fad22a fix(ui): fix controlnet image size (#3585) 2023-06-26 17:32:07 +12:00
7b97639961 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-26 01:24:30 -04:00
60780e990d fix(ui): fix controlnet image size 2023-06-26 12:03:11 +10:00
8d43cf92f6 feat(ui): update action santizer for schema actions 2023-06-26 12:00:38 +10:00
862bf7546c feat(ui): improved node parsing
- use `swagger-parser` to dereference openapi schema
- tidy vite plugins
- use mantine select for node add menu
2023-06-26 11:53:54 +10:00
91c3a58fb6 Fix lycoris layers init 2023-06-26 04:33:37 +03:00
5cebf67ee4 Apply lora by patching lora instead of hooks 2023-06-26 03:57:33 +03:00
1ba94a92b3 Fixes 2023-06-26 03:54:42 +03:00
23c22ac933 Refactor logic/small fixes 2023-06-26 03:07:54 +03:00
160b5d7992 add support for an autoimport models directory scanned at startup time 2023-06-25 18:50:15 -04:00
10e8389fa4 Commenting out LeReS ControlNet image preprocessor until release of controlnet_aux v0.0.6 (supported on controlnet_aux current main, but not on latest release v0.0.5) 2023-06-25 14:25:14 -07:00
45aa338a98 Changed pyproject.toml to require controlnet_aux >= 0.0.5 (to enable use of SAM ControlNet preprocessor) 2023-06-25 14:22:34 -07:00
414a04774c Added LeReS ControlNet image preprocessor. 2023-06-25 14:19:55 -07:00
c91d1eacba Merge branch 'lstein/installer-for-new-model-layout' of github.com:invoke-ai/InvokeAI into lstein/installer-for-new-model-layout 2023-06-25 16:04:48 -04:00
60b37b7ff4 fix model manager documentation 2023-06-25 16:04:43 -04:00
b872e7a5e0 Simplifying ControlNet SAM preprocessor segmentation color mapping. 2023-06-25 12:54:48 -07:00
de4064bdac Fixed problem with non-reproducible results from the ControlNet SegmentAnything preprocessor. Cause was controlnet_aux randomization of segmentation coloring, which seems to lead to some randomization of resulting images using the ControlNet seg model. Switched to using the deterministic ADE20K color palette instead, which solved the problem. 2023-06-25 12:38:17 -07:00
10c3753d7f Added SAM preprocessor 2023-06-25 11:16:39 -07:00
a3c22b5fe6 Remove upcast_attention and prediction_type from stable diffusion model logic, fix ckpt conversion according to this 2023-06-25 21:06:22 +03:00
922468b836 Add control_mode parameter to ControlNet (#3535)
This PR adds the "control_mode" option to ControlNet implementation. 
Possible control_mode options are: 

- balanced -- this is the default, same as previous implementation
without control_mode
- more_prompt -- pays more attention to the prompt
- more _control -- pays more attention to the ControlNet (in earlier
implementations this was called "guess_mode")
- unbalanced -- pays even more attention to the ControlNet 

balanced, more_prompt, and more_control should be nearly identical to
the equivalent options in the [auto1111 sd-webui-controlnet
extension](https://github.com/Mikubill/sd-webui-controlnet#more-control-modes-previously-called-guess-mode)

The changes to enable balanced, more_prompt, and more_control are
managed deeper in the code by two booleans, "soft_injection" and
"cfg_injection". The three control mode options in sd-webui-controlnet
map to these booleans like:
 
!soft_injection && !cfg_injection ⇒  BALANCED            
 soft_injection &&  cfg_injection ⇒  MORE_CONTROL 
 soft_injection && !cfg_injection ⇒  MORE_PROMPT   
 
The "unbalanced" option simply exposes the fourth possible combination
of these two booleans:
!soft_injection &&  cfg_injection ⇒ UNBALANCED

With "unbalanced" mode it is very easy to overdrive the controlnet
inputs. It's recommended to use a cfg_scale between 2 and 4 to mitigate
this, along with lowering controlnet weight and possibly lowering "end
step percent". With those caveats, "unbalanced" can yield interesting
results.

Example of all four modes using Canny edge detection ControlNet with
prompt "old man", identical params except for control_mode:

![Screenshot from 2023-06-11
23-53-00](https://github.com/invoke-ai/InvokeAI/assets/303100/c9e31e7f-50de-4d85-94f2-b5a4af3d067b)
Top middle:       BALANCED
Top right:          MORE_CONTROL
Bottom middle: MORE_PROMPT
Bottom right :    UNBALANCED

I kind of chose this seed because it shows pretty rough results with
BALANCED (the default), but in my opinion better results with both
MORE_CONTROL and MORE_PROMPT. And you can definitely see how MORE_PROMPT
pays more attention to the prompt, and MORE_CONTROL pays more attention
to the control image. And shows that UNBALANCED with default cfg_scale
etc is unusable.

But here are four examples from same series (same seed etc), all have
control_mode = UNBALANCED but now cfg_scale is set to 3.
![Screenshot from 2023-06-11
23-48-44](https://github.com/invoke-ai/InvokeAI/assets/303100/5a495306-2164-40aa-9cc8-ce737d7671e7)
And param differences are:
Top middle: prompt="old man", control_weight=0.3, end_step_percent=0.5
Top right: prompt="old man", control_weight=0.4, end_step_percent=1.0
Bottom middle: prompt=None, control_weight=0.3, end_step_percent=0.5
Bottom right: prompt=None, control_weight=0.4, end_step_percent=1.0

So with the right settings UNBALANCED seems useful.
2023-06-25 16:09:26 +12:00
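
A minimal sketch of the mode-to-boolean mapping described above (the function and dict are illustrative, not InvokeAI's actual API):

```python
def resolve_control_mode(control_mode: str) -> tuple[bool, bool]:
    """Return (soft_injection, cfg_injection) for a given control_mode."""
    modes = {
        "balanced":     (False, False),
        "more_control": (True, True),
        "more_prompt":  (True, False),
        "unbalanced":   (False, True),  # the fourth combination
    }
    try:
        return modes[control_mode]
    except KeyError:
        raise ValueError(f"unknown control_mode: {control_mode!r}") from None
```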
57e719702d fix(ui): add missing ControlNetInvocation type; tidy schema-derived types 2023-06-25 14:04:53 +10:00
11378a9236 chore(ui): regen api schema 2023-06-25 14:04:16 +10:00
132829c88f fix(ui): fix path of generated schema types 2023-06-25 14:04:00 +10:00
4d4b5b56dc Merge branch 'main' into feat/controlnet-control-modes 2023-06-25 15:48:07 +12:00
a9334128c9 chore(ui): bump all packages (#3579)
Everything seems to be working.

- Due to a change to `reactflow`, I regenerated `yarn.lock`
- New chakra CLI fixes issue I had made a patch for; removed the patch
- Change to fontsource changed how we import that font
- Change to fontawesome means we lost the txt2img tab icon, just chose a similar one
2023-06-25 15:45:39 +12:00
6b276587d8 chore(ui): bump all packages
Everything seems to be working.

- Due to a change to `reactflow`, I regenerated `yarn.lock`
- New chakra CLI fixes issue I had made a patch for; removed the patch
- Change to fontsource changed how we import that font
- Change to fontawesome means we lost the txt2img tab icon, just chose a similar one
2023-06-25 13:44:10 +10:00
c5faffc18b Merge branch 'main' of github.com:invoke-ai/InvokeAI into feat/controlnet-control-modes
Only "real" conflicts were in:
     invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx
     invokeai/frontend/web/src/features/controlNet/store/controlNetSlice.ts
2023-06-24 17:05:57 -07:00
c3c4a71173 implemented Stalker's suggested improvements 2023-06-24 12:37:26 -04:00
d5f742620f Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-24 11:58:06 -04:00
ba1371a88f rename ModelType.Pipeline to ModelType.Main 2023-06-24 11:45:49 -04:00
3ae996ebcb fix(ui): fix metadata viewer too stronk 2023-06-24 18:15:49 +10:00
3d16605762 fix(ui): fix controlnet upload button 2023-06-24 18:15:49 +10:00
b6dec2b826 fix(ui): fix controlnet dnd overlay not showing on dragover 2023-06-24 18:15:49 +10:00
013e2aa2a1 fix(ui): fix control image sizes
they were all weird
2023-06-24 18:15:49 +10:00
8f9fa15fc8 fix(ui): fix image fetching query string 2023-06-24 18:15:49 +10:00
dde497404b fix(ui): fix init image display buttons
- Reset and Upload buttons along top of initial image
- Also had to mess around with the control net & DnD image stuff after changing the styles
- Abstract image upload logic into hook - does not handle native HTML drag and drop upload - only the button click upload
2023-06-24 18:15:49 +10:00
0472b33164 fix(ui): fix duplicate is_intermediate query param when fetching images 2023-06-24 17:57:39 +10:00
a6c615a98c fix(ui): fix canvas staging area
Missed some of the `imageUpdated` stuff
2023-06-24 17:57:39 +10:00
bab3a9504e fix(nodes): fix LatentsToImage not using is_intermediate when creating images
Appears this was removed during a merge conflict resolution.
2023-06-24 17:57:39 +10:00
13f25edb1e fix(ui): fix incorrect boards endpoint matchers being used
Should fix some stale-data issues with the auto-adding of images to selected boards, and deleting images from boards.
2023-06-24 17:57:39 +10:00
8bacee115a fix(ui): fix thunks not using configured api client 2023-06-24 17:57:39 +10:00
3619c86f07 fix(ui): fix deleting image does not refresh board
I had some wonkiness in the thunks
2023-06-24 17:57:39 +10:00
8e724b5abe fix(ui): fix image upload
`openapi-fetch` does not handle non-JSON `body`s: it always stringifies them and sets the `content-type` to `application/json`.

The patch here does two things:
- Do not stringify `body` if it is one of the types that should not be stringified (https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#body)
- Do not add `content-type: application/json` unless it really is stringified JSON.

Upstream issue: https://github.com/drwpow/openapi-typescript/issues/1123

I'm a bit lost on fixing the types and adding tests, so not raising a PR upstream.
2023-06-24 17:57:39 +10:00
e076231398 fix(ui): fix node editor image fields
I had broken this when converting to rtk-query
2023-06-24 17:57:39 +10:00
e386b5dc53 feat(ui): api layer refactor
*migrate from `openapi-typescript-codegen` to `openapi-typescript` and `openapi-fetch`*

`openapi-typescript-codegen` is not very actively maintained - it's been over a year since the last update.
`openapi-typescript` and `openapi-fetch` are part of the actively maintained repo. key differences:

- provides a `fetch` client instead of `axios`, which means we need to be a bit more verbose with typing thunks
- fetch client is created at runtime and has a very nice typescript DX
- generates a single file with all types in it, from which we then extract individual types. i don't like how verbose this is, but i do like how it is more explicit.
- removed npm api generation scripts - now we have a single `typegen` script

overall i have more confidence in this new library.

*use nanostores for api base and token*

very simple reactive store for api base url and token. this was suggested in the `openapi-fetch` docs and i quite like the strategy.

*organise rtk-query api*

split out each endpoint (models, images, boards, boardImages) into their own api extensions. tidy!
2023-06-24 17:57:39 +10:00
8137a99981 simplify 2023-06-24 17:57:39 +10:00
878847defd use BASE and TOKEN from OpenAPI if they are set 2023-06-24 17:57:39 +10:00
539d1f3bde remove redundant prediction_type and attention_upscaling flags 2023-06-23 16:54:52 -04:00
466ec3ab5e add router API support for model manager `heuristic_import()` 2023-06-23 16:35:39 -04:00
54b74427f4 adjust for change in list_models() API 2023-06-23 14:13:37 -04:00
58d1857ab6 merge with main 2023-06-23 13:57:25 -04:00
3043af4620 implement vae passthru 2023-06-23 13:56:30 -04:00
9de54b2266 Fix vae conversion (#3555)
Unsure at which moment it broke, but now I can't convert a VAE (or a model the VAE is part of) without this fix.
Needs further research - maybe it's a breaking change in `transformers`?
2023-06-23 15:55:26 +01:00
afd19ab61a merge 2023-06-23 10:53:48 -04:00
56bd873d7a make relative model paths work in model manager 2023-06-23 10:52:59 -04:00
5aaaaf64a1 Fix ckpt conversion 2023-06-23 17:29:54 +03:00
9140e2c0f2 Merge branch 'main' into fix/vae_conversion 2023-06-23 15:03:59 +03:00
65d0e80e96 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-23 02:18:34 +01:00
83e2b7578b fix(linux): installer script prints maximum python version usable (#3546)
Changes:
* Linux `install.sh` now prints the maximum python version to use in case no installed python version matches

Commits:
fix(linux): installer script prints maximum python version usable
2023-06-23 02:16:01 +01:00
df1907e849 Merge branch 'main' into install-script-python-version-error-prompt-fix 2023-06-23 02:15:36 +01:00
a910403003 correctly migrate models that have relative paths 2023-06-22 21:10:31 -04:00
c7b7e087e4 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-23 01:45:05 +01:00
d65c833b90 migration now integrated into invokeai-configure 2023-06-22 16:44:55 -04:00
33b04f6386 migration script working well 2023-06-22 15:47:12 -04:00
1c31efa57c punctuation fix in user message 2023-06-21 09:37:24 -04:00
b727442f84 better window size behavior under alacritty & terminator 2023-06-21 09:32:58 -04:00
90df316835 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-20 22:50:41 +01:00
8639794c12 Merge branch 'main' into install-script-python-version-error-prompt-fix 2023-06-20 18:24:54 +01:00
2fc19d9afa suppress description in "other models" tab for space reasons 2023-06-20 11:45:37 -04:00
ac6403f877 address some of ebr issues 2023-06-20 11:08:27 -04:00
678bb4fe10 Merge branch 'lstein/installer-for-new-model-layout' of github.com:invoke-ai/InvokeAI into lstein/installer-for-new-model-layout 2023-06-20 09:42:21 -04:00
294b1e83e6 test and fix edge cases 2023-06-20 09:42:10 -04:00
82091b9a66 Fix vae conversion 2023-06-18 23:46:07 +03:00
e1d53b86f3 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-17 16:26:56 -07:00
ddb3f4b02b make configure script work properly on empty rootdir 2023-06-17 19:26:35 -04:00
15f8132e17 add direct-call script for model installer 2023-06-16 22:57:53 -04:00
f28d50070e configure/install basically working; needs edge case testing 2023-06-16 22:54:36 -04:00
f6f66307fc WIP README.md Updates 2023-06-16 17:27:02 -04:00
469dae8c88 fix(linux): installer script prints maximum python version usable 2023-06-16 15:18:23 +02:00
ada7399753 rewrite of widget display - marshalling needs rewrite 2023-06-15 23:32:33 -04:00
4ca325e8e6 chore: Rebuild API 2023-06-15 03:20:49 +12:00
6b8e88ad7f Merge branch 'main' into feat/controlnet-control-modes 2023-06-15 03:18:41 +12:00
6c53abc034 feat: Add ControlMode to Linear UI 2023-06-14 20:01:17 +12:00
eb7047b21d chore: Rebuild WebAPI 2023-06-14 19:26:02 +12:00
43419ac761 Merge branch 'main' into feat/controlnet-control-modes 2023-06-14 19:04:42 +12:00
5cd0e90816 Renamed ControlNet control_mode option "even_more_control" to "unbalanced" 2023-06-13 22:30:17 -07:00
cfd49e3921 Removing vestigial comments. 2023-06-13 21:33:15 -07:00
a8e0490133 Merge branch 'feat/controlnet-control-modes' of https://github.com/invoke-ai/InvokeAI into feat/controlnet-control-modes 2023-06-13 21:21:13 -07:00
de3e6cdb02 Switched over to ControlNet control_mode with 4 options: balanced, more_prompt, more_control, even_more_control. Based on True/False combinations of internal booleans cfg_injection and soft_injection 2023-06-13 21:08:34 -07:00
0ee0c16a3b Update CONTROLNET.md 2023-06-13 16:46:58 -04:00
8495764d45 Moving from ControlNet guess_mode to separate booleans for cfg_injection and soft_injection for testing control modes 2023-06-13 00:41:36 -07:00
8b7fac75ed First pass at ControlNet "guess mode" implementation. 2023-06-13 00:41:36 -07:00
9e0e26f4c4 Moving from ControlNet guess_mode to separate booleans for cfg_injection and soft_injection for testing control modes 2023-06-12 23:57:57 -07:00
fd715026a7 First pass at ControlNet "guess mode" implementation. 2023-06-11 02:00:39 -07:00
27b5e43ea4 add messages to the user to tell them to enlarge window 2023-06-08 16:37:10 -04:00
3c40e7fc1c most (all?) references to CLI deprecated 2023-05-31 21:29:52 -04:00
a0b6654f6a updated postprocessing, prompts, img2img and web docs 2023-05-29 10:55:57 -04:00
00cb8a0c64 Merge branch 'main' into doc_updates_23 2023-05-29 08:13:12 -04:00
10c55310c0 index.md, features and concepts documents updated 2023-05-28 19:51:18 -04:00
cf12c7b1d9 Rename contributing.md to CONTRIBUTING.md 2023-05-24 16:33:25 -04:00
1f4a9365a0 Create contributing.md 2023-05-24 16:33:10 -04:00
bf94a48a6c Update CHANGELOG.md 2023-05-24 16:29:06 -04:00
466 changed files with 15233 additions and 14255 deletions


@@ -1,10 +1,16 @@
name: Test invoke.py pip
# This is a dummy stand-in for the actual tests
# we don't need to run python tests on non-Python changes
# But PRs require passing tests to be mergeable
on:
pull_request:
paths:
- '**'
- '!pyproject.toml'
- '!invokeai/**'
- '!tests/**'
- 'invokeai/frontend/web/**'
merge_group:
workflow_dispatch:
@@ -19,48 +25,26 @@ jobs:
strategy:
matrix:
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
os: macOS-12
github-env: $GITHUB_ENV
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
steps:
- run: 'echo "No build required"'
- name: skip
run: echo "no build required"


@@ -11,6 +11,7 @@ on:
paths:
- 'pyproject.toml'
- 'invokeai/**'
- 'tests/**'
- '!invokeai/frontend/web/**'
types:
- 'ready_for_review'
@@ -32,19 +33,12 @@ jobs:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
@@ -62,14 +56,6 @@ jobs:
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
env:
@@ -100,40 +86,38 @@ jobs:
id: run-pytest
run: pytest
- name: run invokeai-configure
id: run-preload-models
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
run: >
invokeai-configure
--yes
--default_only
--full-precision
# can't use fp16 weights without a GPU
# - name: run invokeai-configure
# env:
# HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
# run: >
# invokeai-configure
# --yes
# --default_only
# --full-precision
# # can't use fp16 weights without a GPU
- name: run invokeai
id: run-invokeai
env:
# Set offline mode to make sure configure preloaded successfully.
HF_HUB_OFFLINE: 1
HF_DATASETS_OFFLINE: 1
TRANSFORMERS_OFFLINE: 1
INVOKEAI_OUTDIR: ${{ github.workspace }}/results
run: >
invokeai
--no-patchmatch
--no-nsfw_checker
--precision=float32
--always_use_cpu
--use_memory_db
--outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
--from_file ${{ env.TEST_PROMPTS }}
# - name: run invokeai
# id: run-invokeai
# env:
# # Set offline mode to make sure configure preloaded successfully.
# HF_HUB_OFFLINE: 1
# HF_DATASETS_OFFLINE: 1
# TRANSFORMERS_OFFLINE: 1
# INVOKEAI_OUTDIR: ${{ github.workspace }}/results
# run: >
# invokeai
# --no-patchmatch
# --no-nsfw_checker
# --precision=float32
# --always_use_cpu
# --use_memory_db
# --outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
# --from_file ${{ env.TEST_PROMPTS }}
- name: Archive results
id: archive-results
env:
INVOKEAI_OUTDIR: ${{ github.workspace }}/results
uses: actions/upload-artifact@v3
with:
name: results
path: ${{ env.INVOKEAI_OUTDIR }}
# - name: Archive results
# env:
# INVOKEAI_OUTDIR: ${{ github.workspace }}/results
# uses: actions/upload-artifact@v3
# with:
# name: results
# path: ${{ env.INVOKEAI_OUTDIR }}

.gitignore

@@ -201,7 +201,9 @@ checkpoints
# If it's a Mac
.DS_Store
invokeai/frontend/web/dist/*
# LS: the frontend dist files need to be in the repository in order to
# do a pip network install
# invokeai/frontend/web/dist/*
# Let the frontend manage its own gitignore
!invokeai/frontend/web/*

README.md

@@ -1,8 +1,11 @@
<div align="center">
![project logo](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/invoke_ai_banner.png)
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/1a917d94-e099-4fa1-a70f-7dd8d0691018)
# Invoke AI - Generative AI for Professional Creatives
## Image Generation for Stable Diffusion, Custom-Trained Models, and more.
Learn more about us and get started instantly at [invoke.ai](https://invoke.ai)
# InvokeAI: A Stable Diffusion Toolkit
[![discord badge]][discord link]
@@ -33,32 +36,32 @@
</div>
_**Note: The UI is not fully functional on `main`. If you need a stable UI based on `main`, use the `pre-nodes` tag while we [migrate to a new backend](https://github.com/invoke-ai/InvokeAI/discussions/3246).**_
_**Note: This is an alpha release. Bugs are expected and not all
features are fully implemented. Please use the GitHub [Issues
pages](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen)
to report unexpected problems. Also note that InvokeAI root directory
which contains models, outputs and configuration files, has changed
between the 2.x and 3.x release. If you wish to use your v2.3 root
directory with v3.0, please follow the directions in [Migrating a 2.3
root directory to 3.0](#migrating-to-3).**_
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI is a leading creative engine built to empower professionals
and enthusiasts alike. Generate and create stunning visual media using
the latest AI-driven technologies. InvokeAI offers an industry leading
Web Interface, interactive Command Line Interface, and also serves as
the foundation for multiple commercial products.
**Quick links**: [[How to Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
_Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
## FOR DEVELOPERS - MIGRATING TO THE 3.0.0 MODELS FORMAT
The models directory and models.yaml have changed. To migrate to the
new layout, please follow this recipe:
1. Run `python scripts/migrate_models_to_3.0.py <path_to_root_directory>`
2. This will create a new models directory named `models-3.0` and a
new config directory named `models.yaml-3.0`, both in the current
working directory. If you prefer to name them something else, pass
the `--dest-directory` and/or `--dest-yaml` arguments.
3. Check that the new models directory and yaml file look ok.
4. Replace the existing directory and file, keeping backup copies just in
case.
**Quick links**: [[How to
Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a
href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a
href="https://invoke-ai.github.io/InvokeAI/">Documentation and
Tutorials</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/">Code and
Downloads</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
[<a
href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion,
Ideas & Q&A</a>]
<div align="center">
@@ -68,22 +71,30 @@ case.
## Table of Contents
1. [Quick Start](#getting-started-with-invokeai)
2. [Installation](#detailed-installation-instructions)
3. [Hardware Requirements](#hardware-requirements)
4. [Features](#features)
5. [Latest Changes](#latest-changes)
6. [Troubleshooting](#troubleshooting)
7. [Contributing](#contributing)
8. [Contributors](#contributors)
9. [Support](#support)
10. [Further Reading](#further-reading)
Table of Contents 📝
## Getting Started with InvokeAI
**Getting Started**
1. 🏁 [Quick Start](#quick-start)
3. 🖥️ [Hardware Requirements](#hardware-requirements)
**More About Invoke**
1. 🌟 [Features](#features)
2. 📣 [Latest Changes](#latest-changes)
3. 🛠️ [Troubleshooting](#troubleshooting)
**Supporting the Project**
1. 🤝 [Contributing](#contributing)
2. 👥 [Contributors](#contributors)
3. 💕 [Support](#support)
## Quick Start
For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
If upgrading from version 2.3, please read [Migrating a 2.3 root
directory to 3.0](#migrating-to-3) first.
### Automatic Installer (suggested for 1st time users)
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
@@ -92,9 +103,8 @@ For full installation and upgrade instructions, please see:
3. Unzip the file.
4. If you are on Windows, double-click on the `install.bat` script. On
macOS, open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. On Linux, run `install.sh`.
4. **Windows:** double-click on the `install.bat` script. **macOS:** Open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. **Linux:** run `install.sh`.
5. You'll be asked to confirm the location of the folder in which
to install InvokeAI and its image generation model files. Pick a
@@ -120,7 +130,7 @@ and go to http://localhost:9090.
10. Type `banana sushi` in the box on the top left and click `Invoke`
### Command-Line Installation (for users familiar with Terminals)
### Command-Line Installation (for developers and users familiar with Terminals)
You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are
not supported.
@@ -196,7 +206,7 @@ not supported.
Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
### Detailed Installation Instructions
## Detailed Installation Instructions
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
@@ -204,6 +214,87 @@ AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
<a name="migrating-to-3"></a>
### Migrating a v2.3 InvokeAI root directory
The InvokeAI root directory is where the InvokeAI startup file,
installed models, and generated images are stored. It is ordinarily
named `invokeai` and located in your home directory. The contents and
layout of this directory has changed between versions 2.3 and 3.0 and
cannot be used directly.
We currently recommend that you use the installer to create a new root
directory named differently from the 2.3 one, e.g. `invokeai-3` and
then use a migration script to copy your 2.3 models into the new
location. However, if you choose, you can upgrade this directory in
place. This section gives both recipes.
#### Creating a new root directory and migrating old models
This is the safer recipe because it leaves your old root directory in
place to fall back on.
1. Follow the instructions above to create and install InvokeAI in a
directory that has a different name from the 2.3 invokeai directory.
In this example, we will use "invokeai-3"
2. When you are prompted to select models to install, select a minimal
set of models, such as stable-diffusion-v1.5 only.
3. After installation is complete launch `invokeai.sh` (Linux/Mac) or
`invokeai.bat` and select option 8 "Open the developers console". This
will take you to the command line.
4. Issue the command `invokeai-migrate3 --from /path/to/v2.3-root --to
/path/to/invokeai-3-root`. Provide the correct `--from` and `--to`
paths for your v2.3 and v3.0 root directories respectively.
This will copy and convert your old models from 2.3 format to 3.0
format and create a new `models` directory in the 3.0 directory. The
old models directory (which contains the models selected at install
time) will be renamed `models.orig` and can be deleted once you have
confirmed that the migration was successful.
#### Migrating in place
For the adventurous, you may do an in-place upgrade from 2.3 to 3.0
without touching the command line. The recipe is as follows:
1. Launch the InvokeAI launcher script in your current v2.3 root directory.
2. Select option [9] "Update InvokeAI" to bring up the updater dialog.
3a. During the alpha release phase, select option [3] and manually
enter the tag name `v3.0.0+a2`.
3b. Once 3.0 is released, select option [1] to upgrade to the latest release.
4. Once the upgrade is finished you will be returned to the launcher
menu. Select option [7] "Re-run the configure script to fix a broken
install or to complete a major upgrade".
This will run the configure script against the v2.3 directory and
update it to the 3.0 format. The following files will be replaced:
- The invokeai.init file, replaced by invokeai.yaml
- The models directory
- The configs/models.yaml model index
The original versions of these files will be saved with the suffix
".orig" appended to the end. Once you have confirmed that the upgrade
worked, you can safely remove these files. Alternatively you can
restore a working v2.3 directory by removing the new files and
restoring the ".orig" files' original names.
#### Migration Caveats
The migration script will migrate your invokeai settings and models,
including textual inversion models, LoRAs and merges that you may have
installed previously. However it does **not** migrate the generated
images stored in your 2.3-format outputs directory. The released
version of 3.0 is expected to have an interface for importing an
entire directory of image files as a batch.
## Hardware Requirements
InvokeAI is supported across Linux, Windows and macOS. Linux
@@ -222,13 +313,9 @@ We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.
### Memory
**Memory** - At least 12 GB Main Memory RAM.
- At least 12 GB Main Memory RAM.
### Disk
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
## Features
@@ -244,7 +331,7 @@ The Unified Canvas is a fully integrated canvas implementation with support for
### *Advanced Prompt Syntax*
InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.
Invoke AI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.
### *Command Line Interface*
@@ -254,16 +341,12 @@ For users utilizing a terminal-based environment, or who want to take advantage
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
- *Popular Sampler Support*
- *Upscaling & Face Restoration Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
### Coming Soon
- *Node-Based Architecture & UI*
- And more...
- *Node-Based Architecture*
- *Node-Based Plug-&-Play UI (Beta)*
- *Boards & Gallery Management*
### Latest Changes
@@ -271,12 +354,12 @@ For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).
## Troubleshooting
### Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
## Contributing
## 🤝 Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
@@ -295,14 +378,12 @@ to become part of our community.
Welcome to InvokeAI!
### Contributors
### 👥 Contributors
This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
their time, hard work and effort.
Thanks to [Weblate](https://weblate.org/) for generously providing translation services to this project.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the Discord.


@@ -4,6 +4,236 @@ title: Changelog
# :octicons-log-16: **Changelog**
## v2.3.5 <small>(22 May 2023)</small>
This release (along with the post1 and post2 follow-on releases) expands support for additional LoRA and LyCORIS models, upgrades diffusers versions, and fixes a few bugs.
### LoRA and LyCORIS Support Improvement
A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
Support for the newer LoKR LyCORIS files has been added.
### Library Updates and Speed/Reproducibility Advancements
The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.4 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.
Here are the new library versions:
| Library | Version |
| --- | --- |
| Torch | 2.0.0 |
| Diffusers | 0.16.1 |
| Xformers | 0.0.19 |
| Compel | 1.1.5 |

### Other Improvements
### Performance Improvements
When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.
### Bug Fixes
The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.
When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running.
## v2.3.4 <small>(7 April 2023)</small>
This feature release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.
### LoRA and LyCORIS Support
LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix .safetensors. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)
To use LoRA/LyCORIS models in InvokeAI:
1. Download the .safetensors files of your choice and place them in `/path/to/invokeai/loras`. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.
2. Add `withLora(lora-file,weight)` to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named `loras/sushi.safetensors` is present:
   - `family sitting at dinner table eating sushi withLora(sushi,0.9)`
   - `family sitting at dinner table eating sushi withLora(sushi, 0.75)`
   - `family sitting at dinner table eating sushi withLora(sushi)`
   Multiple `withLora()` prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.
3. Generate as you usually would! If you find that the image is too "crisp", try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of a model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, and vice versa. This will trigger a non-fatal error message and generation will not proceed.
You can change the location of the loras directory by passing the `--lora_directory` option to `invokeai`.
### New WebUI LoRA and Textual Inversion Buttons
This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt.
Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() or <textual-inversion> prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.
Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.
By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.
### Minor features and fixes
This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.
### Known Bugs in 2.3.4
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.3 <small>(28 March 2023)</small>
This is a bugfix and minor feature release.
### Bugfixes
Since version 2.3.2 the following bugs have been fixed:
- When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
- Textual inversion will select an appropriate batch size based on whether xformers is active, and will default to xformers enabled if the library is detected.
- The batch script log file names have been fixed to be compatible with Windows.
- Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
- Support for loading legacy config files that have no personalization (textual inversion) section.
- An infinite loop when opening the developer's console from within the invoke.sh script has been corrected.
- Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.
### Enhancements
- It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested "Illuminati" model.
- The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI embeddings directory.
- If no `--model` is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
- On Linux systems, the invoke.sh launcher now uses a prettier console-based interface. To take advantage of it, install the dialog package using your package manager (e.g. `sudo apt install dialog`).
When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model following this example:
```
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt # or my-favorite-model.vae.safetensors
```
### Known Bugs in 2.3.3
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.2 <small>(11 March 2023)</small>
This is a bugfix and minor feature release.
### Bugfixes
Since version 2.3.1 the following bugs have been fixed:
- Black images appearing for potential NSFW images when generating with legacy checkpoint models and both `--no-nsfw_checker` and `--ckpt_convert` turned on.
- Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
- The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI.
- When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
- Crashes that occurred during model merging.
- Restored the previous naming of the Stable Diffusion base and 768 models.
- Upgraded to latest versions of diffusers, transformers, safetensors and accelerate libraries upstream. We hope that this will fix the `assertion NDArray > 2**32` issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.
- As part of the upgrade to diffusers, the location of the diffusers-based models has changed from `models/diffusers` to `models/hub`. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your `models/diffusers` directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
- New `invokeai-batch` script (described below).
### Invoke AI Batch
2.3.2 introduces a new command-line-only script called `invokeai-batch` that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:
```
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
```
If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).
To try `invokeai-batch` out, launch the "developer's console" using the invoke launcher script, or activate the invokeai virtual environment manually. From the console, give the command `invokeai-batch --help` in order to learn how the script works and create your first template file for dynamic prompt generation.
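The combinatorial expansion itself is just a cross product over the templated fields. The sketch below reproduces the example list above in plain Python; it illustrates the idea only and is not `invokeai-batch`'s actual template format (consult `invokeai-batch --help` for the real syntax):

```python
#!/usr/bin/env python
# Illustration of combinatorial prompt expansion -- not invokeai-batch's
# real template syntax.
from itertools import product

places = ["in the mountains", "in the desert"]
subjects = ["a shack", "a chalet"]
styles = ["photograph", "watercolor", "oil painting"]

# Emit one prompt per combination of place, subject and style.
for place, subject, style in product(places, subjects, styles):
    print(f"{subject} {place}, {style}")
```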
### Known Bugs in 2.3.2
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise a Trojan alert for the codeformer.pth face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.
## v2.3.1 <small>(22 February 2023)</small>
This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
### Enhanced support for model management
InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
There are three ways of accessing the model management features:
- **From the WebUI**: Click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
- **Using the Model Installer App**: Choose option (5) "download and install models" from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import. Command-line users can start this app using the command `invokeai-model-install`.
- **Using the Command Line Client (CLI)**: The `!install_model` and `!convert_model` commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs `.ckpt` and `.safetensors` files as-is. The second one converts them into the faster diffusers format before installation.
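For example, from the CLI you might install a model as-is, or convert everything found in a local directory (the URL and path below are illustrative placeholders, not real resources):

```bash
invoke> !install_model https://example.com/models/my-model.safetensors
invoke> !convert_model /path/to/my/checkpoints
```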
Internally, InvokeAI is able to probe the contents of a `.ckpt` or `.safetensors` file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.
Please see INSTALLING MODELS for more information on model management.
### An Improved Installer Experience
The installer now launches a console-based UI for setting and changing commonly-used startup options:
After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching `invoke.sh`/`invoke.bat` and entering option (6) "change InvokeAI startup options".
Command-line users can launch the new configure app using invokeai-configure.
This release also comes with an improved updater. To do an update without going through a whole reinstallation, launch `invoke.sh` or `invoke.bat` and choose option (9) "update InvokeAI". This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting the tag or branch of the desired version.
Command-line users can run this interface by typing `invokeai-update`.
### Image Symmetry Options
There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options `--h_symmetry_time_pct` and `--v_symmetry_time_pct` (these can be abbreviated to `--h_sym` and `--v_sym` like all other options).
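For example, the following CLI invocation (the prompt is arbitrary; the value 0.3 means the mirror effect switches on 30% of the way through generation) produces a horizontally symmetric image:

```bash
invoke> "stained glass rose window" -s50 --h_symmetry_time_pct 0.3
```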
### A New Unified Canvas Look
This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout:
Refresh the screen and go to Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
### Model conversion and merging within the WebUI
The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.
### An easier way to contribute translations to the WebUI
We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.
### Numerous internal bugfixes and performance issues

This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0 and using the compel library for prompt parsing. See the Detailed Change Log for a full list of bugs caught and squished.
Summary of InvokeAI command line scripts (all accessible via the launcher menu):

| Command | Description |
| ------- | ----------- |
| `invokeai` | Command line interface |
| `invokeai --web` | Web interface |
| `invokeai-model-install` | Model installer with console forms-based front end |
| `invokeai-ti --gui` | Textual inversion, with a console forms-based front end |
| `invokeai-merge --gui` | Model merging, with a console forms-based front end |
| `invokeai-configure` | Startup configuration; can also be used to reinstall support models |
| `invokeai-update` | InvokeAI software updater |
### Known Bugs in 2.3.1
These are known bugs in the release.
MacOS users generating 768x768 pixel images or greater using diffusers models may experience a hard crash with the assertion `NDArray > 2**32`. This appears to be an issue...
## v2.3.0 <small>(15 January 2023)</small>
**Transition to diffusers**
@ -264,7 +494,7 @@ sections describe what's new for InvokeAI.
[Manual Installation](installation/020_INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps,
sampler, etc) in a `.invokeai` file. See
[Client](features/CLI.md)
[Client](deprecated/CLI.md)
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.
@ -387,7 +617,7 @@ sections describe what's new for InvokeAI.
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for [inpainting](features/INPAINTING.md) and
- Support for [inpainting](deprecated/INPAINTING.md) and
[outpainting](features/OUTPAINTING.md)
- img2img runs on all k\* samplers
- Support for
@ -399,7 +629,7 @@ sections describe what's new for InvokeAI.
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E
infinite canvas), and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows
[larger images to be created without duplicating elements](features/CLI.md#this-is-an-example-of-txt2img),
[larger images to be created without duplicating elements](deprecated/CLI.md#this-is-an-example-of-txt2img),
at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control
variation during image generation (see
@ -408,7 +638,7 @@ sections describe what's new for InvokeAI.
of images and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac
platforms.
- Improved [command-line completion behavior](features/CLI.md) New commands
- Improved [command-line completion behavior](deprecated/CLI.md) New commands
added:
- List command-line history with `!history`
- Search command-line history with `!search`


@ -0,0 +1,54 @@
## Welcome to Invoke AI
We're thrilled to have you here and we're excited for you to contribute.
Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
Here are some guidelines to help you get started:
### Technical Prerequisites
**Front-end**: You'll need a working knowledge of React and TypeScript.
**Back-end**: Depending on the scope of your contribution, you may need to know SQLite, FastAPI, Python, and Socketio. Also, a good majority of the backend logic involved in processing images is built in a modular way using a concept called "Nodes", which are isolated functions that carry out individual, discrete operations. This design allows for easy contributions of novel pipelines and capabilities.
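To make the "Nodes" idea concrete, here is a purely illustrative sketch; the class and method names are hypothetical and do not reflect InvokeAI's actual invocation API:

```python
# Hypothetical illustration of the "node" concept -- NOT InvokeAI's real API.
# A node is an isolated function with declared inputs and outputs that a
# graph executor can wire together with other nodes.
from dataclasses import dataclass
from PIL import Image, ImageFilter

@dataclass
class BlurNode:
    radius: float = 2.0  # a parameter the UI could expose

    def invoke(self, image: Image.Image) -> Image.Image:
        # One discrete operation: return a blurred copy of the input image.
        return image.filter(ImageFilter.GaussianBlur(self.radius))
```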
### How to Submit Contributions
To start contributing, please follow these steps:
1. Familiarize yourself with our roadmap and open projects to see where your skills and interests align. These documents can serve as a source of inspiration.
2. Open a Pull Request (PR) with a clear description of the feature you're adding or the problem you're solving. Make sure your contribution aligns with the project's vision.
3. Adhere to general best practices. This includes assuming interoperability with other nodes, keeping the scope of your functions as small as possible, and organizing your code according to our architecture documents.
### Types of Contributions We're Looking For
We welcome all contributions that improve the project. Right now, we're especially looking for:
1. Quality of life (QOL) enhancements on the front-end.
2. New backend capabilities added through nodes.
3. Incorporating additional optimizations from the broader open-source software community.
### Communication and Decision-making Process
Project maintainers and code owners review PRs to ensure they align with the project's goals. They may provide design or architectural guidance, suggestions on user experience, or provide more significant feedback on the contribution itself. Expect to receive feedback on your submissions, and don't hesitate to ask questions or propose changes.
For more robust discussions, or if you're planning to add capabilities not currently listed on our roadmap, please reach out to us on our Discord server. That way, we can ensure your proposed contribution aligns with the project's direction before you start writing code.
### Code of Conduct and Contribution Expectations
We want everyone in our community to have a positive experience. To facilitate this, we've established a code of conduct and a statement of values that we expect all contributors to adhere to. Please take a moment to review these documents—they're essential to maintaining a respectful and inclusive environment.
By making a contribution to this project, you certify that:
1. The contribution was created in whole or in part by you and you have the right to submit it under the open-source license indicated in this project's GitHub repository; or
2. The contribution is based upon previous work that, to the best of your knowledge, is covered under an appropriate open-source license and you have the right under that license to submit that work with modifications, whether created in whole or in part by you, under the same open-source license (unless you are permitted to submit under a different license); or
3. The contribution was provided directly to you by some other person who certified (1) or (2) and you have not modified it; or
4. You understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information you submit with it, including your sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open-source license(s) involved.
This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project.
This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
---
Remember, your contributions help make this project great. We're excited to see what you'll bring to our community!


@ -205,14 +205,14 @@ Here are the invoke> commands that apply to txt2img:
| `--seamless` | | `False` | Activate seamless tiling for interesting effects |
| `--seamless_axes` | | `x,y` | Specify which axes to use circular convolution on. |
| `--log_tokenization` | `-t` | `False` | Display a color-coded list of the parsed tokens derived from the prompt |
| `--skip_normalization` | `-x` | `False` | Weighted subprompts will not be normalized. See [Weighted Prompts](./OTHER.md#weighted-prompts) |
| `--skip_normalization` | `-x` | `False` | Weighted subprompts will not be normalized. See [Weighted Prompts](../features/OTHER.md#weighted-prompts) |
| `--upscale <int> <float>` | `-U <int> <float>` | `-U 1 0.75` | Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
| `--facetool_strength <float>` | `-G <float> ` | `-G0` | Fix faces (defaults to using the GFPGAN algorithm); argument indicates how hard the algorithm should try (0.0-1.0) |
| `--facetool <name>` | `-ft <name>` | `-ft gfpgan` | Select face restoration algorithm to use: gfpgan, codeformer |
| `--codeformer_fidelity` | `-cf <float>` | `0.75` | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
| `--save_original` | `-save_orig` | `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series of riffs on a starting image. See [Variations](./VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series of riffs on a starting image. See [Variations](../features/VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](../features/VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
| `--h_symmetry_time_pct <float>` | | `None` | Create symmetry along the X axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
| `--v_symmetry_time_pct <float>` | | `None` | Create symmetry along the Y axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
@ -257,7 +257,7 @@ additional options:
by `-M`. You may also supply just a single initial image with the areas
to overpaint made transparent, but you must be careful not to destroy
the pixels underneath when you create the transparent areas. See
[Inpainting](./INPAINTING.md) for details.
[Inpainting](INPAINTING.md) for details.
inpainting accepts all the arguments used for txt2img and img2img, as well as
the --mask (-M) and --text_mask (-tm) arguments:
@ -297,7 +297,7 @@ invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6
You can load and use hundreds of community-contributed Textual
Inversion models just by typing the appropriate trigger phrase. Please
see [Concepts Library](CONCEPTS.md) for more details.
see [Concepts Library](../features/CONCEPTS.md) for more details.
## Other Commands


@ -65,39 +65,21 @@ find out what each concept is for, you can browse the
[Hugging Face concepts library](https://huggingface.co/sd-concepts-library) and
look at examples of what each concept produces.
When you have an idea of a concept you wish to try, go to the command-line
client (CLI) and type a `<` character and the beginning of the Hugging Face
concept name you wish to load. Press ++tab++, and the CLI will show you all
matching concepts. You can also type `<` and hit ++tab++ to get a listing of all
~800 concepts, but be prepared to scroll up to see them all! If there is more
than one match you can continue to type and ++tab++ until the concept is
completed.
To load concepts, you will need to open the Web UI's configuration
dialogue and activate "Show Textual Inversions from HF Concepts
Library". This will then add a list of HF Concepts to the dropdown
"Add Textual Inversion" menu. Select the concept(s) of your choice and
they will be incorporated into the positive prompt. A few concepts are
designed for the negative prompt, in which case you can add them to
the negative prompt box by selecting the down arrow icon next to the
textual inversion menu.
!!! example
if you type in `<x` and hit ++tab++, you'll be prompted with the completions:
```py
<xatu2> <xatu> <xbh> <xi> <xidiversity> <xioboma> <xuna> <xyz>
```
Now type `id` and press ++tab++. It will be autocompleted to `<xidiversity>`
because this is a unique match.
Finish your prompt and generate as usual. You may include multiple concept terms
in the prompt.
If you have never used this concept before, you will see a message that the TI
model is being downloaded and installed. After this, the concept will be saved
locally (in the `models/sd-concepts-library` directory) for future use.
Several steps happen during downloading and installation, including a scan of
the file for malicious code. Should any errors occur, you will be warned and the
concept will fail to load. Generation will then continue treating the trigger
term as a normal string of characters (e.g. as literal `<ghibli-face>`).
You can also use `<concept-names>` in the WebGUI's prompt textbox. There is no
autocompletion at this time.
There are nearly 1000 HF concepts, more than will fit into a menu. For
this reason we only show the most popular concepts (those which have
received 5 or more likes). If you wish to use a concept that is not on
the list, you may simply type its name surrounded by brackets. For
example, to load the concept named "xidiversity", add `<xidiversity>`
to the positive or negative prompt text.
## Installing your Own TI Files
@ -112,18 +94,11 @@ At startup time, InvokeAI will scan the `embeddings` directory and load any TI
files it finds there. At startup you will see a message similar to this one:
```bash
>> Current embedding manager terms: *, <HOI4-Leader>, <princess-knight>
>> Current embedding manager terms: <HOI4-Leader>, <princess-knight>
```
Note the `*` trigger term. This is a placeholder term that many early TI
tutorials taught people to use rather than a more descriptive term.
Unfortunately, if you have multiple TI files that all use this term, only the
first one loaded will be triggered by use of the term.
To avoid this problem, you can use the `merge_embeddings.py` script to merge two
or more TI files together. If it encounters a collision of terms, the script
will prompt you to select new terms that do not collide. See
[Textual Inversion](TEXTUAL_INVERSION.md) for details.
The terms you can use will appear in the "Add Textual Inversion"
dropdown menu above the HF Concepts.
## Further Reading


@ -0,0 +1,92 @@
---
title: ControlNet
---
# :material-loupe: ControlNet
## ControlNet
ControlNet is a powerful set of features developed by the open-source community (notably, Stanford researcher [**@lllyasviel**](https://github.com/lllyasviel)) that allows you to apply a secondary neural network model to your image generation process in Invoke.
With ControlNet, you can get more control over the output of your image generation, providing you with a way to direct the network towards generating images that better fit your desired style or outcome.
### How it works
ControlNet works by analyzing an input image, pre-processing that image to identify relevant information that can be interpreted by each specific ControlNet model, and then inserting that control information into the generation process. This can be used to adjust the style, composition, or other aspects of the image to better achieve a specific result.
### Models
As part of the model installation, ControlNet models can be selected, including a variety of pre-trained models that have been added to achieve different effects or styles in your generated images. Additional ControlNet models may require supporting code to be incorporated into Invoke's Invocations folder. You should expect to follow any installation instructions for ControlNet models loaded outside the default models provided by Invoke. The default models include:
**Canny**:
When the Canny model is used in ControlNet, Invoke will attempt to generate images that match the edges detected.
Canny edge detection works by detecting the edges in an image by looking for abrupt changes in intensity. It is known for its ability to detect edges accurately while reducing noise and false edges, and the preprocessor can identify more information by decreasing the thresholds.
**M-LSD**:
M-LSD is another edge detection algorithm used in ControlNet. It stands for Multi-Scale Line Segment Detector.
It detects straight line segments in an image by analyzing the local structure of the image at multiple scales. It can be useful for architectural imagery, or anything where straight-line structural information is needed for the resulting output.
**Lineart**:
The Lineart model in ControlNet generates line drawings from an input image. The resulting pre-processed image is a simplified version of the original, with only the outlines of objects visible. The Lineart model in ControlNet is known for its ability to accurately capture the contours of the objects in an input sketch.
**Lineart Anime**:
A variant of the Lineart model that generates line drawings with a distinct style inspired by anime and manga art styles.
**Depth**:
A model that generates depth maps of images, allowing you to create more realistic 3D models or to simulate depth effects in post-processing.
**Normal Map (BAE):**
A model that generates normal maps from input images, allowing for more realistic lighting effects in 3D rendering.
**Image Segmentation**:
A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)
**Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
**Mediapipe Face**:
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.
**Tile (experimental)**:
The Tile model fills out details in the image to match the image, rather than the prompt. The Tile Model is a versatile tool that offers a range of functionalities. Its primary capabilities can be boiled down to two main behaviors:
- It can reinterpret specific details within an image and create fresh, new elements.
- It has the ability to disregard global instructions if there's a discrepancy between them and the local context or specific parts of the image. In such cases, it uses the local context to guide the process.
The Tile Model can be a powerful tool in your arsenal for enhancing image quality and details. If there are undesirable elements in your images, such as blurriness caused by resizing, this model can effectively eliminate these issues, resulting in cleaner, crisper images. Moreover, it can generate and add refined details to your images, improving their overall quality and appeal.
**Pix2Pix (experimental)**
With Pix2Pix, you can input an image into the controlnet, and then "instruct" the model to change it using your prompt. For example, you can say "Make it winter" to add more wintry elements to a scene.
**Inpaint**: Coming Soon - Currently this model is available but not functional on the Canvas. An upcoming release will provide additional capabilities for using this model when inpainting.
Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
## Using ControlNet
To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
Each ControlNet has two settings that are applied to the ControlNet:

- **Weight** - Strength of the ControlNet model applied to the generation over the section defined by Start/End.
- **Start/End** - 0 represents the start of the generation, 1 represents the end. The Start/End setting controls which steps during the generation process have the ControlNet applied.

Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before it is used when you Invoke.
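For readers who want to see the same idea in code, here is a rough sketch of ControlNet conditioning using the diffusers library. This is not Invoke's internal implementation; the model IDs are examples and the edge map is a blank placeholder:

```python
# Sketch of ControlNet conditioning with the diffusers library -- not
# InvokeAI's internal code.
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# A pre-processed control image; a real workflow would run Canny edge
# detection on an input photo instead of using a blank placeholder.
canny_image = Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8))

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
)
result = pipe(
    "a futuristic city at dusk",
    image=canny_image,
    controlnet_conditioning_scale=0.8,  # analogous to the Weight setting
).images[0]
```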


@ -4,86 +4,13 @@ title: Image-to-Image
# :material-image-multiple: Image-to-Image
Both the Web and command-line interfaces provide an "img2img" feature
that lets you seed your creations with an initial drawing or
photo. This is a really cool feature that tells stable diffusion to
build the prompt on top of the image you provide, preserving the
original's basic shape and layout.
InvokeAI provides an "img2img" feature that lets you seed your
creations with an initial drawing or photo. This is a really cool
feature that tells stable diffusion to build the prompt on top of the
image you provide, preserving the original's basic shape and layout.
See the [WebUI Guide](WEB.md) for a walkthrough of the img2img feature
in the InvokeAI web server. This document describes how to use img2img
in the command-line tool.
## Basic Usage
Launch the command-line client by launching `invoke.sh`/`invoke.bat`
and choosing option (1). Alternatively, activate the InvokeAI
environment and issue the command `invokeai`.
Once the `invoke> ` prompt appears, you can start an img2img render by
pointing to a seed file with the `-I` option as shown here:
!!! example ""
```commandline
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
```
<figure markdown>
| original image | generated image |
| :------------: | :-------------: |
| ![original-image](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png){ width=320 } | ![generated-image](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png){ width=320 } |
</figure>
The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
(`-f`) controls how much the original will be modified, ranging from `0.0` (keep
the original intact), to `1.0` (ignore the original completely). The default is
`0.75`, and ranges from `0.25-0.90` give interesting results. Other relevant
options include `-C` (classifier-free guidance scale), and `-s` (steps).
Unlike `txt2img`, adding steps will continuously change the resulting image and
it will not converge.
You may also pass a `-v<variation_amount>` option to generate `-n<iterations>`
count variants on the original image. This is done by passing the first
generated image back into img2img the requested number of times. It generates
interesting variants.
Note that the prompt makes a big difference. For example, this slight variation
on the prompt produces a very different image:
<figure markdown>
![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png){ width=320 }
<caption markdown>photograph of a tree on a hill with a river</caption>
</figure>
!!! tip
When designing prompts, think about how the images scraped from the internet were
captioned. Very few photographs will be labeled "photograph" or "photorealistic."
They will, however, be captioned with the publication, photographer, camera model,
or film settings.
If the initial image contains transparent regions, then Stable Diffusion will
only draw within the transparent regions, a process called
[`inpainting`](./INPAINTING.md#creating-transparent-regions-for-inpainting).
However, for this to work correctly, the color information underneath the
transparent regions needs to be preserved, not erased.
!!! warning "**IMPORTANT ISSUE** "
`img2img` does not work properly on initial images smaller
than 512x512. Please scale your image to at least 512x512 before using it.
Larger images are not a problem, but may run out of VRAM on your GPU card. To
fix this, use the --fit option, which downscales the initial image to fit within
the box specified by width x height:
```
tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
```
## How does it actually work, though?
For a walkthrough of using Image-to-Image in the Web UI, see [InvokeAI
Web Server](./WEB.md#image-to-image).
The main difference between `img2img` and `prompt2img` is the starting point.
While `prompt2img` always starts with pure gaussian noise and progressively
@ -99,10 +26,6 @@ seed `1592514025` develops something like this:
!!! example ""
```bash
invoke> "fire" -s10 -W384 -H384 -S1592514025
```
<figure markdown>
![latent steps](../assets/img2img/000019.steps.png){ width=720 }
</figure>
@ -157,17 +80,8 @@ Diffusion has less chance to refine itself, so the result ends up inheriting all
the problems of my bad drawing.
If you want to try this out yourself, all of these are using a seed of
`1592514025` with a width/height of `384`, step count `10`, the default sampler
(`k_lms`), and the single-word prompt `"fire"`:
```bash
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
The code for rendering intermediates is on my (damian0815's) branch
[document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) -
run `invoke.py` and check your `outputs/img-samples/intermediates` folder while
generating an image.
`1592514025` with a width/height of `384`, step count `10`, the
`k_lms` sampler, and the single-word prompt `"fire"`.
### Compensating for the reduced step count
@ -180,10 +94,6 @@ give each generation 20 steps.
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD
does `20` steps from my image):
```bash
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```
<figure markdown>
![000035.1592514025](../assets/img2img/000035.1592514025.png)
</figure>
@ -191,10 +101,6 @@ invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
and here is strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to
make sure SD does `20` steps from my image):
```commandline
invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
```
<figure markdown>
![000046.1592514025](../assets/img2img/000046.1592514025.png)
</figure>


@ -71,6 +71,3 @@ under the selected name and register it with InvokeAI.
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".
## Caveats
This is a new script and may contain bugs.


@ -31,10 +31,22 @@ turned on and off on the command line using `--nsfw_checker` and
At installation time, InvokeAI will ask whether the checker should be
activated by default (neither argument given on the command line). The
response is stored in the InvokeAI initialization file (usually
`invokeai.init` in your home directory). You can change the default at any
time by opening this file in a text editor and commenting or
uncommenting the line `--nsfw_checker`.
response is stored in the InvokeAI initialization file
(`invokeai.yaml` in the InvokeAI root directory). You can change the
default at any time by opening this file in a text editor and
changing the line `nsfw_checker:` from true to false or vice-versa:
```yaml
...
Features:
    esrgan: true
    internet_available: true
    log_tokenization: false
    nsfw_checker: true
    patchmatch: true
    restore: true
```
## Caveats
@ -79,11 +91,3 @@ generates. However, it does write metadata into the PNG data area,
including the prompt used to generate the image and relevant parameter
settings. These fields can be examined using the `sd-metadata.py`
script that comes with the InvokeAI package.
Note that several other Stable Diffusion distributions offer
wavelet-based "invisible" watermarking. We have experimented with the
library used to generate these watermarks and have reached the
conclusion that while the watermarking library may be adding
watermarks to PNG images, the currently available version is unable to
retrieve them successfully. If and when a functioning version of the
library becomes available, we will offer this feature as well.


@ -18,43 +18,16 @@ Output Example:
## **Seamless Tiling**
The seamless tiling mode causes generated images to seamlessly tile with itself. To use it, add the
`--seamless` option when starting the script which will result in all generated images to tile, or
for each `invoke>` prompt as shown here:
The seamless tiling mode causes generated images to seamlessly tile
with themselves, creating repetitive wallpaper-like patterns. To use it,
activate the Seamless Tiling option in the Web GUI and then select
whether to tile on the X (horizontal) and/or Y (vertical) axes. Tiling
will then be active for the next set of generations.
A nice prompt to test seamless tiling with is:
```bash
invoke> "pond garden with lotus by claude monet" --seamless -s100 -n4
```
By default this will tile on both the X and Y axes. However, you can also specify specific axes to tile on with `--seamless_axes`.
Possible values are `x`, `y`, and `x,y`:
```bash
invoke> "pond garden with lotus by claude monet" --seamless --seamless_axes=x -s100 -n4
```
---
## **Shortcuts: Reusing Seeds**
Since it is so common to reuse seeds while refining a prompt, there is now a shortcut as of version
1.11. Provide a `-S` (or `--seed`) switch of `-1` to use the seed of the most recent image
generated. If you produced multiple images with the `-n` switch, then you can go back further
using `-2`, `-3`, etc. up to the first image generated by the previous command. Sorry, but you can't go
back further than one command.
Here's an example of using this to do a quick refinement. It also illustrates using the new `-G`
switch to turn on upscaling and face enhancement (see previous section):
```bash
invoke> a cute child playing hopscotch -G0.5
[...]
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304
# I wonder what it will look like if I bump up the steps and set facial enhancement to full strength?
invoke> a cute child playing hopscotch -G1.0 -s100 -S -1
reusing previous seed 3498014304
[...]
outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.0 -s100 -W512 -H512 -C7.5 -mk_lms -S3498014304
```
---
@ -73,66 +46,27 @@ This will tell the sampler to invest 25% of its effort on the tabby cat aspect o
on the white duck aspect (surprisingly, this example actually works). The prompt weights can use any
combination of integers and floating point numbers, and they do not need to add up to 1.
---
## **Filename Format**
The argument `--fnformat` allows you to specify the filename of the
image. Supported wildcards are all arguments that can be set, such as
`perlin`, `seed`, `threshold`, `height`, `width`, `gfpgan_strength`,
`sampler_name`, `steps`, `model`, `upscale`, `prompt`, `cfg_scale`,
`prefix`.
The following prompt
```bash
dream> a red car --steps 25 -C 9.8 --perlin 0.1 --fnformat {prompt}_steps.{steps}_cfg.{cfg_scale}_perlin.{perlin}.png
```
generates a file with the name: `outputs/img-samples/a red car_steps.25_cfg.9.8_perlin.0.1.png`
---
## **Thresholding and Perlin Noise Initialization Options**
Two new options are the thresholding (`--threshold`) and the perlin noise initialization (`--perlin`) options. Thresholding limits the range of the latent values during optimization, which helps combat oversaturation with higher CFG scale values. Perlin noise initialization starts with a percentage (a value ranging from 0 to 1) of perlin noise mixed into the initial noise. Both features allow for more variations and options in the course of generating images.
Under the Noise section of the Web UI, you will find two options named
Perlin Noise and Noise Threshold. [Perlin
noise](https://en.wikipedia.org/wiki/Perlin_noise) is a type of
structured noise used to simulate terrain and other natural
textures. The slider controls the percentage of perlin noise that will
be mixed into the image at the beginning of generation. Adding a little
perlin noise to a generation will alter the image substantially.
The noise threshold limits the range of the latent values during
sampling and helps combat the oversharpening seen with higher CFG
scale values.
For better intuition into what these options do in practice:
![here is a graphic demonstrating them both](../assets/truncation_comparison.jpg)
In generating this graphic, perlin noise at initialization was programmatically varied going across on the diagram by values 0.0, 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0; and the threshold was varied going down from
0, 1, 2, 3, 4, 5, 10, 20, 100. The other options are fixed, so the initial prompt is as follows (no thresholding or perlin noise):
```bash
invoke> "a portrait of a beautiful young lady" -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 0 --perlin 0
```
Here's an example of another prompt used when setting the threshold to 5 and perlin noise to 0.2:
```bash
invoke> "a portrait of a beautiful young lady" -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 5 --perlin 0.2
```
!!! note
currently the thresholding feature is only implemented for the k-diffusion style samplers, and empirically appears to work best with `k_euler_a` and `k_dpm_2_a`. Using 0 disables thresholding. Using 0 for perlin noise disables using perlin noise for initialization. Finally, using 1 for perlin noise uses only perlin noise for initialization.
---
## **Simplified API**
For programmers who wish to incorporate stable-diffusion into other products, this repository
includes a simplified API for text to image generation, which lets you create images from a prompt
in just three lines of code:
```python
from ldm.generate import Generate
g = Generate()
outputs = g.txt2img("a unicorn in manhattan")
```
`outputs` is a list of `[filename, seed]` pairs in the format `[[filename1,seed1],[filename2,seed2],...]`.
Please see the documentation in ldm/generate.py for more information.
---
In generating this graphic, perlin noise at initialization was
programmatically varied going across on the diagram by values 0.0,
0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0; and the threshold was varied
going down from 0, 1, 2, 3, 4, 5, 10, 20, 100. The other options are
fixed using the prompt "a portrait of a beautiful young lady", a CFG of
20, 100 steps, and a seed of 1950357039.


@ -8,12 +8,6 @@ title: Postprocessing
This extension provides the ability to restore faces and upscale images.
Face restoration and upscaling can be applied at the time you generate the
images, or at any later time against a previously-generated PNG file, using the
[!fix](#fixing-previously-generated-images) command.
[Outpainting and outcropping](OUTPAINTING.md) can only be applied after the
fact.
## Face Fixing
The default face restoration module is GFPGAN. The default upscale is
@ -23,8 +17,7 @@ Real-ESRGAN. For an alternative face restoration module, see
As of version 1.14, environment.yaml will install the Real-ESRGAN package into
the standard install location for python packages, and will put GFPGAN into a
subdirectory of "src" in the InvokeAI directory. Upscaling with Real-ESRGAN
should "just work" without further intervention. Simply pass the `--upscale`
(`-U`) option on the `invoke>` command line, or indicate the desired scale on
should "just work" without further intervention. Simply indicate the desired scale on
the popup in the Web GUI.
**GFPGAN** requires a series of downloadable model files to work. These are
@ -41,48 +34,75 @@ reconstruction.
### Upscaling
`-U : <upscaling_factor> <upscaling_strength>`
Open the upscaling dialog by clicking on the "expand" icon located
above the image display area in the Web UI:
The upscaling prompt argument takes two values. The first value is a scaling
factor and should be set to either `2` or `4` only. This will either scale the
image 2x or 4x respectively using different models.
<figure markdown>
![upscale1](../assets/features/upscale-dialog.png)
</figure>
You can set the scaling strength between `0` and `1.0` to control the intensity of
the scaling. This is handy because AI upscalers generally tend to smooth
out texture details. If you wish to retain some of those for natural looking
results, we recommend using values between `0.5` and `0.8`.
There are three different upscaling parameters that you can
adjust. The first is the scale itself, either 2x or 4x.
If you do not explicitly specify an upscaling_strength, it will default to 0.75.
The second is the "Denoising Strength." Higher values will smooth out
the image and remove digital chatter, but may lose fine detail at
higher values.
Third, "Upscale Strength" allows you to adjust how the You can set the
scaling stength between `0` and `1.0` to control the intensity of the
scaling. AI upscalers generally tend to smooth out texture details. If
you wish to retain some of those for natural looking results, we
recommend using values between `0.5 to 0.8`.
[This figure](../assets/features/upscaling-montage.png) illustrates
the effects of denoising and strength. The original image was 512x512,
4x scaled to 2048x2048. The "original" version on the upper left was
scaled using simple pixel averaging. The remainder use the ESRGAN
upscaling algorithm at different levels of denoising and strength.
<figure markdown>
![upscaling](../assets/features/upscaling-montage.png){ width=720 }
</figure>
Both denoising and strength default to 0.75.
### Face Restoration
`-G : <facetool_strength>`
InvokeAI offers two alternative face restoration algorithms,
[GFPGAN](https://github.com/TencentARC/GFPGAN) and
[CodeFormer](https://huggingface.co/spaces/sczhou/CodeFormer). These
algorithms improve the appearance of faces, particularly eyes and
mouths. Issues with faces are less common with the latest set of
Stable Diffusion models than with the original 1.4 release, but the
restoration algorithms can still make a noticeable improvement in
certain cases. You can also apply restoration to old photographs you
upload.
This prompt argument controls the strength of the face restoration that is being
applied. Similar to upscaling, values between `0.5 to 0.8` are recommended.
To access face restoration, click the "smiley face" icon in the
toolbar above the InvokeAI image panel. You will be presented with a
dialog that offers a choice between the two algorithms and sliders that
allow you to adjust their parameters. Alternatively, you may open the
left-hand accordion panel labeled "Face Restoration" and have the
restoration algorithm of your choice applied to generated images
automatically.
You can use either one or both without any conflicts. In cases where you use
both, the image will be first upscaled and then the face restoration process
will be executed to ensure you get the highest quality facial features.
`--save_orig`
Like upscaling, there are a number of parameters that adjust the face
restoration output. GFPGAN has a single parameter, `strength`, which
controls how much the algorithm is allowed to adjust the
image. CodeFormer has two parameters, `strength`, and `fidelity`,
which together control the quality of the output image as described in
the [CodeFormer project
page](https://shangchenzhou.com/projects/CodeFormer/). Default values
are 0.75 for both parameters, which achieves a reasonable balance
between changing the image too much and not enough.
When you use either `-U` or `-G`, the final result you get is upscaled or face
modified. If you want to save the original Stable Diffusion generation, you can
use the `-save_orig` prompt argument to save the original unaffected version
too.
[This figure](../assets/features/restoration-montage.png) illustrates
the effects of adjusting GFPGAN and CodeFormer parameters.
### Example Usage
```bash
invoke> "superman dancing with a panda bear" -U 2 0.6 -G 0.4
```
This also works with img2img:
```bash
invoke> "a man wearing a pineapple hat" -I path/to/your/file.png -U 2 0.5 -G 0.6
```
<figure markdown>
![upscaling](../assets/features/restoration-montage.png){ width=720 }
</figure>
!!! note
@ -95,69 +115,8 @@ invoke> "a man wearing a pineapple hat" -I path/to/your/file.png -U 2 0.5 -G 0.6
process is complete. While the image generation is taking place, you will still be able to preview
the base images.
If you wish to stop during the image generation but want to upscale or face
restore a particular generated image, pass it again with the same prompt and
generated seed along with the `-U` and `-G` prompt arguments to perform those
actions.
## CodeFormer Support
This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).
In order to setup CodeFormer to work, you need to download the models like with
GFPGAN. You can do this either by running `invokeai-configure` or by manually
downloading the
[model file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to `ldm/invoke/restoration/codeformer/weights` folder.
You can use `-ft` prompt argument to swap between CodeFormer and the default
GFPGAN. The above mentioned `-G` prompt argument will allow you to control the
strength of the restoration effect.
### CodeFormer Usage
The following command will perform face restoration with CodeFormer instead of
the default gfpgan.
`<prompt> -G 0.8 -ft codeformer`
### Other Options
- `-cf` - cf or CodeFormer Fidelity takes values between `0` and `1`. 0 produces
high quality results but low accuracy and 1 produces lower quality results but
higher accuracy to your original face.
The following command will perform face restoration with CodeFormer. CodeFormer
will output a result that is closely matching to the input face.
`<prompt> -G 1.0 -ft codeformer -cf 0.9`
The following command will perform face restoration with CodeFormer. CodeFormer
will output a result that is the best restoration possible. This may deviate
slightly from the original face. This is an excellent option to use in
situations when there is very little facial data to work with.
`<prompt> -G 1.0 -ft codeformer -cf 0.1`
## Fixing Previously-Generated Images
It is easy to apply face restoration and/or upscaling to any
previously-generated file. Just use the syntax
`!fix path/to/file.png <options>`. For example, to apply GFPGAN at strength 0.8
and upscale 2X for a file named `./outputs/img-samples/000044.2945021133.png`,
just run:
```bash
invoke> !fix ./outputs/img-samples/000044.2945021133.png -G 0.8 -U 2
```
A new file named `000044.2945021133.fixed.png` will be created in the output
directory. Note that the `!fix` command does not replace the original file,
unlike the behavior at generate time.
## How to disable
If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the invoke.py command line with the `--no_restore` and
`--no_esrgan` options, respectively.
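For example, to launch with both postprocessors disabled (a minimal sketch; adjust the script path to match your install):
```bash
python scripts/invoke.py --no_restore --no_esrgan
```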

View File

@ -4,77 +4,12 @@ title: Prompting-Features
# :octicons-command-palette-24: Prompting-Features
## **Reading Prompts from a File**
You can automate `invoke.py` by providing a text file with the prompts you want
to run, one line per prompt. The text file must be composed with a text editor
(e.g. Notepad) and not a word processor. Each line should look like what you
would type at the invoke> prompt:
```bash
"a beautiful sunny day in the park, children playing" -n4 -C10
"stormy weather on a mountain top, goats grazing" -s100
"innovative packaging for a squid's dinner" -S137038382
```
Then pass this file's name to `invoke.py` when you invoke it:
```bash
python scripts/invoke.py --from_file "/path/to/prompts.txt"
```
You may also read a series of prompts from standard input by providing
a filename of `-`. For example, here is a python script that creates a
matrix of prompts, each one varying slightly:
```python
#!/usr/bin/env python
adjectives = ['sunny','rainy','overcast']
samplers = ['k_lms','k_euler_a','k_heun']
cfg = [7.5, 9, 11]
for adj in adjectives:
for samp in samplers:
for cg in cfg:
print(f'a {adj} day -A{samp} -C{cg}')
```
Its output looks like this (abbreviated):
```bash
a sunny day -Ak_lms -C7.5
a sunny day -Ak_lms -C9
a sunny day -Ak_lms -C11
a sunny day -Ak_euler_a -C7.5
a sunny day -Ak_euler_a -C9
...
a overcast day -Ak_heun -C9
a overcast day -Ak_heun -C11
```
To feed it to invoke.py, pass a filename of `-`:
```bash
python matrix.py | python scripts/invoke.py --from_file -
```
When the script is finished, each of the 27 combinations
of adjective, sampler and CFG will be executed.
The command-line interface provides `!fetch` and `!replay` commands
which allow you to read the prompts from a single previously-generated
image or a whole directory of them, write the prompts to a file, and
then replay them. Or you can create your own file of prompts and feed
them to the command-line client from within an interactive session.
See [Command-Line Interface](CLI.md) for details.
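A hypothetical session using these commands might look like the following (image and file names are illustrative, and the exact argument syntax may differ between releases):
```bash
invoke> !fetch ./outputs/img-samples/000011.1234567890.png prompts.txt
invoke> !replay prompts.txt
```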
---
## **Negative and Unconditioned Prompts**
Any words between a pair of square brackets will instruct Stable Diffusion to
attempt to ban the concept from the generated image.
Any words between a pair of square brackets will instruct Stable
Diffusion to attempt to ban the concept from the generated image. The
same effect is achieved by placing words in the "Negative Prompts"
textbox in the Web UI.
```text
this is a test prompt [not really] to make you understand [cool] how this works.
@ -87,7 +22,9 @@ Here's a prompt that depicts what it does.
original prompt:
`#!bash "A fantastical translucent pony made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
`#!bash "A fantastical translucent pony made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve"`
`#!bash parameters: steps=20, dimensions=512x768, CFG=7.5, Scheduler=k_euler_a, seed=1654590180`
<figure markdown>
@ -99,7 +36,8 @@ That image has a woman, so if we want the horse without a rider, we can
influence the image not to have a woman by putting [woman] in the prompt, like
this:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman]"`
(same parameters as above)
<figure markdown>
@ -110,7 +48,8 @@ this:
That's nice - but say we also don't want the image to be quite so blue. We can
add "blue" to the list of negative prompts, so it's now [woman blue]:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue]"`
(same parameters as above)
<figure markdown>
@ -121,7 +60,8 @@ add "blue" to the list of negative prompts, so it's now [woman blue]:
Getting close - but there's no sense in having a saddle when our horse doesn't
have a rider, so we'll add one more negative prompt: [woman blue saddle].
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue saddle]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue saddle]"`
(same parameters as above)
<figure markdown>
@ -261,19 +201,6 @@ Prompt2prompt `.swap()` is not compatible with xformers, which will be temporari
The `prompt2prompt` code is based off
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).
Note that `prompt2prompt` is not currently working with the runwayML inpainting
model, and may never work due to the way this model is set up. If you attempt to
use `prompt2prompt` you will get the original image back. However, since this
model is so good at inpainting, a good substitute is to use the `clipseg` text
masking option:
```bash
invoke> a fluffy cat eating a hotdog
Outputs:
[1010] outputs/000025.2182095108.png: a fluffy cat eating a hotdog
invoke> a smiling dog eating a hotdog -I 000025.2182095108.png -tm cat
```
### Escaping parentheses () and speech marks ""
If the model you are using has parentheses () or speech marks "" as part of its
@ -374,6 +301,5 @@ summoning up the concept of some sort of scifi creature? Let's find out.
Indeed, removing the word "hybrid" produces an image that is more like what we'd
expect.
In conclusion, prompt blending is great for exploring creative space, but can be
difficult to direct. A forthcoming release of InvokeAI will feature more
deterministic prompt weighting.
In conclusion, prompt blending is great for exploring creative space,
but takes some trial and error to achieve the desired effect.

View File

@ -46,11 +46,19 @@ start the front end by selecting choice (3):
```sh
Do you want to generate images using the
1. command-line
2. browser-based UI
3. textual inversion training
4. open the developer console
Please enter 1, 2, 3, or 4: [1] 3
1: Browser-based UI
2: Command-line interface
3: Run textual inversion training
4: Merge models (diffusers type only)
5: Download and install models
6: Change InvokeAI startup options
7: Re-run the configure script to fix a broken install
8: Open the developer console
9: Update InvokeAI
10: Command-line help
Q: Quit
Please enter 1-10, Q: [1]
```
From the command line, with the InvokeAI virtual environment active,

View File

@ -6,9 +6,7 @@ title: Variations
## Intro
Release 1.13 of SD-Dream adds support for image variations.
You are able to do the following:
InvokeAI's support for variations enables you to do the following:
1. Generate a series of systematic variations of an image, given a prompt. The
amount of variation from one image to the next can be controlled.
@ -30,19 +28,7 @@ The prompt we will use throughout is:
This will be indicated as `#!bash "prompt"` in the examples below.
First we let SD create a series of images in the usual way, in this case
requesting six iterations:
```bash
invoke> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
...
Outputs:
./outputs/Xena/000001.1579445059.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S1579445059
./outputs/Xena/000001.1880768722.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S1880768722
./outputs/Xena/000001.332057179.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S332057179
./outputs/Xena/000001.2224800325.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S2224800325
./outputs/Xena/000001.465250761.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S465250761
./outputs/Xena/000001.3357757885.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S3357757885
```
requesting six iterations.
<figure markdown>
![var1](../assets/variation_walkthru/000001.3357757885.png)
@ -53,22 +39,16 @@ Outputs:
## Step 2 - Generating Variations
Let's try to generate some variations. Using the same seed, we pass the argument
`-v0.2` (or `--variant_amount`), which generates a series of variations each
differing by a variation amount of 0.2. This number ranges from `0` to `1.0`,
with higher numbers being larger amounts of variation.
Let's try to generate some variations on this image. We select the "*"
symbol in the line of icons above the image in order to fix the prompt
and seed. Then we open up the "Variations" section of the generation
panel and use the slider to set the variation amount to 0.2. The
higher this value, the more each generated image will differ from the
previous one.
```bash
invoke> "prompt" -n6 -S3357757885 -v0.2
...
Outputs:
./outputs/Xena/000002.784039624.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 784039624:0.2 -S3357757885
./outputs/Xena/000002.3647897225.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.2 -S3357757885
./outputs/Xena/000002.917731034.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 917731034:0.2 -S3357757885
./outputs/Xena/000002.4116285959.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 4116285959:0.2 -S3357757885
./outputs/Xena/000002.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 1614299449:0.2 -S3357757885
./outputs/Xena/000002.1335553075.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 1335553075:0.2 -S3357757885
```
Now we run the prompt a second time, requesting six iterations. You
will see six images that are thematically related to each other. Try
increasing and decreasing the variation amount and see what happens.
### **Variation Sub Seeding**

View File

@ -299,14 +299,6 @@ initial image" icons are located.
See the [Unified Canvas Guide](UNIFIED_CANVAS.md)
## Parting remarks
This concludes the walkthrough, but there are several more features that you can
explore. Please check out the [Command Line Interface](CLI.md) documentation for
further explanation of the advanced features that were not covered here.
The WebUI is under rapid development. Check back regularly for updates!
## Reference
### Additional Options
@ -349,11 +341,9 @@ the settings configured in the toolbar.
See below for additional documentation related to each feature:
- [Core Prompt Settings](./CLI.md)
- [Variations](./VARIATIONS.md)
- [Upscaling](./POSTPROCESS.md#upscaling)
- [Image to Image](./IMG2IMG.md)
- [Inpainting](./INPAINTING.md)
- [Other](./OTHER.md)
#### Invocation Gallery

View File

@ -13,28 +13,16 @@ Build complex scenes by combining and modifying multiple images in a stepwise
fashion. This feature combines img2img, inpainting and outpainting in
a single convenient digital artist-optimized user interface.
### * The [Command Line Interface (CLI)](CLI.md)
Scriptable access to InvokeAI's features.
## Image Generation
### * [Prompt Engineering](PROMPTS.md)
Get the images you want with the InvokeAI prompt engineering language.
### * [Post-Processing](POSTPROCESS.md)
Restore mangled faces and make images larger with upscaling. Also see the [Embiggen Upscaling Guide](EMBIGGEN.md).
### * The [Concepts Library](CONCEPTS.md)
Add custom subjects and styles using HuggingFace's repository of embeddings.
### * [Image-to-Image Guide for the CLI](IMG2IMG.md)
### * [Image-to-Image Guide](IMG2IMG.md)
Use a seed image to build new creations in the CLI.
### * [Inpainting Guide for the CLI](INPAINTING.md)
Selectively erase and replace portions of an existing image in the CLI.
### * [Outpainting Guide for the CLI](OUTPAINTING.md)
Extend the borders of the image with an "outcrop" function within the CLI.
### * [Generating Variations](VARIATIONS.md)
Have an image you like and want to generate many more like it? Variations
are the ticket.

View File

@ -13,6 +13,7 @@ title: Home
<div align="center" markdown>
[![project logo](assets/invoke_ai_banner.png)](https://github.com/invoke-ai/InvokeAI)
[![discord badge]][discord link]
@ -131,17 +132,13 @@ This method is recommended for those familiar with running Docker containers
- [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
- [WebUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md)
<!-- separator -->
### The InvokeAI Command Line Interface
- [Command Line Interface Reference Guide](features/CLI.md)
<!-- separator -->
### Image Management
- [Image2Image](features/IMG2IMG.md)
- [Inpainting](features/INPAINTING.md)
- [Outpainting](features/OUTPAINTING.md)
- [Adding custom styles and subjects](features/CONCEPTS.md)
- [Upscaling and Face Reconstruction](features/POSTPROCESS.md)
- [Embiggen upscaling](features/EMBIGGEN.md)
- [Other Features](features/OTHER.md)
<!-- separator -->
@ -156,83 +153,60 @@ This method is recommended for those familiar with running Docker containers
- [Prompt Syntax](features/PROMPTS.md)
- [Generating Variations](features/VARIATIONS.md)
## :octicons-log-16: Latest Changes
## :octicons-log-16: Important Changes Since Version 2.3
### v2.3.0 <small>(9 February 2023)</small>
### Nodes
#### Migration to Stable Diffusion `diffusers` models
Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.
Previous versions of InvokeAI supported the original model file format introduced with Stable Diffusion 1.4. In the original format, known variously as "checkpoint", or "legacy" format, there is a single large weights file ending with `.ckpt` or `.safetensors`. Though this format has served the community well, it has a number of disadvantages, including file size, slow loading times, and a variety of non-standard variants that require special-case code to handle. In addition, because checkpoint files are actually a bundle of multiple machine learning sub-models, it is hard to swap different sub-models in and out, or to share common sub-models. A new format, introduced by the StabilityAI company in collaboration with HuggingFace, is called `diffusers` and consists of a directory of individual models. The most immediate benefit of `diffusers` is that they load from disk very quickly. A longer term benefit is that in the near future `diffusers` models will be able to share common sub-models, dramatically reducing disk space when you have multiple fine-tune models derived from the same base.
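For illustration, a `diffusers` model is laid out on disk roughly like this (the directory name is illustrative, and the exact set of sub-folders varies by model):
```text
stable-diffusion-1.5/
├── model_index.json     # manifest tying the sub-models together
├── scheduler/
├── text_encoder/
├── tokenizer/
├── unet/
└── vae/
```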
The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
When you perform a new install of version 2.3.0, you will be offered the option to install the `diffusers` versions of a number of popular SD models, including Stable Diffusion versions 1.5 and 2.1 (including the 768x768 pixel version of 2.1). These will act and work just like the checkpoint versions. Do not be concerned if you already have a lot of ".ckpt" or ".safetensors" models on disk! InvokeAI 2.3.0 can still load these and generate images from them without any extra intervention on your part.
### Command-Line Interface Retired
To take advantage of the optimized loading times of `diffusers` models, InvokeAI offers options to convert legacy checkpoint models into optimized `diffusers` models. If you use the `invokeai` command line interface, the relevant commands are:
The original "invokeai" command-line interface has been retired. The
`invokeai` command will now launch a new command-line client that can
be used by developers to create and test nodes. It is not intended to
be used for routine image generation or manipulation.
* `!convert_model` -- Take the path to a local checkpoint file or a URL that is pointing to one, convert it into a `diffusers` model, and import it into InvokeAI's models registry file.
* `!optimize_model` -- If you already have a checkpoint model in your InvokeAI models file, this command will accept its short name and convert it into a like-named `diffusers` model, optionally deleting the original checkpoint file.
* `!import_model` -- Take the local path of either a checkpoint file or a `diffusers` model directory and import it into InvokeAI's registry file. You may also provide the ID of any diffusers model that has been published on the [HuggingFace models repository](https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads) and it will be downloaded and installed automatically.
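A hypothetical CLI session exercising these commands might look like the following (the local path and model names are illustrative, not prescriptive):
```bash
invoke> !convert_model /path/to/custom-model.ckpt
invoke> !optimize_model custom-model
invoke> !import_model runwayml/stable-diffusion-v1-5
```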
To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.
The WebGUI offers similar functionality for model management.
### ControlNet
For advanced users, new command-line options provide additional functionality. Launching `invokeai` with the argument `--autoconvert <path to directory>` takes the path to a directory of checkpoint files, automatically converts them into `diffusers` models and imports them. Each time the script is launched, the directory will be scanned for new checkpoint files to be loaded. Alternatively, the `--ckpt_convert` argument will cause any checkpoint or safetensors model that is already registered with InvokeAI to be converted into a `diffusers` model on the fly, allowing you to take advantage of future diffusers-only features without explicitly converting the model and saving it to disk.
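For example (the directory path is illustrative):
```bash
# scan a directory at startup and convert any checkpoint files found there
invokeai --autoconvert /path/to/checkpoint/directory

# or convert registered checkpoint models on the fly, without writing them to disk
invokeai --ckpt_convert
```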
This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md)
Please see [INSTALLING MODELS](https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/) for more information on model management in both the command-line and Web interfaces.
### New Schedulers
#### Support for the `XFormers` Memory-Efficient Crossattention Package
The list of schedulers has been completely revamped and brought up to date:
On CUDA (Nvidia) systems, version 2.3.0 supports the `XFormers` library. Once installed, the `xformers` package dramatically reduces the memory footprint of loaded Stable Diffusion model files and modestly increases image generation speed. `xformers` will be installed and activated automatically if you specify a CUDA system at install time.
| **Short Name** | **Scheduler** | **Notes** |
|----------------|---------------------------------|-----------------------------|
| **ddim** | DDIMScheduler | |
| **ddpm** | DDPMScheduler | |
| **deis** | DEISMultistepScheduler | |
| **lms** | LMSDiscreteScheduler | |
| **pndm** | PNDMScheduler | |
| **heun** | HeunDiscreteScheduler | original noise schedule |
| **heun_k** | HeunDiscreteScheduler | using karras noise schedule |
| **euler** | EulerDiscreteScheduler | original noise schedule |
| **euler_k** | EulerDiscreteScheduler | using karras noise schedule |
| **kdpm_2** | KDPM2DiscreteScheduler | |
| **kdpm_2_a** | KDPM2AncestralDiscreteScheduler | |
| **dpmpp_2s** | DPMSolverSinglestepScheduler | |
| **dpmpp_2m** | DPMSolverMultistepScheduler | original noise schedule |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler | using karras noise schedule |
| **unipc** | UniPCMultistepScheduler | CPU only |
The caveat with using `xformers` is that it introduces slightly non-deterministic behavior, and images generated using the same seed and other settings will be subtly different between invocations. Generally the changes are unnoticeable unless you rapidly shift back and forth between images, but to disable `xformers` and restore fully deterministic behavior, you may launch InvokeAI using the `--no-xformers` option. This is most conveniently done by opening the file `invokeai/invokeai.init` with a text editor, and adding the line `--no-xformers` at the bottom.
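For example, to disable `xformers` for a single session:
```bash
invokeai --no-xformers
```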
#### A Negative Prompt Box in the WebUI
There is now a separate text input box for negative prompts in the WebUI. This is convenient for stashing frequently-used negative prompts ("mangled limbs, bad anatomy"). The `[negative prompt]` syntax continues to work in the main prompt box as well.
To see exactly how your prompts are being parsed, launch `invokeai` with the `--log_tokenization` option. The console window will then display the tokenization process for both positive and negative prompts.
#### Model Merging
Version 2.3.0 offers an intuitive user interface for merging up to three Stable Diffusion models. Model merging allows you to mix the behavior of models to achieve very interesting effects. To use this, each of the models must already be imported into InvokeAI and saved in `diffusers` format. Then launch the merger using a new menu item in the InvokeAI launcher script (`invoke.sh`, `invoke.bat`) or directly from the command line with `invokeai-merge --gui`. You will be prompted to select the models to merge, the proportions in which to mix them, and the mixing algorithm. The script will create a new merged `diffusers` model and import it into InvokeAI for your use.
See [MODEL MERGING](https://invoke-ai.github.io/InvokeAI/features/MODEL_MERGING/) for more details.
#### Textual Inversion Training
Textual Inversion (TI) is a technique for training a Stable Diffusion model to emit a particular subject or style when triggered by a keyword phrase. You can perform TI training by placing a small number of images of the subject or style in a directory and choosing a distinctive trigger phrase, such as "pointillist-style". After successful training, the subject or style will be activated by including `<pointillist-style>` in your prompt.
Previous versions of InvokeAI were able to perform TI, but it required using a command-line script with dozens of obscure command-line arguments. Version 2.3.0 features an intuitive TI frontend that will build a TI model on top of any `diffusers` model. To access training, launch it from a new item in the launcher script or from the command line using `invokeai-ti --gui`.
See [TEXTUAL INVERSION](https://invoke-ai.github.io/InvokeAI/features/TEXTUAL_INVERSION/) for further details.
#### A New Installer Experience
The InvokeAI installer has been upgraded in order to provide a smoother and hopefully more glitch-free experience. In addition, InvokeAI is now packaged as a PyPi project, allowing developers and power-users to install InvokeAI with the command `pip install InvokeAI --use-pep517`. Please see [Installation](#installation) for details.
Developers should be aware that the `pip` installation procedure has been simplified and that the `conda` method is no longer supported at all. Accordingly, the `environments_and_requirements` directory has been deleted from the repository.
#### Command-line name changes
All of InvokeAI's functionality, including the WebUI, command-line interface, textual inversion training and model merging, can be accessed from the `invoke.sh` and `invoke.bat` launcher scripts. The menu of options has been expanded to add the new functionality. For the convenience of developers and power users, we have normalized the names of the InvokeAI command-line scripts:
* `invokeai` -- Command-line client
* `invokeai --web` -- Web GUI
* `invokeai-merge --gui` -- Model merging script with graphical front end
* `invokeai-ti --gui` -- Textual inversion script with graphical front end
* `invokeai-configure` -- Configuration tool for initializing the `invokeai` directory and selecting popular starter models.
For backward compatibility, the old command names are also recognized, including `invoke.py` and `configure-invokeai.py`. However, these are deprecated and will eventually be removed.
Developers should be aware that the locations of the scripts' source code have been moved. The new locations are:
* `invokeai` => `ldm/invoke/CLI.py`
* `invokeai-configure` => `ldm/invoke/config/configure_invokeai.py`
* `invokeai-ti`=> `ldm/invoke/training/textual_inversion.py`
* `invokeai-merge` => `ldm/invoke/merge_diffusers`
Developers are strongly encouraged to perform an "editable" install of InvokeAI using `pip install -e . --use-pep517` in the Git repository, and then to call the scripts by their 2.3.0 names rather than executing them directly. Developers should also be aware that several important data files have been relocated into a new directory named `invokeai`. This includes the WebGUI's `frontend` and `backend` directories and the `INITIAL_MODELS.yaml` files used by the installer to select starter models. Eventually all InvokeAI modules will live in subdirectories of `invokeai`.
Please see [2.3.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v2.3.0) for further details.
For older changelogs, please visit the
**[CHANGELOG](CHANGELOG/#v223-2-december-2022)**.
Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.
## :material-target: Troubleshooting
@ -268,8 +242,3 @@ free to send me an email if you use and like the script.
Original portions of the software are Copyright (c) 2022-23
by [The InvokeAI Team](https://github.com/invoke-ai).
## :octicons-book-24: Further Reading
Please see the original README for more information on this software and
underlying algorithm, located in the file
[README-CompViz.md](other/README-CompViz.md).

View File

@ -87,18 +87,18 @@ Prior to installing PyPatchMatch, you need to take the following steps:
sudo pacman -S --needed base-devel
```
2. Install `opencv`:
2. Install `opencv` and `blas`:
```sh
sudo pacman -S opencv
sudo pacman -S opencv blas
```
or for CUDA support
```sh
sudo pacman -S opencv-cuda
sudo pacman -S opencv-cuda blas
```
3. Fix the naming of the `opencv` package configuration file:
```sh

View File

@ -38,6 +38,7 @@ echo https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist
echo.
echo See %INSTRUCTIONS% for more details.
echo.
echo "For the best user experience we suggest enlarging or maximizing this window now."
pause
@rem ---------------------------- check Python version ---------------

View File

@ -25,7 +25,8 @@ done
if [ -z "$PYTHON" ]; then
echo "A suitable Python interpreter could not be found"
echo "Please install Python 3.9 or higher before running this script. See instructions at $INSTRUCTIONS for help."
echo "Please install Python $MINIMUM_PYTHON_VERSION or higher (maximum $MAXIMUM_PYTHON_VERSION) before running this script. See instructions at $INSTRUCTIONS for help."
echo "For the best user experience we suggest enlarging or maximizing this window now."
read -p "Press any key to exit"
exit -1
fi

View File

@ -149,7 +149,7 @@ class Installer:
return venv_dir
def install(self, root: str = "~/invokeai", version: str = "latest", yes_to_all=False, find_links: Path = None) -> None:
def install(self, root: str = "~/invokeai-3", version: str = "latest", yes_to_all=False, find_links: Path = None) -> None:
"""
Install the InvokeAI application into the given runtime path

View File

@ -293,6 +293,8 @@ def introduction() -> None:
"3. Create initial configuration files.",
"",
"[i]At any point you may interrupt this program and resume later.",
"",
"[b]For the best user experience, please enlarge or maximize this window",
),
)
)

View File

@ -14,7 +14,7 @@ echo 3. Run textual inversion training
echo 4. Merge models (diffusers type only)
echo 5. Download and install models
echo 6. Change InvokeAI startup options
echo 7. Re-run the configure script to fix a broken install
echo 7. Re-run the configure script to fix a broken install or to complete a major upgrade
echo 8. Open the developer console
echo 9. Update InvokeAI
echo 10. Command-line help

View File

@ -81,7 +81,7 @@ do_choice() {
;;
7)
clear
printf "Re-run the configure script to fix a broken install\n"
printf "Re-run the configure script to fix a broken install or to complete a major upgrade\n"
invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only
;;
8)
@ -118,12 +118,12 @@ do_choice() {
do_dialog() {
options=(
1 "Generate images with a browser-based interface"
2 "Generate images using a command-line interface"
2 "Explore InvokeAI nodes using a command-line interface"
3 "Textual inversion training"
4 "Merge models (diffusers type only)"
5 "Download and install models"
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install"
7 "Re-run the configure script to fix a broken install or to complete a major upgrade"
8 "Open the developer console"
9 "Update InvokeAI")

View File

@ -5,6 +5,7 @@ from invokeai.app.services.board_record_storage import BoardChanges
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.board_record import BoardDTO
from ..dependencies import ApiDependencies
boards_router = APIRouter(prefix="/v1/boards", tags=["boards"])
@ -71,11 +72,19 @@ async def update_board(
@boards_router.delete("/{board_id}", operation_id="delete_board")
async def delete_board(
board_id: str = Path(description="The id of board to delete"),
include_images: Optional[bool] = Query(
description="Permanently delete all images on the board", default=False
),
) -> None:
"""Deletes a board"""
try:
ApiDependencies.invoker.services.boards.delete(board_id=board_id)
if include_images is True:
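# remove the board's images first so no image records are left pointing at a deleted board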
ApiDependencies.invoker.services.images.delete_images_on_board(
board_id=board_id
)
ApiDependencies.invoker.services.boards.delete(board_id=board_id)
else:
ApiDependencies.invoker.services.boards.delete(board_id=board_id)
except Exception as e:
# TODO: Does this need any exception handling at all?
pass
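# Example (hypothetical host and port; the route prefix /v1/boards is defined above):
#   DELETE http://localhost:9090/v1/boards/<board_id>?include_images=true
# deletes the board and, because include_images=true, all images it contains.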

View File

@ -1,13 +1,13 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) and 2023 Kent Keirsey (https://github.com/hipsterusername)
from typing import Annotated, Literal, Optional, Union, Dict
from typing import Literal, Optional, Union
from fastapi import Query
from fastapi.routing import APIRouter, HTTPException
from pydantic import BaseModel, Field, parse_obj_as
from ..dependencies import ApiDependencies
from invokeai.backend import BaseModelType, ModelType
from invokeai.backend.model_management.models import OPENAPI_MODEL_CONFIGS
from invokeai.backend.model_management.models import OPENAPI_MODEL_CONFIGS, SchedulerPredictionType
MODEL_CONFIGS = Union[tuple(OPENAPI_MODEL_CONFIGS)]
models_router = APIRouter(prefix="/v1/models", tags=["models"])
@ -51,11 +51,14 @@ class CreateModelResponse(BaseModel):
info: Union[CkptModelInfo, DiffusersModelInfo] = Field(discriminator="format", description="The model info")
status: str = Field(description="The status of the API response")
class ImportModelRequest(BaseModel):
name: str = Field(description="A model path, repo_id or URL to import")
prediction_type: Optional[Literal['epsilon','v_prediction','sample']] = Field(description='Prediction type for SDv2 checkpoint files')
class ConversionRequest(BaseModel):
name: str = Field(description="The name of the new model")
info: CkptModelInfo = Field(description="The converted model info")
save_location: str = Field(description="The path to save the converted model weights")
class ConvertedModelResponse(BaseModel):
name: str = Field(description="The name of the new model")
@ -105,6 +108,28 @@ async def update_model(
return model_response
@models_router.post(
"/",
operation_id="import_model",
responses={200: {"status": "success"}},
)
async def import_model(
model_request: ImportModelRequest
) -> None:
""" Add Model """
items_to_import = set([model_request.name])
prediction_types = { x.value: x for x in SchedulerPredictionType }
logger = ApiDependencies.invoker.services.logger
installed_models = ApiDependencies.invoker.services.model_manager.heuristic_import(
items_to_import = items_to_import,
prediction_type_helper = lambda x: prediction_types.get(model_request.prediction_type)
)
if len(installed_models) > 0:
logger.info(f'Successfully imported {model_request.name}')
else:
logger.error(f'Model {model_request.name} not imported')
raise HTTPException(status_code=500, detail=f'Model {model_request.name} not imported')
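# Example request (hypothetical host and port; the body fields are defined by ImportModelRequest above):
#   POST http://localhost:9090/v1/models/
#   {"name": "runwayml/stable-diffusion-v1-5", "prediction_type": "epsilon"}
# Returns 200 on success and 500 if the model could not be imported.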
@models_router.delete(
"/{model_name}",

View File

@ -18,8 +18,17 @@ config = InvokeAIAppConfig.get_config()
config.parse_args()
logger = InvokeAILogger().getLogger(config=config)
from invokeai.app.services.board_image_record_storage import (
SqliteBoardImageRecordStorage,
)
from invokeai.app.services.board_images import (
BoardImagesService,
BoardImagesServiceDependencies,
)
from invokeai.app.services.board_record_storage import SqliteBoardRecordStorage
from invokeai.app.services.boards import BoardService, BoardServiceDependencies
from invokeai.app.services.image_record_storage import SqliteImageRecordStorage
from invokeai.app.services.images import ImageService
from invokeai.app.services.images import ImageService, ImageServiceDependencies
from invokeai.app.services.metadata import CoreMetadataService
from invokeai.app.services.resource_name import SimpleNameService
from invokeai.app.services.urls import LocalUrlService
@ -230,21 +239,49 @@ def invoke_cli():
image_file_storage = DiskImageFileStorage(f"{output_folder}/images")
names = SimpleNameService()
images = ImageService(
image_record_storage=image_record_storage,
image_file_storage=image_file_storage,
metadata=metadata,
url=urls,
logger=logger,
names=names,
graph_execution_manager=graph_execution_manager,
board_record_storage = SqliteBoardRecordStorage(db_location)
board_image_record_storage = SqliteBoardImageRecordStorage(db_location)
boards = BoardService(
services=BoardServiceDependencies(
board_image_record_storage=board_image_record_storage,
board_record_storage=board_record_storage,
image_record_storage=image_record_storage,
url=urls,
logger=logger,
)
)
board_images = BoardImagesService(
services=BoardImagesServiceDependencies(
board_image_record_storage=board_image_record_storage,
board_record_storage=board_record_storage,
image_record_storage=image_record_storage,
url=urls,
logger=logger,
)
)
images = ImageService(
services=ImageServiceDependencies(
board_image_record_storage=board_image_record_storage,
image_record_storage=image_record_storage,
image_file_storage=image_file_storage,
metadata=metadata,
url=urls,
logger=logger,
names=names,
graph_execution_manager=graph_execution_manager,
)
)
services = InvocationServices(
model_manager=model_manager,
events=events,
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f'{output_folder}/latents')),
images=images,
boards=boards,
board_images=board_images,
queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](
filename=db_location, table_name="graphs"

View File

@ -65,23 +65,20 @@ class CompelInvocation(BaseInvocation):
**self.clip.text_encoder.dict(),
)
with tokenizer_info as orig_tokenizer,\
text_encoder_info as text_encoder,\
ExitStack() as stack:
text_encoder_info as text_encoder:
loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.clip.loras]
loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
name = trigger[1:-1]
try:
ti_list.append(
stack.enter_context(
context.services.model_manager.get_model(
model_name=name,
base_model=self.clip.text_encoder.base_model,
model_type=ModelType.TextualInversion,
)
)
context.services.model_manager.get_model(
model_name=name,
base_model=self.clip.text_encoder.base_model,
model_type=ModelType.TextualInversion,
).context.model
)
except Exception:
#print(e)

View File

@ -1,10 +1,11 @@
# InvokeAI nodes for ControlNet image preprocessors
# Invocations for ControlNet image preprocessors
# initial implementation by Gregg Helt, 2023
# heavily leverages controlnet_aux package: https://github.com/patrickvonplaten/controlnet_aux
from builtins import float
from builtins import float, bool
import cv2
import numpy as np
from typing import Literal, Optional, Union, List
from typing import Literal, Optional, Union, List, Dict
from PIL import Image, ImageFilter, ImageOps
from pydantic import BaseModel, Field, validator
@ -29,8 +30,13 @@ from controlnet_aux import (
ContentShuffleDetector,
ZoeDetector,
MediapipeFaceDetector,
SamDetector,
LeresDetector,
)
from controlnet_aux.util import HWC3, ade_palette
from .image import ImageOutput, PILInvocationConfig
CONTROLNET_DEFAULT_MODELS = [
@ -94,6 +100,10 @@ CONTROLNET_DEFAULT_MODELS = [
]
CONTROLNET_NAME_VALUES = Literal[tuple(CONTROLNET_DEFAULT_MODELS)]
CONTROLNET_MODE_VALUES = Literal[tuple(["balanced", "more_prompt", "more_control", "unbalanced"])]
# crop and fill options not ready yet
# CONTROLNET_RESIZE_VALUES = Literal[tuple(["just_resize", "crop_resize", "fill_resize"])]
class ControlField(BaseModel):
image: ImageField = Field(default=None, description="The control image")
@ -104,6 +114,9 @@ class ControlField(BaseModel):
description="When the ControlNet is first applied (% of total steps)")
end_step_percent: float = Field(default=1, ge=0, le=1,
description="When the ControlNet is last applied (% of total steps)")
control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode to use")
# resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@validator("control_weight")
def abs_le_one(cls, v):
"""validate that all abs(values) are <=1"""
@ -144,11 +157,11 @@ class ControlNetInvocation(BaseInvocation):
control_model: CONTROLNET_NAME_VALUES = Field(default="lllyasviel/sd-controlnet-canny",
description="control model used")
control_weight: Union[float, List[float]] = Field(default=1.0, description="The weight given to the ControlNet")
# TODO: add support in backend core for begin_step_percent, end_step_percent, guess_mode
begin_step_percent: float = Field(default=0, ge=0, le=1,
description="When the ControlNet is first applied (% of total steps)")
end_step_percent: float = Field(default=1, ge=0, le=1,
description="When the ControlNet is last applied (% of total steps)")
control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode used")
# fmt: on
class Config(InvocationConfig):
@ -166,7 +179,6 @@ class ControlNetInvocation(BaseInvocation):
}
def invoke(self, context: InvocationContext) -> ControlOutput:
return ControlOutput(
control=ControlField(
image=self.image,
@ -174,10 +186,11 @@ class ControlNetInvocation(BaseInvocation):
control_weight=self.control_weight,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
control_mode=self.control_mode,
),
)
# TODO: move image processors to separate file (image_analysis.py
class ImageProcessorInvocation(BaseInvocation, PILInvocationConfig):
"""Base class for invocations that preprocess images for ControlNet"""
@ -449,6 +462,104 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation, PILInvocationCo
# fmt: on
def run_processor(self, image):
# MediaPipeFaceDetector throws an error if image has alpha channel
# so convert to RGB if needed
if image.mode == 'RGBA':
image = image.convert('RGB')
mediapipe_face_processor = MediapipeFaceDetector()
processed_image = mediapipe_face_processor(image, max_faces=self.max_faces, min_confidence=self.min_confidence)
return processed_image
class LeresImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies leres processing to image"""
# fmt: off
type: Literal["leres_image_processor"] = "leres_image_processor"
# Inputs
thr_a: float = Field(default=0, description="Leres parameter `thr_a`")
thr_b: float = Field(default=0, description="Leres parameter `thr_b`")
boost: bool = Field(default=False, description="Whether to use boost mode")
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
# fmt: on
def run_processor(self, image):
leres_processor = LeresDetector.from_pretrained("lllyasviel/Annotators")
processed_image = leres_processor(image,
thr_a=self.thr_a,
thr_b=self.thr_b,
boost=self.boost,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution)
return processed_image
class TileResamplerProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
# fmt: off
type: Literal["tile_image_processor"] = "tile_image_processor"
# Inputs
#res: int = Field(default=512, ge=0, le=1024, description="The pixel resolution for each tile")
down_sampling_rate: float = Field(default=1.0, ge=1.0, le=8.0, description="Down sampling rate")
# fmt: on
# tile_resample copied from sd-webui-controlnet/scripts/processor.py
def tile_resample(self,
np_img: np.ndarray,
res=512, # never used?
down_sampling_rate=1.0,
):
np_img = HWC3(np_img)
if down_sampling_rate < 1.1:
return np_img
H, W, C = np_img.shape
H = int(float(H) / float(down_sampling_rate))
W = int(float(W) / float(down_sampling_rate))
np_img = cv2.resize(np_img, (W, H), interpolation=cv2.INTER_AREA)
return np_img
def run_processor(self, img):
np_img = np.array(img, dtype=np.uint8)
processed_np_image = self.tile_resample(np_img,
#res=self.tile_size,
down_sampling_rate=self.down_sampling_rate
)
processed_image = Image.fromarray(processed_np_image)
return processed_image
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies segment anything processing to image"""
# fmt: off
type: Literal["segment_anything_processor"] = "segment_anything_processor"
# fmt: on
def run_processor(self, image):
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
segment_anything_processor = SamDetectorReproducibleColors.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
np_img = np.array(image, dtype=np.uint8)
processed_image = segment_anything_processor(np_img)
return processed_image
class SamDetectorReproducibleColors(SamDetector):
# overriding SamDetector.show_anns() method to use reproducible colors for segmentation image
# base class show_anns() method randomizes colors,
# which seems to also lead to non-reproducible image generation
# so using ADE20k color palette instead
def show_anns(self, anns: List[Dict]):
if len(anns) == 0:
return
sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True)
h, w = anns[0]['segmentation'].shape
final_img = Image.fromarray(np.zeros((h, w, 3), dtype=np.uint8), mode="RGB")
palette = ade_palette()
for i, ann in enumerate(sorted_anns):
m = ann['segmentation']
img = np.empty((m.shape[0], m.shape[1], 3), dtype=np.uint8)
# doing modulo just in case number of annotated regions exceeds number of colors in palette
ann_color = palette[i % len(palette)]
img[:, :] = ann_color
final_img.paste(Image.fromarray(img, mode="RGB"), (0, 0), Image.fromarray(np.uint8(m * 255)))
return np.array(final_img, dtype=np.uint8)

View File

@ -23,7 +23,7 @@ from ...backend.stable_diffusion.diffusers_pipeline import (
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import \
PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_torch_device, torch_dtype
from ...backend.util.devices import torch_dtype
from ...backend.model_management.lora import ModelPatcher
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext)
@ -59,31 +59,12 @@ def build_latents_output(latents_name: str, latents: torch.Tensor):
height=latents.size()[2] * 8,
)
class NoiseOutput(BaseInvocationOutput):
"""Invocation noise output"""
#fmt: off
type: Literal["noise_output"] = "noise_output"
# Inputs
noise: LatentsField = Field(default=None, description="The output noise")
width: int = Field(description="The width of the noise in pixels")
height: int = Field(description="The height of the noise in pixels")
#fmt: on
def build_noise_output(latents_name: str, latents: torch.Tensor):
return NoiseOutput(
noise=LatentsField(latents_name=latents_name),
width=latents.size()[3] * 8,
height=latents.size()[2] * 8,
)
SAMPLER_NAME_VALUES = Literal[
tuple(list(SCHEDULER_MAP.keys()))
]
def get_scheduler(
context: InvocationContext,
scheduler_info: ModelInfo,
@ -105,62 +86,6 @@ def get_scheduler(
return scheduler
def get_noise(width:int, height:int, device:torch.device, seed:int = 0, latent_channels:int=4, use_mps_noise:bool=False, downsampling_factor:int = 8):
# limit noise to only the diffusion image channels, not the mask channels
input_channels = min(latent_channels, 4)
use_device = "cpu" if (use_mps_noise or device.type == "mps") else device
generator = torch.Generator(device=use_device).manual_seed(seed)
x = torch.randn(
[
1,
input_channels,
height // downsampling_factor,
width // downsampling_factor,
],
dtype=torch_dtype(device),
device=use_device,
generator=generator,
).to(device)
# if self.perlin > 0.0:
# perlin_noise = self.get_perlin_noise(
# width // self.downsampling_factor, height // self.downsampling_factor
# )
# x = (1 - self.perlin) * x + self.perlin * perlin_noise
return x
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""
type: Literal["noise"] = "noise"
# Inputs
seed: int = Field(ge=0, le=SEED_MAX, description="The seed to use", default_factory=get_random_seed)
width: int = Field(default=512, multiple_of=8, gt=0, description="The width of the resulting noise", )
height: int = Field(default=512, multiple_of=8, gt=0, description="The height of the resulting noise", )
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["latents", "noise"],
},
}
@validator("seed", pre=True)
def modulo_seed(cls, v):
"""Returns the seed modulo SEED_MAX to ensure it is within the valid range."""
return v % SEED_MAX
def invoke(self, context: InvocationContext) -> NoiseOutput:
device = torch.device(choose_torch_device())
noise = get_noise(self.width, self.height, device, self.seed)
name = f'{context.graph_execution_state_id}__{self.id}'
context.services.latents.save(name, noise)
return build_noise_output(latents_name=name, latents=noise)
# Text to image
class TextToLatentsInvocation(BaseInvocation):
"""Generates latents from conditionings."""
@ -287,19 +212,14 @@ class TextToLatentsInvocation(BaseInvocation):
control_height_resize = latents_shape[2] * 8
control_width_resize = latents_shape[3] * 8
if control_input is None:
# print("control input is None")
control_list = None
elif isinstance(control_input, list) and len(control_input) == 0:
# print("control input is empty list")
control_list = None
elif isinstance(control_input, ControlField):
# print("control input is ControlField")
control_list = [control_input]
elif isinstance(control_input, list) and len(control_input) > 0 and isinstance(control_input[0], ControlField):
# print("control input is list[ControlField]")
control_list = control_input
else:
# print("input control is unrecognized:", type(self.control))
control_list = None
if (control_list is None):
control_data = None
@ -341,12 +261,15 @@ class TextToLatentsInvocation(BaseInvocation):
# num_images_per_prompt=num_images_per_prompt,
device=control_model.device,
dtype=control_model.dtype,
control_mode=control_info.control_mode,
)
control_item = ControlNetData(model=control_model,
image_tensor=control_image,
weight=control_info.control_weight,
begin_step_percent=control_info.begin_step_percent,
end_step_percent=control_info.end_step_percent)
end_step_percent=control_info.end_step_percent,
control_mode=control_info.control_mode,
)
control_data.append(control_item)
# MultiControlNetModel has been refactored out, just need list[ControlNetData]
return control_data
@ -362,8 +285,7 @@ class TextToLatentsInvocation(BaseInvocation):
self.dispatch_progress(context, source_node_id, state)
unet_info = context.services.model_manager.get_model(**self.unet.unet.dict())
with unet_info as unet,\
ExitStack() as stack:
with unet_info as unet:
scheduler = get_scheduler(
context=context,
@ -374,7 +296,7 @@ class TextToLatentsInvocation(BaseInvocation):
pipeline = self.create_pipeline(unet, scheduler)
conditioning_data = self.get_conditioning_data(context, scheduler)
loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.unet.loras]
control_data = self.prep_control_data(
model=pipeline, context=context, control_input=self.control,
@ -438,8 +360,7 @@ class LatentsToLatentsInvocation(TextToLatentsInvocation):
**self.unet.unet.dict(),
)
with unet_info as unet,\
ExitStack() as stack:
with unet_info as unet:
scheduler = get_scheduler(
context=context,
@ -468,7 +389,7 @@ class LatentsToLatentsInvocation(TextToLatentsInvocation):
device=unet.device,
)
loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.unet.loras]
with ModelPatcher.apply_lora_unet(pipeline.unet, loras):
result_latents, result_attention_map_saver = pipeline.latents_from_embeddings(
@ -543,6 +464,7 @@ class LatentsToImageInvocation(BaseInvocation):
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate
)
return ImageOutput(

View File

@ -73,7 +73,7 @@ class PipelineModelLoaderInvocation(BaseInvocation):
base_model = self.model.base_model
model_name = self.model.model_name
model_type = ModelType.Pipeline
model_type = ModelType.Main
# TODO: not found exceptions
if not context.services.model_manager.model_exists(
@ -177,9 +177,13 @@ class LoraLoaderInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> LoraLoaderOutput:
# TODO: ui rewrite
base_model = BaseModelType.StableDiffusion1
if not context.services.model_manager.model_exists(
base_model=base_model,
model_name=self.lora_name,
model_type=SDModelType.Lora,
model_type=ModelType.Lora,
):
raise Exception(f"Unkown lora name: {self.lora_name}!")
@ -195,8 +199,9 @@ class LoraLoaderInvocation(BaseInvocation):
output.unet = copy.deepcopy(self.unet)
output.unet.loras.append(
LoraInfo(
base_model=base_model,
model_name=self.lora_name,
model_type=SDModelType.Lora,
model_type=ModelType.Lora,
submodel=None,
weight=self.weight,
)
@ -206,8 +211,9 @@ class LoraLoaderInvocation(BaseInvocation):
output.clip = copy.deepcopy(self.clip)
output.clip.loras.append(
LoraInfo(
base_model=base_model,
model_name=self.lora_name,
model_type=SDModelType.Lora,
model_type=ModelType.Lora,
submodel=None,
weight=self.weight,
)

View File

@ -0,0 +1,134 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team
import math
from typing import Literal
from pydantic import Field, validator
import torch
from invokeai.app.invocations.latent import LatentsField
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from ...backend.util.devices import choose_torch_device, torch_dtype
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InvocationConfig,
InvocationContext,
)
"""
Utilities
"""
def get_noise(
width: int,
height: int,
device: torch.device,
seed: int = 0,
latent_channels: int = 4,
downsampling_factor: int = 8,
use_cpu: bool = True,
perlin: float = 0.0,
):
"""Generate noise for a given image size."""
noise_device_type = "cpu" if (use_cpu or device.type == "mps") else device.type
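# create the noise on the CPU when requested (for cross-platform reproducibility) or when the
# active device is MPS, then move it to the target device in the .to(device) call below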
# limit noise to only the diffusion image channels, not the mask channels
input_channels = min(latent_channels, 4)
generator = torch.Generator(device=noise_device_type).manual_seed(seed)
noise_tensor = torch.randn(
[
1,
input_channels,
height // downsampling_factor,
width // downsampling_factor,
],
dtype=torch_dtype(device),
device=noise_device_type,
generator=generator,
).to(device)
return noise_tensor
"""
Nodes
"""
class NoiseOutput(BaseInvocationOutput):
"""Invocation noise output"""
# fmt: off
type: Literal["noise_output"] = "noise_output"
# Inputs
noise: LatentsField = Field(default=None, description="The output noise")
width: int = Field(description="The width of the noise in pixels")
height: int = Field(description="The height of the noise in pixels")
# fmt: on
def build_noise_output(latents_name: str, latents: torch.Tensor):
return NoiseOutput(
noise=LatentsField(latents_name=latents_name),
width=latents.size()[3] * 8,
height=latents.size()[2] * 8,
)
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""
type: Literal["noise"] = "noise"
# Inputs
seed: int = Field(
ge=0,
le=SEED_MAX,
description="The seed to use",
default_factory=get_random_seed,
)
width: int = Field(
default=512,
multiple_of=8,
gt=0,
description="The width of the resulting noise",
)
height: int = Field(
default=512,
multiple_of=8,
gt=0,
description="The height of the resulting noise",
)
use_cpu: bool = Field(
default=True,
description="Use CPU for noise generation (for reproducible results across platforms)",
)
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["latents", "noise"],
},
}
@validator("seed", pre=True)
def modulo_seed(cls, v):
"""Returns the seed modulo SEED_MAX to ensure it is within the valid range."""
return v % SEED_MAX
def invoke(self, context: InvocationContext) -> NoiseOutput:
noise = get_noise(
width=self.width,
height=self.height,
device=choose_torch_device(),
seed=self.seed,
use_cpu=self.use_cpu,
)
name = f"{context.graph_execution_state_id}__{self.id}"
context.services.latents.save(name, noise)
return build_noise_output(latents_name=name, latents=noise)

View File

@ -133,20 +133,19 @@ class StepParamEasingInvocation(BaseInvocation):
postlist = list(num_poststeps * [self.post_end_value])
if log_diagnostics:
logger = InvokeAILogger.getLogger(name="StepParamEasing")
logger.debug("start_step: " + str(start_step))
logger.debug("end_step: " + str(end_step))
logger.debug("num_easing_steps: " + str(num_easing_steps))
logger.debug("num_presteps: " + str(num_presteps))
logger.debug("num_poststeps: " + str(num_poststeps))
logger.debug("prelist size: " + str(len(prelist)))
logger.debug("postlist size: " + str(len(postlist)))
logger.debug("prelist: " + str(prelist))
logger.debug("postlist: " + str(postlist))
context.services.logger.debug("start_step: " + str(start_step))
context.services.logger.debug("end_step: " + str(end_step))
context.services.logger.debug("num_easing_steps: " + str(num_easing_steps))
context.services.logger.debug("num_presteps: " + str(num_presteps))
context.services.logger.debug("num_poststeps: " + str(num_poststeps))
context.services.logger.debug("prelist size: " + str(len(prelist)))
context.services.logger.debug("postlist size: " + str(len(postlist)))
context.services.logger.debug("prelist: " + str(prelist))
context.services.logger.debug("postlist: " + str(postlist))
easing_class = EASING_FUNCTIONS_MAP[self.easing]
if log_diagnostics:
logger.debug("easing class: " + str(easing_class))
context.services.logger.debug("easing class: " + str(easing_class))
easing_list = list()
if self.mirror: # "expected" mirroring
# if number of steps is even, squeeze duration down to (number_of_steps)/2
@@ -156,7 +155,7 @@ class StepParamEasingInvocation(BaseInvocation):
# but if even then number_of_steps/2 === ceil(number_of_steps/2), so can just use ceil always
base_easing_duration = int(np.ceil(num_easing_steps/2.0))
if log_diagnostics: logger.debug("base easing duration: " + str(base_easing_duration))
if log_diagnostics: context.services.logger.debug("base easing duration: " + str(base_easing_duration))
even_num_steps = (num_easing_steps % 2 == 0) # even number of steps
easing_function = easing_class(start=self.start_value,
end=self.end_value,
@@ -166,14 +165,14 @@ class StepParamEasingInvocation(BaseInvocation):
easing_val = easing_function.ease(step_index)
base_easing_vals.append(easing_val)
if log_diagnostics:
logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(easing_val))
context.services.logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(easing_val))
if even_num_steps:
mirror_easing_vals = list(reversed(base_easing_vals))
else:
mirror_easing_vals = list(reversed(base_easing_vals[0:-1]))
if log_diagnostics:
logger.debug("base easing vals: " + str(base_easing_vals))
logger.debug("mirror easing vals: " + str(mirror_easing_vals))
context.services.logger.debug("base easing vals: " + str(base_easing_vals))
context.services.logger.debug("mirror easing vals: " + str(mirror_easing_vals))
easing_list = base_easing_vals + mirror_easing_vals
# FIXME: add alt_mirror option (alternative to default or mirror), or remove entirely
@@ -206,12 +205,12 @@ class StepParamEasingInvocation(BaseInvocation):
step_val = easing_function.ease(step_index)
easing_list.append(step_val)
if log_diagnostics:
logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(step_val))
context.services.logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(step_val))
if log_diagnostics:
logger.debug("prelist size: " + str(len(prelist)))
logger.debug("easing_list size: " + str(len(easing_list)))
logger.debug("postlist size: " + str(len(postlist)))
context.services.logger.debug("prelist size: " + str(len(prelist)))
context.services.logger.debug("easing_list size: " + str(len(easing_list)))
context.services.logger.debug("postlist size: " + str(len(postlist)))
param_list = prelist + easing_list + postlist
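The mirroring logic above builds a ramp over ceil(n/2) steps and reflects it, dropping the duplicated peak when the step count is odd. A worked sketch, using a hypothetical linear stand-in for the easing classes:

import numpy as np

def mirrored_ramp(num_steps: int, start: float = 0.0, end: float = 1.0) -> list:
    # Ramp up over ceil(n/2) steps, then mirror back down; when n is odd,
    # the peak value is not repeated (as in the hunk above).
    half = int(np.ceil(num_steps / 2.0))
    denom = max(half - 1, 1)
    base = [start + (end - start) * i / denom for i in range(half)]
    mirror = list(reversed(base)) if num_steps % 2 == 0 else list(reversed(base[:-1]))
    return base + mirror

print(mirrored_ramp(5))  # [0.0, 0.5, 1.0, 0.5, 0.0]
print(mirrored_ramp(6))  # [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]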

View File

@@ -15,7 +15,7 @@ InvokeAI:
conf_path: configs/models.yaml
legacy_conf_dir: configs/stable-diffusion
outdir: outputs
autoconvert_dir: null
autoimport_dir: null
Models:
model: stable-diffusion-1.5
embeddings: true
@@ -367,16 +367,19 @@ setting environment variables INVOKEAI_<setting>.
always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", category='Memory/Performance')
free_gpu_mem : bool = Field(default=False, description="If true, purge model from GPU after each generation.", category='Memory/Performance')
max_loaded_models : int = Field(default=2, gt=0, description="Maximum number of models to keep in memory for rapid switching", category='Memory/Performance')
max_loaded_models : int = Field(default=3, gt=0, description="Maximum number of models to keep in memory for rapid switching", category='Memory/Performance')
precision : Literal[tuple(['auto','float16','float32','autocast'])] = Field(default='float16',description='Floating point precision', category='Memory/Performance')
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", category='Memory/Performance')
xformers_enabled : bool = Field(default=True, description="Enable/disable memory-efficient attention", category='Memory/Performance')
tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", category='Memory/Performance')
root : Path = Field(default=_find_root(), description='InvokeAI runtime root directory', category='Paths')
autoconvert_dir : Path = Field(default=None, description='Path to a directory of ckpt files to be converted into diffusers and imported on startup.', category='Paths')
autoimport_dir : Path = Field(default='autoimport/main', description='Path to a directory of models files to be imported on startup.', category='Paths')
lora_dir : Path = Field(default='autoimport/lora', description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', category='Paths')
embedding_dir : Path = Field(default='autoimport/embedding', description='Path to a directory of Textual Inversion embeddings to be imported on startup.', category='Paths')
controlnet_dir : Path = Field(default='autoimport/controlnet', description='Path to a directory of ControlNet embeddings to be imported on startup.', category='Paths')
conf_path : Path = Field(default='configs/models.yaml', description='Path to models definition file', category='Paths')
models_dir : Path = Field(default='./models', description='Path to the models directory', category='Paths')
models_dir : Path = Field(default='models', description='Path to the models directory', category='Paths')
legacy_conf_dir : Path = Field(default='configs/stable-diffusion', description='Path to directory of legacy checkpoint config files', category='Paths')
db_dir : Path = Field(default='databases', description='Path to InvokeAI databases directory', category='Paths')
outdir : Path = Field(default='outputs', description='Default folder for output images', category='Paths')
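Net effect of this hunk: autoconvert_dir gives way to per-type autoimport directories defaulting under autoimport/, max_loaded_models rises from 2 to 3, and models_dir drops its ./ prefix. A hedged sketch of reading the resulting settings, using only accessors that appear elsewhere in this diff (get_config, parse_args, root_path):

from invokeai.app.services.config import InvokeAIAppConfig

config = InvokeAIAppConfig.get_config()
config.parse_args(argv=[])  # merge invokeai.yaml and environment overrides

# Relative Paths fields are resolved against the runtime root.
print(config.root_path / config.autoimport_dir)  # <root>/autoimport/main
print(config.root_path / config.lora_dir)        # <root>/autoimport/lora
print(config.max_loaded_models)                  # 3 unless overridden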

View File

@@ -1,4 +1,5 @@
from ..invocations.latent import LatentsToImageInvocation, NoiseInvocation, TextToLatentsInvocation
from ..invocations.latent import LatentsToImageInvocation, TextToLatentsInvocation
from ..invocations.noise import NoiseInvocation
from ..invocations.compel import CompelInvocation
from ..invocations.params import ParamIntInvocation
from .graph import Edge, EdgeConnection, ExposedNodeInput, ExposedNodeOutput, Graph, LibraryGraph

View File

@@ -85,8 +85,10 @@ class DiskImageFileStorage(ImageFileStorageBase):
self.__cache_ids = Queue()
self.__max_cache_size = 10 # TODO: get this from config
self.__output_folder: Path = output_folder if isinstance(output_folder, Path) else Path(output_folder)
self.__thumbnails_folder = self.__output_folder / 'thumbnails'
self.__output_folder: Path = (
output_folder if isinstance(output_folder, Path) else Path(output_folder)
)
self.__thumbnails_folder = self.__output_folder / "thumbnails"
# Validate required output folders at launch
self.__validate_storage_folders()
@@ -94,7 +96,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
def get(self, image_name: str) -> PILImageType:
try:
image_path = self.get_path(image_name)
cache_item = self.__get_cache(image_path)
if cache_item:
return cache_item
@@ -155,7 +157,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
# TODO: make this a bit more flexible for e.g. cloud storage
def get_path(self, image_name: str, thumbnail: bool = False) -> Path:
path = self.__output_folder / image_name
if thumbnail:
thumbnail_name = get_thumbnail_name(image_name)
path = self.__thumbnails_folder / thumbnail_name
@@ -166,7 +168,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
"""Validates the path given for an image or thumbnail."""
path = path if isinstance(path, Path) else Path(path)
return path.exists()
def __validate_storage_folders(self) -> None:
"""Checks if the required output folders exist and create them if they don't"""
folders: list[Path] = [self.__output_folder, self.__thumbnails_folder]
@@ -179,7 +181,9 @@ class DiskImageFileStorage(ImageFileStorageBase):
def __set_cache(self, image_name: Path, image: PILImageType):
if not image_name in self.__cache:
self.__cache[image_name] = image
self.__cache_ids.put(image_name) # TODO: this should refresh position for LRU cache
self.__cache_ids.put(
image_name
) # TODO: this should refresh position for LRU cache
if len(self.__cache) > self.__max_cache_size:
cache_id = self.__cache_ids.get()
if cache_id in self.__cache:
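The TODO above is real: a Queue evicts in strict FIFO order, so a frequently-read image can be evicted while stale entries survive. One conventional fix, sketched with collections.OrderedDict (a suggestion, not the class's current behavior):

from collections import OrderedDict
from typing import Optional

class LRUCache:
    def __init__(self, max_size: int = 10):
        self._max_size = max_size
        self._items: "OrderedDict[str, object]" = OrderedDict()

    def get(self, key: str) -> Optional[object]:
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # refresh recency on every hit
        return self._items[key]

    def set(self, key: str, value: object) -> None:
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self._max_size:
            self._items.popitem(last=False)  # evict the least recently used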

View File

@@ -94,6 +94,11 @@ class ImageRecordStorageBase(ABC):
"""Deletes an image record."""
pass
@abstractmethod
def delete_many(self, image_names: list[str]) -> None:
"""Deletes many image records."""
pass
@abstractmethod
def save(
self,
@@ -385,6 +390,25 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
finally:
self._lock.release()
def delete_many(self, image_names: list[str]) -> None:
try:
placeholders = ",".join("?" for _ in image_names)
self._lock.acquire()
# Construct the SQLite query with the placeholders
query = f"DELETE FROM images WHERE image_name IN ({placeholders})"
# Execute the query with the list of IDs as parameters
self._cursor.execute(query, image_names)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise ImageRecordDeleteException from e
finally:
self._lock.release()
def save(
self,
image_name: str,
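delete_many above builds one ? placeholder per image name instead of interpolating the names into the SQL text, so the values travel as bind parameters. The same pattern in isolation (note an empty list would need a guard, since IN () is invalid SQLite):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (image_name TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO images VALUES (?)", [("a.png",), ("b.png",), ("c.png",)])

names = ["a.png", "c.png"]
placeholders = ",".join("?" for _ in names)  # "?,?"
conn.execute(f"DELETE FROM images WHERE image_name IN ({placeholders})", names)
conn.commit()
print(conn.execute("SELECT image_name FROM images").fetchall())  # [('b.png',)]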

View File

@@ -112,6 +112,11 @@ class ImageServiceABC(ABC):
"""Deletes an image."""
pass
@abstractmethod
def delete_images_on_board(self, board_id: str):
"""Deletes all images on a board."""
pass
class ImageServiceDependencies:
"""Service dependencies for the ImageService."""
@@ -341,6 +346,28 @@ class ImageService(ImageServiceABC):
self._services.logger.error("Problem deleting image record and file")
raise e
def delete_images_on_board(self, board_id: str):
try:
images = self._services.board_image_records.get_images_for_board(board_id)
image_name_list = list(
map(
lambda r: r.image_name,
images.items,
)
)
for image_name in image_name_list:
self._services.image_files.delete(image_name)
self._services.image_records.delete_many(image_name_list)
except ImageRecordDeleteException:
self._services.logger.error(f"Failed to delete image records")
raise
except ImageFileDeleteException:
self._services.logger.error(f"Failed to delete image files")
raise
except Exception as e:
self._services.logger.error("Problem deleting image records and files")
raise e
def _get_metadata(
self, session_id: Optional[str] = None, node_id: Optional[str] = None
) -> Union[ImageMetadata, None]:

View File

@@ -168,6 +168,8 @@ class ModelManagerService(ModelManagerServiceBase):
logger.debug(f'config file={config_file}')
device = torch.device(choose_torch_device())
logger.debug(f'GPU device = {device}')
precision = config.precision
if precision == "auto":
precision = choose_precision(device)
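With precision set to "auto", the concrete dtype is chosen only once the device is known. A hypothetical approximation of that dispatch (the real choose_precision in the backend utilities may differ, e.g. for GPUs with poor half-precision support):

import torch

def choose_precision_sketch(device: torch.device) -> str:
    # Assumption for illustration: half precision on CUDA, full elsewhere.
    return "float16" if device.type == "cuda" else "float32"

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
precision = "auto"
if precision == "auto":
    precision = choose_precision_sketch(device)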

View File

@@ -7,8 +7,6 @@
# Coauthor: Kevin Turner http://github.com/keturn
#
import sys
print("Loading Python libraries...\n",file=sys.stderr)
import argparse
import io
import os
@@ -16,6 +14,7 @@ import shutil
import textwrap
import traceback
import warnings
import yaml
from argparse import Namespace
from pathlib import Path
from shutil import get_terminal_size
@@ -25,6 +24,7 @@ from urllib import request
import npyscreen
import transformers
from diffusers import AutoencoderKL
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from huggingface_hub import HfFolder
from huggingface_hub import login as hf_hub_login
from omegaconf import OmegaConf
@@ -34,6 +34,8 @@ from transformers import (
CLIPSegForImageSegmentation,
CLIPTextModel,
CLIPTokenizer,
AutoFeatureExtractor,
BertTokenizerFast,
)
import invokeai.configs as configs
@@ -43,6 +45,7 @@ from invokeai.app.services.config import (
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.frontend.install.model_install import addModelsForm, process_and_execute
from invokeai.frontend.install.widgets import (
SingleSelectColumns,
CenteredButtonPress,
IntTitleSlider,
set_min_terminal_size,
@@ -52,12 +55,13 @@ from invokeai.frontend.install.widgets import (
)
from invokeai.backend.install.legacy_arg_parsing import legacy_parser
from invokeai.backend.install.model_install_backend import (
default_dataset,
download_from_hf,
hf_download_with_resume,
recommended_datasets,
UserSelections,
hf_download_from_pretrained,
InstallSelections,
ModelInstall,
)
from invokeai.backend.model_management.model_probe import (
ModelType, BaseModelType
)
warnings.filterwarnings("ignore")
transformers.logging.set_verbosity_error()
@@ -73,7 +77,7 @@ Weights_dir = "ldm/stable-diffusion-v1/"
Default_config_file = config.model_conf_path
SD_Configs = config.legacy_conf_path
PRECISION_CHOICES = ['auto','float16','float32','autocast']
PRECISION_CHOICES = ['auto','float16','float32']
INIT_FILE_PREAMBLE = """# InvokeAI initialization file
# This is the InvokeAI initialization file, which contains command-line default values.
@@ -81,7 +85,7 @@ INIT_FILE_PREAMBLE = """# InvokeAI initialization file
# or renaming it and then running invokeai-configure again.
"""
logger=None
logger=InvokeAILogger.getLogger()
# --------------------------------------------
def postscript(errors: None):
@@ -162,75 +166,91 @@ class ProgressBar:
# ---------------------------------------------
def download_with_progress_bar(model_url: str, model_dest: str, label: str = "the"):
try:
print(f"Installing {label} model file {model_url}...", end="", file=sys.stderr)
logger.info(f"Installing {label} model file {model_url}...")
if not os.path.exists(model_dest):
os.makedirs(os.path.dirname(model_dest), exist_ok=True)
request.urlretrieve(
model_url, model_dest, ProgressBar(os.path.basename(model_dest))
)
print("...downloaded successfully", file=sys.stderr)
logger.info("...downloaded successfully")
else:
print("...exists", file=sys.stderr)
logger.info("...exists")
except Exception:
print("...download failed", file=sys.stderr)
print(f"Error downloading {label} model", file=sys.stderr)
logger.info("...download failed")
logger.info(f"Error downloading {label} model")
print(traceback.format_exc(), file=sys.stderr)
# ---------------------------------------------
# this will preload the Bert tokenizer files
def download_bert():
print("Installing bert tokenizer...", file=sys.stderr)
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=DeprecationWarning)
from transformers import BertTokenizerFast
def download_conversion_models():
target_dir = config.root_path / 'models/core/convert'
kwargs = dict() # for future use
try:
logger.info('Downloading core tokenizers and text encoders')
download_from_hf(BertTokenizerFast, "bert-base-uncased")
# bert
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=DeprecationWarning)
bert = BertTokenizerFast.from_pretrained("bert-base-uncased", **kwargs)
bert.save_pretrained(target_dir / 'bert-base-uncased', safe_serialization=True)
# sd-1
repo_id = 'openai/clip-vit-large-patch14'
hf_download_from_pretrained(CLIPTokenizer, repo_id, target_dir / 'clip-vit-large-patch14')
hf_download_from_pretrained(CLIPTextModel, repo_id, target_dir / 'clip-vit-large-patch14')
# sd-2
repo_id = "stabilityai/stable-diffusion-2"
pipeline = CLIPTokenizer.from_pretrained(repo_id, subfolder="tokenizer", **kwargs)
pipeline.save_pretrained(target_dir / 'stable-diffusion-2-clip' / 'tokenizer', safe_serialization=True)
# ---------------------------------------------
def download_sd1_clip():
print("Installing SD1 clip model...", file=sys.stderr)
version = "openai/clip-vit-large-patch14"
download_from_hf(CLIPTokenizer, version)
download_from_hf(CLIPTextModel, version)
pipeline = CLIPTextModel.from_pretrained(repo_id, subfolder="text_encoder", **kwargs)
pipeline.save_pretrained(target_dir / 'stable-diffusion-2-clip' / 'text_encoder', safe_serialization=True)
# VAE
logger.info('Downloading stable diffusion VAE')
vae = AutoencoderKL.from_pretrained('stabilityai/sd-vae-ft-mse', **kwargs)
vae.save_pretrained(target_dir / 'sd-vae-ft-mse', safe_serialization=True)
# ---------------------------------------------
def download_sd2_clip():
version = "stabilityai/stable-diffusion-2"
print("Installing SD2 clip model...", file=sys.stderr)
download_from_hf(CLIPTokenizer, version, subfolder="tokenizer")
download_from_hf(CLIPTextModel, version, subfolder="text_encoder")
# safety checking
logger.info('Downloading safety checker')
repo_id = "CompVis/stable-diffusion-safety-checker"
pipeline = AutoFeatureExtractor.from_pretrained(repo_id,**kwargs)
pipeline.save_pretrained(target_dir / 'stable-diffusion-safety-checker', safe_serialization=True)
pipeline = StableDiffusionSafetyChecker.from_pretrained(repo_id,**kwargs)
pipeline.save_pretrained(target_dir / 'stable-diffusion-safety-checker', safe_serialization=True)
except KeyboardInterrupt:
raise
except Exception as e:
logger.error(str(e))
# ---------------------------------------------
def download_realesrgan():
print("Installing models from RealESRGAN...", file=sys.stderr)
logger.info("Installing models from RealESRGAN...")
model_url = "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth"
wdn_model_url = "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth"
model_dest = config.root_path / "models/realesrgan/realesr-general-x4v3.pth"
wdn_model_dest = config.root_path / "models/realesrgan/realesr-general-wdn-x4v3.pth"
model_dest = config.root_path / "models/core/upscaling/realesrgan/realesr-general-x4v3.pth"
wdn_model_dest = config.root_path / "models/core/upscaling/realesrgan/realesr-general-wdn-x4v3.pth"
download_with_progress_bar(model_url, str(model_dest), "RealESRGAN")
download_with_progress_bar(wdn_model_url, str(wdn_model_dest), "RealESRGANwdn")
def download_gfpgan():
print("Installing GFPGAN models...", file=sys.stderr)
logger.info("Installing GFPGAN models...")
for model in (
[
"https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth",
"./models/gfpgan/GFPGANv1.4.pth",
"./models/core/face_restoration/gfpgan/GFPGANv1.4.pth",
],
[
"https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth",
"./models/gfpgan/weights/detection_Resnet50_Final.pth",
"./models/core/face_restoration/gfpgan/weights/detection_Resnet50_Final.pth",
],
[
"https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth",
"./models/gfpgan/weights/parsing_parsenet.pth",
"./models/core/face_restoration/gfpgan/weights/parsing_parsenet.pth",
],
):
model_url, model_dest = model[0], config.root_path / model[1]
@@ -239,70 +259,32 @@ def download_gfpgan():
# ---------------------------------------------
def download_codeformer():
print("Installing CodeFormer model file...", file=sys.stderr)
logger.info("Installing CodeFormer model file...")
model_url = (
"https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth"
)
model_dest = config.root_path / "models/codeformer/codeformer.pth"
model_dest = config.root_path / "models/core/face_restoration/codeformer/codeformer.pth"
download_with_progress_bar(model_url, str(model_dest), "CodeFormer")
# ---------------------------------------------
def download_clipseg():
print("Installing clipseg model for text-based masking...", file=sys.stderr)
logger.info("Installing clipseg model for text-based masking...")
CLIPSEG_MODEL = "CIDAS/clipseg-rd64-refined"
try:
download_from_hf(AutoProcessor, CLIPSEG_MODEL)
download_from_hf(CLIPSegForImageSegmentation, CLIPSEG_MODEL)
hf_download_from_pretrained(AutoProcessor, CLIPSEG_MODEL, config.root_path / 'models/core/misc/clipseg')
hf_download_from_pretrained(CLIPSegForImageSegmentation, CLIPSEG_MODEL, config.root_path / 'models/core/misc/clipseg')
except Exception:
print("Error installing clipseg model:")
print(traceback.format_exc())
logger.info("Error installing clipseg model:")
logger.info(traceback.format_exc())
# -------------------------------------
def download_safety_checker():
print("Installing model for NSFW content detection...", file=sys.stderr)
try:
from diffusers.pipelines.stable_diffusion.safety_checker import (
StableDiffusionSafetyChecker,
)
from transformers import AutoFeatureExtractor
except ModuleNotFoundError:
print("Error installing NSFW checker model:")
print(traceback.format_exc())
return
safety_model_id = "CompVis/stable-diffusion-safety-checker"
print("AutoFeatureExtractor...", file=sys.stderr)
download_from_hf(AutoFeatureExtractor, safety_model_id)
print("StableDiffusionSafetyChecker...", file=sys.stderr)
download_from_hf(StableDiffusionSafetyChecker, safety_model_id)
# -------------------------------------
def download_vaes():
print("Installing stabilityai VAE...", file=sys.stderr)
try:
# first the diffusers version
repo_id = "stabilityai/sd-vae-ft-mse"
args = dict(
cache_dir=config.cache_dir,
)
if not AutoencoderKL.from_pretrained(repo_id, **args):
raise Exception(f"download of {repo_id} failed")
repo_id = "stabilityai/sd-vae-ft-mse-original"
model_name = "vae-ft-mse-840000-ema-pruned.ckpt"
# next the legacy checkpoint version
if not hf_download_with_resume(
repo_id=repo_id,
model_name=model_name,
model_dir=str(config.root_path / Model_dir / Weights_dir),
):
raise Exception(f"download of {model_name} failed")
except Exception as e:
print(f"Error downloading StabilityAI standard VAE: {str(e)}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
def download_support_models():
download_realesrgan()
download_gfpgan()
download_codeformer()
download_clipseg()
download_conversion_models()
# -------------------------------------
def get_root(root: str = None) -> str:
@@ -378,9 +360,7 @@ Use cursor arrows to make a checkbox selection, and space to toggle.
scroll_exit=True,
)
self.nextrely += 1
label = """If you have an account at HuggingFace you may optionally paste your access token here
to allow InvokeAI to download restricted styles & subjects from the "Concept Library". See https://huggingface.co/settings/tokens.
"""
label = """HuggingFace access token (OPTIONAL) for automatic model downloads. See https://huggingface.co/settings/tokens."""
for line in textwrap.wrap(label,width=window_width-6):
self.add_widget_intelligent(
npyscreen.FixedText,
@@ -442,6 +422,7 @@ to allow InvokeAI to download restricted styles & subjects from the "Concept Lib
)
self.precision = self.add_widget_intelligent(
npyscreen.TitleSelectOne,
columns = 2,
name="Precision",
values=PRECISION_CHOICES,
value=PRECISION_CHOICES.index(precision),
@@ -465,39 +446,19 @@ to allow InvokeAI to download restricted styles & subjects from the "Concept Lib
editable=False,
color="CONTROL",
)
self.embedding_dir = self.add_widget_intelligent(
npyscreen.TitleFilename,
name=" Textual Inversion Embeddings:",
value=str(default_embedding_dir()),
select_dir=True,
must_exist=False,
use_two_lines=False,
labelColor="GOOD",
begin_entry_at=32,
scroll_exit=True,
)
self.lora_dir = self.add_widget_intelligent(
npyscreen.TitleFilename,
name=" LoRA and LyCORIS:",
value=str(default_lora_dir()),
select_dir=True,
must_exist=False,
use_two_lines=False,
labelColor="GOOD",
begin_entry_at=32,
scroll_exit=True,
)
self.controlnet_dir = self.add_widget_intelligent(
npyscreen.TitleFilename,
name=" ControlNets:",
value=str(default_controlnet_dir()),
select_dir=True,
must_exist=False,
use_two_lines=False,
labelColor="GOOD",
begin_entry_at=32,
scroll_exit=True,
)
self.autoimport_dirs = {}
for description, config_name, path in autoimport_paths(old_opts):
self.autoimport_dirs[config_name] = self.add_widget_intelligent(
npyscreen.TitleFilename,
name=description+':',
value=str(path),
select_dir=True,
must_exist=False,
use_two_lines=False,
labelColor="GOOD",
begin_entry_at=32,
scroll_exit=True
)
self.nextrely += 1
self.add_widget_intelligent(
npyscreen.TitleFixedText,
@@ -562,10 +523,6 @@ https://huggingface.co/spaces/CompVis/stable-diffusion-license
bad_fields.append(
f"The output directory does not seem to be valid. Please check that {str(Path(opt.outdir).parent)} is an existing directory."
)
if not Path(opt.embedding_dir).parent.exists():
bad_fields.append(
f"The embedding directory does not seem to be valid. Please check that {str(Path(opt.embedding_dir).parent)} is an existing directory."
)
if len(bad_fields) > 0:
message = "The following problems were detected and must be corrected:\n"
for problem in bad_fields:
@@ -585,12 +542,15 @@ https://huggingface.co/spaces/CompVis/stable-diffusion-license
"max_loaded_models",
"xformers_enabled",
"always_use_cpu",
"embedding_dir",
"lora_dir",
"controlnet_dir",
]:
setattr(new_opts, attr, getattr(self, attr).value)
for attr in self.autoimport_dirs:
directory = Path(self.autoimport_dirs[attr].value)
if directory.is_relative_to(config.root_path):
directory = directory.relative_to(config.root_path)
setattr(new_opts, attr, directory)
new_opts.hf_token = self.hf_token.value
new_opts.license_acceptance = self.license_acceptance.value
new_opts.precision = PRECISION_CHOICES[self.precision.value[0]]
@@ -607,7 +567,8 @@ class EditOptApplication(npyscreen.NPSAppManaged):
self.program_opts = program_opts
self.invokeai_opts = invokeai_opts
self.user_cancelled = False
self.user_selections = default_user_selections(program_opts)
self.autoload_pending = True
self.install_selections = default_user_selections(program_opts)
def onStart(self):
npyscreen.setTheme(npyscreen.Themes.DefaultTheme)
@@ -642,41 +603,62 @@ def default_startup_options(init_file: Path) -> Namespace:
opts.nsfw_checker = True
return opts
def default_user_selections(program_opts: Namespace) -> UserSelections:
return UserSelections(
install_models=default_dataset()
def default_user_selections(program_opts: Namespace) -> InstallSelections:
installer = ModelInstall(config)
models = installer.all_models()
return InstallSelections(
install_models=[models[installer.default_model()].path or models[installer.default_model()].repo_id]
if program_opts.default_only
else recommended_datasets()
else [models[x].path or models[x].repo_id for x in installer.recommended_models()]
if program_opts.yes_to_all
else dict(),
purge_deleted_models=False,
scan_directory=None,
autoscan_on_startup=None,
else list(),
# scan_directory=None,
# autoscan_on_startup=None,
)
# -------------------------------------
def autoimport_paths(config: InvokeAIAppConfig):
return [
('Checkpoints & diffusers models', 'autoimport_dir', config.root_path / config.autoimport_dir),
('LoRA/LyCORIS models', 'lora_dir', config.root_path / config.lora_dir),
('Controlnet models', 'controlnet_dir', config.root_path / config.controlnet_dir),
('Textual Inversion Embeddings', 'embedding_dir', config.root_path / config.embedding_dir),
]
# -------------------------------------
def initialize_rootdir(root: Path, yes_to_all: bool = False):
print("** INITIALIZING INVOKEAI RUNTIME DIRECTORY **")
logger.info("** INITIALIZING INVOKEAI RUNTIME DIRECTORY **")
for name in (
"models",
"configs",
"embeddings",
"databases",
"loras",
"controlnets",
"text-inversion-output",
"text-inversion-training-data",
"configs"
):
os.makedirs(os.path.join(root, name), exist_ok=True)
for model_type in ModelType:
Path(root, 'autoimport', model_type.value).mkdir(parents=True, exist_ok=True)
configs_src = Path(configs.__path__[0])
configs_dest = root / "configs"
if not os.path.samefile(configs_src, configs_dest):
shutil.copytree(configs_src, configs_dest, dirs_exist_ok=True)
dest = root / 'models'
for model_base in BaseModelType:
for model_type in ModelType:
path = dest / model_base.value / model_type.value
path.mkdir(parents=True, exist_ok=True)
path = dest / 'core'
path.mkdir(parents=True, exist_ok=True)
with open(root / 'configs' / 'models.yaml','w') as yaml_file:
yaml_file.write(yaml.dump({'__metadata__':
{'version':'3.0.0'}
}
)
)
# -------------------------------------
def run_console_ui(
program_opts: Namespace, initfile: Path = None
@@ -699,7 +681,7 @@ def run_console_ui(
if editApp.user_cancelled:
return (None, None)
else:
return (editApp.new_opts, editApp.user_selections)
return (editApp.new_opts, editApp.install_selections)
# -------------------------------------
@@ -722,18 +704,6 @@ def write_opts(opts: Namespace, init_file: Path):
def default_output_dir() -> Path:
return config.root_path / "outputs"
# -------------------------------------
def default_embedding_dir() -> Path:
return config.root_path / "embeddings"
# -------------------------------------
def default_lora_dir() -> Path:
return config.root_path / "loras"
# -------------------------------------
def default_controlnet_dir() -> Path:
return config.root_path / "controlnets"
# -------------------------------------
def write_default_options(program_opts: Namespace, initfile: Path):
opt = default_startup_options(initfile)
@@ -758,14 +728,42 @@ def migrate_init_file(legacy_format:Path):
new.nsfw_checker = old.safety_checker
new.xformers_enabled = old.xformers
new.conf_path = old.conf
new.embedding_dir = old.embedding_path
new.root = legacy_format.parent.resolve()
invokeai_yaml = legacy_format.parent / 'invokeai.yaml'
with open(invokeai_yaml,"w", encoding="utf-8") as outfile:
outfile.write(new.to_yaml())
legacy_format.replace(legacy_format.parent / 'invokeai.init.old')
legacy_format.replace(legacy_format.parent / 'invokeai.init.orig')
# -------------------------------------
def migrate_models(root: Path):
from invokeai.backend.install.migrate_to_3 import do_migrate
do_migrate(root, root)
def migrate_if_needed(opt: Namespace, root: Path)->bool:
# Check to see if the runtime directory is correctly initialized.
old_init_file = root / 'invokeai.init'
new_init_file = root / 'invokeai.yaml'
old_hub = root / 'models/hub'
migration_needed = (old_init_file.exists() and not new_init_file.exists()) or old_hub.exists()
if migration_needed:
if opt.yes_to_all or \
yes_or_no(f'{str(config.root_path)} appears to be a 2.3 format root directory. Convert to version 3.0?'):
logger.info('** Migrating invokeai.init to invokeai.yaml')
migrate_init_file(old_init_file)
config.parse_args(argv=[],conf=OmegaConf.load(new_init_file))
if old_hub.exists():
migrate_models(config.root_path)
else:
print('Cannot continue without conversion. Aborting.')
return migration_needed
# -------------------------------------
def main():
parser = argparse.ArgumentParser(description="InvokeAI model downloader")
@@ -831,20 +829,16 @@ def main():
errors = set()
try:
models_to_download = default_user_selections(opt)
# Check to see if the runtime directory is correctly initialized.
old_init_file = config.root_path / 'invokeai.init'
new_init_file = config.root_path / 'invokeai.yaml'
if old_init_file.exists() and not new_init_file.exists():
print('** Migrating invokeai.init to invokeai.yaml')
migrate_init_file(old_init_file)
# Load new init file into config
config.parse_args(argv=[],conf=OmegaConf.load(new_init_file))
# if we do a root migration/upgrade, then we are keeping previous
# configuration and we are done.
if migrate_if_needed(opt, config.root_path):
sys.exit(0)
if not config.model_conf_path.exists():
initialize_rootdir(config.root_path, opt.yes_to_all)
models_to_download = default_user_selections(opt)
new_init_file = config.root_path / 'invokeai.yaml'
if opt.yes_to_all:
write_default_options(opt, new_init_file)
init_options = Namespace(
@@ -855,29 +849,21 @@
if init_options:
write_opts(init_options, new_init_file)
else:
print(
logger.info(
'CANCELLED AT USER\'S REQUEST. USE THE "invoke.sh" LAUNCHER TO RUN LATER'
)
sys.exit(0)
if opt.skip_support_models:
print("\n** SKIPPING SUPPORT MODEL DOWNLOADS PER USER REQUEST **")
logger.info("SKIPPING SUPPORT MODEL DOWNLOADS PER USER REQUEST")
else:
print("\n** CHECKING/UPDATING SUPPORT MODELS **")
download_bert()
download_sd1_clip()
download_sd2_clip()
download_realesrgan()
download_gfpgan()
download_codeformer()
download_clipseg()
download_safety_checker()
download_vaes()
logger.info("CHECKING/UPDATING SUPPORT MODELS")
download_support_models()
if opt.skip_sd_weights:
print("\n** SKIPPING DIFFUSION WEIGHTS DOWNLOAD PER USER REQUEST **")
logger.info("\n** SKIPPING DIFFUSION WEIGHTS DOWNLOAD PER USER REQUEST **")
elif models_to_download:
print("\n** DOWNLOADING DIFFUSION WEIGHTS **")
logger.info("\n** DOWNLOADING DIFFUSION WEIGHTS **")
process_and_execute(opt, models_to_download)
postscript(errors=errors)

View File

@@ -4,6 +4,8 @@ import argparse
import shlex
from argparse import ArgumentParser
# note that this includes both old sampler names and new scheduler names
# in order to be able to parse both 2.0 and 3.0-pre-nodes versions of invokeai.init
SAMPLER_CHOICES = [
"ddim",
"ddpm",
@@ -27,6 +29,15 @@ SAMPLER_CHOICES = [
"dpmpp_sde",
"dpmpp_sde_k",
"unipc",
"k_dpm_2_a",
"k_dpm_2",
"k_dpmpp_2_a",
"k_dpmpp_2",
"k_euler_a",
"k_euler",
"k_heun",
"k_lms",
"plms",
]
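Keeping the legacy k_* spellings in SAMPLER_CHOICES means a 2.x-era invokeai.init that names, say, k_euler_a still parses instead of failing argument validation. Downstream code can then normalize the old names; an illustrative sketch (this mapping is hypothetical, not the project's actual table):

LEGACY_TO_SCHEDULER = {
    "k_euler": "euler_k",
    "k_euler_a": "euler_a",
    "k_lms": "lms_k",
    "k_heun": "heun_k",
}

def normalize_sampler(name: str) -> str:
    # Unknown (i.e. already-new) names pass through unchanged.
    return LEGACY_TO_SCHEDULER.get(name, name)

assert normalize_sampler("k_euler_a") == "euler_a"
assert normalize_sampler("unipc") == "unipc"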
PRECISION_CHOICES = [

View File

@@ -0,0 +1,585 @@
'''
Migrate the models directory and models.yaml file from an existing
InvokeAI 2.3 installation to 3.0.0.
'''
import io
import os
import argparse
import shutil
import yaml
import transformers
import diffusers
import warnings
from dataclasses import dataclass
from pathlib import Path
from omegaconf import OmegaConf, DictConfig
from typing import Union
from diffusers import StableDiffusionPipeline, AutoencoderKL
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from transformers import (
CLIPTextModel,
CLIPTokenizer,
AutoFeatureExtractor,
BertTokenizerFast,
)
import invokeai.backend.util.logging as logger
from invokeai.backend.model_management import ModelManager
from invokeai.backend.model_management.model_probe import (
ModelProbe, ModelType, BaseModelType, SchedulerPredictionType, ModelProbeInfo
)
warnings.filterwarnings("ignore")
transformers.logging.set_verbosity_error()
diffusers.logging.set_verbosity_error()
# holder for paths that we will migrate
@dataclass
class ModelPaths:
models: Path
embeddings: Path
loras: Path
controlnets: Path
class MigrateTo3(object):
def __init__(self,
root_directory: Path,
dest_models: Path,
yaml_file: io.TextIOBase,
src_paths: ModelPaths,
):
self.root_directory = root_directory
self.dest_models = dest_models
self.dest_yaml = yaml_file
self.model_names = set()
self.src_paths = src_paths
self._initialize_yaml()
def _initialize_yaml(self):
self.dest_yaml.write(
yaml.dump(
{
'__metadata__':
{
'version':'3.0.0'}
}
)
)
def unique_name(self,name,info)->str:
'''
Create a unique name for a model for use within models.yaml.
'''
done = False
# some model names have slashes in them, which really screws things up
name = name.replace('/','_')
key = ModelManager.create_key(name,info.base_type,info.model_type)
unique_name = key
counter = 1
while not done:
if unique_name in self.model_names:
unique_name = f'{key}-{counter:0>2d}'
counter += 1
else:
done = True
self.model_names.add(unique_name)
name,_,_ = ModelManager.parse_key(unique_name)
return name
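A standalone restatement of unique_name's collision handling, keyed on the flattened name alone (the real method builds its key from name, base type, and model type via ModelManager.create_key):

def unique_name_sketch(name: str, taken: set) -> str:
    name = name.replace("/", "_")  # slashes would break models.yaml keys
    candidate, counter = name, 1
    while candidate in taken:
        candidate = f"{name}-{counter:0>2d}"
        counter += 1
    taken.add(candidate)
    return candidate

taken = set()
print(unique_name_sketch("my/model", taken))  # my_model
print(unique_name_sketch("my/model", taken))  # my_model-01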
def create_directory_structure(self):
'''
Create the basic directory structure for the models folder.
'''
for model_base in [BaseModelType.StableDiffusion1,BaseModelType.StableDiffusion2]:
for model_type in [ModelType.Main, ModelType.Vae, ModelType.Lora,
ModelType.ControlNet,ModelType.TextualInversion]:
path = self.dest_models / model_base.value / model_type.value
path.mkdir(parents=True, exist_ok=True)
path = self.dest_models / 'core'
path.mkdir(parents=True, exist_ok=True)
@staticmethod
def copy_file(src:Path,dest:Path):
'''
copy a single file with logging
'''
if dest.exists():
logger.info(f'Skipping existing {str(dest)}')
return
logger.info(f'Copying {str(src)} to {str(dest)}')
try:
shutil.copy(src, dest)
except Exception as e:
logger.error(f'COPY FAILED: {str(e)}')
@staticmethod
def copy_dir(src:Path,dest:Path):
'''
Recursively copy a directory with logging
'''
if dest.exists():
logger.info(f'Skipping existing {str(dest)}')
return
logger.info(f'Copying {str(src)} to {str(dest)}')
try:
shutil.copytree(src, dest)
except Exception as e:
logger.error(f'COPY FAILED: {str(e)}')
def migrate_models(self, src_dir: Path):
'''
Recursively walk through src directory, probe anything
that looks like a model, and copy the model into the
appropriate location within the destination models directory.
'''
for root, dirs, files in os.walk(src_dir):
for f in files:
# hack - don't copy raw learned_embeds.bin, let them
# be copied as part of a tree copy operation
if f == 'learned_embeds.bin':
continue
try:
model = Path(root,f)
info = ModelProbe().heuristic_probe(model)
if not info:
continue
dest = self._model_probe_to_path(info) / f
self.copy_file(model, dest)
except KeyboardInterrupt:
raise
except Exception as e:
logger.error(str(e))
for d in dirs:
try:
model = Path(root,d)
info = ModelProbe().heuristic_probe(model)
if not info:
continue
dest = self._model_probe_to_path(info) / model.name
self.copy_dir(model, dest)
except KeyboardInterrupt:
raise
except Exception as e:
logger.error(str(e))
def migrate_support_models(self):
'''
Copy the clipseg, upscaler, and restoration models to their new
locations.
'''
dest_directory = self.dest_models
if (self.root_directory / 'models/clipseg').exists():
self.copy_dir(self.root_directory / 'models/clipseg', dest_directory / 'core/misc/clipseg')
if (self.root_directory / 'models/realesrgan').exists():
self.copy_dir(self.root_directory / 'models/realesrgan', dest_directory / 'core/upscaling/realesrgan')
for d in ['codeformer','gfpgan']:
path = self.root_directory / 'models' / d
if path.exists():
self.copy_dir(path,dest_directory / f'core/face_restoration/{d}')
def migrate_tuning_models(self):
'''
Migrate the embeddings, loras and controlnets directories to their new homes.
'''
for src in [self.src_paths.embeddings, self.src_paths.loras, self.src_paths.controlnets]:
if not src:
continue
if src.is_dir():
logger.info(f'Scanning {src}')
self.migrate_models(src)
else:
logger.info(f'{src} directory not found; skipping')
continue
def migrate_conversion_models(self):
'''
Migrate all the models that are needed by the ckpt_to_diffusers conversion
script.
'''
dest_directory = self.dest_models
kwargs = dict(
cache_dir = self.root_directory / 'models/hub',
#local_files_only = True
)
try:
logger.info('Migrating core tokenizers and text encoders')
target_dir = dest_directory / 'core' / 'convert'
self._migrate_pretrained(BertTokenizerFast,
repo_id='bert-base-uncased',
dest = target_dir / 'bert-base-uncased',
**kwargs)
# sd-1
repo_id = 'openai/clip-vit-large-patch14'
self._migrate_pretrained(CLIPTokenizer,
repo_id= repo_id,
dest= target_dir / 'clip-vit-large-patch14' / 'tokenizer',
**kwargs)
self._migrate_pretrained(CLIPTextModel,
repo_id = repo_id,
dest = target_dir / 'clip-vit-large-patch14' / 'text_encoder',
**kwargs)
# sd-2
repo_id = "stabilityai/stable-diffusion-2"
self._migrate_pretrained(CLIPTokenizer,
repo_id = repo_id,
dest = target_dir / 'stable-diffusion-2-clip' / 'tokenizer',
**{'subfolder':'tokenizer',**kwargs}
)
self._migrate_pretrained(CLIPTextModel,
repo_id = repo_id,
dest = target_dir / 'stable-diffusion-2-clip' / 'text_encoder',
**{'subfolder':'text_encoder',**kwargs}
)
# VAE
logger.info('Migrating stable diffusion VAE')
self._migrate_pretrained(AutoencoderKL,
repo_id = 'stabilityai/sd-vae-ft-mse',
dest = target_dir / 'sd-vae-ft-mse',
**kwargs)
# safety checking
logger.info('Migrating safety checker')
repo_id = "CompVis/stable-diffusion-safety-checker"
self._migrate_pretrained(AutoFeatureExtractor,
repo_id = repo_id,
dest = target_dir / 'stable-diffusion-safety-checker',
**kwargs)
self._migrate_pretrained(StableDiffusionSafetyChecker,
repo_id = repo_id,
dest = target_dir / 'stable-diffusion-safety-checker',
**kwargs)
except KeyboardInterrupt:
raise
except Exception as e:
logger.error(str(e))
def write_yaml(self, model_name: str, path:Path, info:ModelProbeInfo, **kwargs):
'''
Write a stanza for a moved model into the new models.yaml file.
'''
name = self.unique_name(model_name, info)
stanza = {
f'{info.base_type.value}/{info.model_type.value}/{name}': {
'name': model_name,
'path': str(path),
'description': f'A {info.base_type.value} {info.model_type.value} model',
'format': info.format,
'image_size': info.image_size,
'base': info.base_type.value,
'variant': info.variant_type.value,
'prediction_type': info.prediction_type.value,
'upcast_attention': info.prediction_type == SchedulerPredictionType.VPrediction,
**kwargs,
}
}
self.dest_yaml.write(yaml.dump(stanza))
self.dest_yaml.flush()
def _model_probe_to_path(self, info: ModelProbeInfo)->Path:
return Path(self.dest_models, info.base_type.value, info.model_type.value)
def _migrate_pretrained(self, model_class, repo_id: str, dest: Path, **kwargs):
if dest.exists():
logger.info(f'Skipping existing {dest}')
return
model = model_class.from_pretrained(repo_id, **kwargs)
self._save_pretrained(model, dest)
def _save_pretrained(self, model, dest: Path):
if dest.exists():
logger.info(f'Skipping existing {dest}')
return
model_name = dest.name
download_path = dest.with_name(f'{model_name}.downloading')
model.save_pretrained(download_path, safe_serialization=True)
download_path.replace(dest)
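_save_pretrained writes into a <name>.downloading sibling and renames it into place only after save_pretrained completes, so an interrupted run never leaves a half-written directory that the "skipping existing" checks above would later mistake for a finished model. The pattern in isolation (a sketch; dest is assumed not to exist yet, and the rename is all-or-nothing only within one filesystem):

from pathlib import Path
import shutil

def atomic_write_dir(dest: Path, produce) -> None:
    tmp = dest.with_name(dest.name + ".downloading")
    shutil.rmtree(tmp, ignore_errors=True)  # clear debris from a prior crash
    produce(tmp)                            # e.g. model.save_pretrained(tmp)
    tmp.replace(dest)                       # single rename: finished or absent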
def _download_vae(self, repo_id: str, subfolder:str=None)->Path:
vae = AutoencoderKL.from_pretrained(repo_id, cache_dir=self.root_directory / 'models/hub', subfolder=subfolder)
info = ModelProbe().heuristic_probe(vae)
_, model_name = repo_id.split('/')
dest = self._model_probe_to_path(info) / self.unique_name(model_name, info)
vae.save_pretrained(dest, safe_serialization=True)
return dest
def _vae_path(self, vae: Union[str,dict])->Path:
'''
Convert 2.3 VAE stanza to a straight path.
'''
vae_path = None
# First get a path
if isinstance(vae,str):
vae_path = vae
elif isinstance(vae,DictConfig):
if p := vae.get('path'):
vae_path = p
elif repo_id := vae.get('repo_id'):
if repo_id=='stabilityai/sd-vae-ft-mse': # this guy is already downloaded
vae_path = 'models/core/convert/sd-vae-ft-mse'
else:
vae_path = self._download_vae(repo_id, vae.get('subfolder'))
assert vae_path is not None, "Couldn't find VAE for this model"
# if the VAE is in the old models directory, then we must move it into the new
# one. VAEs outside of this directory can stay where they are.
vae_path = Path(vae_path)
if vae_path.is_relative_to(self.src_paths.models):
info = ModelProbe().heuristic_probe(vae_path)
dest = self._model_probe_to_path(info) / vae_path.name
if not dest.exists():
self.copy_dir(vae_path,dest)
vae_path = dest
if vae_path.is_relative_to(self.dest_models):
rel_path = vae_path.relative_to(self.dest_models)
return Path('models',rel_path)
else:
return vae_path
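Illustrative inputs and outputs for _vae_path, assuming the directory layout established earlier in this migration script:

# {'repo_id': 'stabilityai/sd-vae-ft-mse'}         -> 'models/core/convert/sd-vae-ft-mse'
#     (special-cased: already fetched by the conversion-model step)
# {'path': 'models/ldm/vae.ckpt'} (inside old dir) -> copied under the new tree,
#     e.g. 'models/sd-1/vae/vae.ckpt' depending on what the probe reports
# '/opt/shared/vae.ckpt' (outside old dir)         -> returned unchanged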
def migrate_repo_id(self, repo_id: str, model_name :str=None, **extra_config):
'''
Migrate a locally-cached diffusers pipeline identified with a repo_id
'''
dest_dir = self.dest_models
cache = self.root_directory / 'models/hub'
kwargs = dict(
cache_dir = cache,
safety_checker = None,
# local_files_only = True,
)
owner,repo_name = repo_id.split('/')
model_name = model_name or repo_name
model = cache / '--'.join(['models',owner,repo_name])
if len(list(model.glob('snapshots/**/model_index.json')))==0:
return
revisions = [x.name for x in model.glob('refs/*')]
# if an fp16 is available we use that
revision = 'fp16' if len(revisions) > 1 and 'fp16' in revisions else revisions[0]
pipeline = StableDiffusionPipeline.from_pretrained(
repo_id,
revision=revision,
**kwargs)
info = ModelProbe().heuristic_probe(pipeline)
if not info:
return
dest = self._model_probe_to_path(info) / repo_name
self._save_pretrained(pipeline, dest)
rel_path = Path('models',dest.relative_to(dest_dir))
self.write_yaml(model_name, path=rel_path, info=info, **extra_config)
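The '--'.join(['models', owner, repo_name]) lookup mirrors the HuggingFace hub cache layout: a pipeline cached from owner/repo lives under models--owner--repo, with one refs/<revision> entry per cached revision and the files under snapshots/<hash>/. A sketch of the same probe:

from pathlib import Path

def cached_diffusers_revisions(cache: Path, repo_id: str) -> list:
    owner, repo = repo_id.split("/")
    model_dir = cache / f"models--{owner}--{repo}"
    # A diffusers pipeline is only usable if some snapshot has a model_index.json.
    if not list(model_dir.glob("snapshots/**/model_index.json")):
        return []
    return [ref.name for ref in model_dir.glob("refs/*")]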
def migrate_path(self, location: Path, model_name: str=None, **extra_config):
'''
Migrate a model referred to using 'weights' or 'path'
'''
# handle relative paths
dest_dir = self.dest_models
location = self.root_directory / location
info = ModelProbe().heuristic_probe(location)
if not info:
return
# uh oh, weights is in the old models directory - move it into the new one
if Path(location).is_relative_to(self.src_paths.models):
dest = Path(dest_dir, info.base_type.value, info.model_type.value, location.name)
self.copy_dir(location,dest)
location = Path('models', info.base_type.value, info.model_type.value, location.name)
model_name = model_name or location.stem
model_name = self.unique_name(model_name, info)
self.write_yaml(model_name, path=location, info=info, **extra_config)
def migrate_defined_models(self):
'''
Migrate models defined in models.yaml
'''
# find any models referred to in old models.yaml
conf = OmegaConf.load(self.root_directory / 'configs/models.yaml')
for model_name, stanza in conf.items():
try:
passthru_args = {}
if vae := stanza.get('vae'):
try:
passthru_args['vae'] = str(self._vae_path(vae))
except Exception as e:
logger.warning(f'Could not find a VAE matching "{vae}" for model "{model_name}"')
logger.warning(str(e))
if config := stanza.get('config'):
passthru_args['config'] = config
if repo_id := stanza.get('repo_id'):
logger.info(f'Migrating diffusers model {model_name}')
self.migrate_repo_id(repo_id, model_name, **passthru_args)
elif location := stanza.get('weights'):
logger.info(f'Migrating checkpoint model {model_name}')
self.migrate_path(Path(location), model_name, **passthru_args)
elif location := stanza.get('path'):
logger.info(f'Migrating diffusers model {model_name}')
self.migrate_path(Path(location), model_name, **passthru_args)
except KeyboardInterrupt:
raise
except Exception as e:
logger.error(str(e))
def migrate(self):
self.create_directory_structure()
# the configure script is doing this
self.migrate_support_models()
self.migrate_conversion_models()
self.migrate_tuning_models()
self.migrate_defined_models()
def _parse_legacy_initfile(root: Path, initfile: Path)->ModelPaths:
'''
Returns tuple of (embedding_path, lora_path, controlnet_path)
'''
parser = argparse.ArgumentParser(fromfile_prefix_chars='@')
parser.add_argument(
'--embedding_directory',
'--embedding_path',
type=Path,
dest='embedding_path',
default=Path('embeddings'),
)
parser.add_argument(
'--lora_directory',
dest='lora_path',
type=Path,
default=Path('loras'),
)
opt,_ = parser.parse_known_args([f'@{str(initfile)}'])
return ModelPaths(
models = root / 'models',
embeddings = root / str(opt.embedding_path).strip('"'),
loras = root / str(opt.lora_path).strip('"'),
controlnets = root / 'controlnets',
)
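fromfile_prefix_chars='@' is what lets argparse swallow the legacy invokeai.init directly: each line of the file becomes one argument token, and parse_known_args quietly discards every 2.3-era flag except the path options declared above. A trimmed demonstration (one option instead of the aliases above; the file content is made up):

import argparse, os, tempfile
from pathlib import Path

parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
parser.add_argument("--embedding_path", type=Path, default=Path("embeddings"))

with tempfile.NamedTemporaryFile("w", suffix=".init", delete=False) as f:
    f.write("--embedding_path=my_embeddings\n--some_forgotten_flag\n")
    initfile = f.name

opt, extras = parser.parse_known_args([f"@{initfile}"])
print(opt.embedding_path)  # my_embeddings
print(extras)              # ['--some_forgotten_flag']
os.unlink(initfile)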
def _parse_legacy_yamlfile(root: Path, initfile: Path)->ModelPaths:
'''
Returns tuple of (embedding_path, lora_path, controlnet_path)
'''
# Don't use the config object because it is unforgiving of version updates
# Just use omegaconf directly
opt = OmegaConf.load(initfile)
paths = opt.InvokeAI.Paths
models = paths.get('models_dir','models')
embeddings = paths.get('embedding_dir','embeddings')
loras = paths.get('lora_dir','loras')
controlnets = paths.get('controlnet_dir','controlnets')
return ModelPaths(
models = root / models,
embeddings = root / embeddings,
loras = root / loras,
controlnets = root / controlnets,
)
def get_legacy_embeddings(root: Path) -> ModelPaths:
path = root / 'invokeai.init'
if path.exists():
return _parse_legacy_initfile(root, path)
path = root / 'invokeai.yaml'
if path.exists():
return _parse_legacy_yamlfile(root, path)
def do_migrate(src_directory: Path, dest_directory: Path):
dest_models = dest_directory / 'models-3.0'
dest_yaml = dest_directory / 'configs/models.yaml-3.0'
paths = get_legacy_embeddings(src_directory)
with open(dest_yaml,'w') as yaml_file:
migrator = MigrateTo3(src_directory,
dest_models,
yaml_file,
src_paths = paths,
)
migrator.migrate()
shutil.rmtree(dest_directory / 'models.orig', ignore_errors=True)
(dest_directory / 'models').replace(dest_directory / 'models.orig')
dest_models.replace(dest_directory / 'models')
(dest_directory /'configs/models.yaml').replace(dest_directory / 'configs/models.yaml.orig')
dest_yaml.replace(dest_directory / 'configs/models.yaml')
print(f"""Migration successful.
Original models directory moved to {dest_directory}/models.orig
Original models.yaml file moved to {dest_directory}/configs/models.yaml.orig
""")
def main():
parser = argparse.ArgumentParser(prog="invokeai-migrate3",
description="""
This will copy and convert the models directory and the configs/models.yaml from the InvokeAI 2.3 format
'--from-directory' root to the InvokeAI 3.0 '--to-directory' root. These may be abbreviated '--from' and '--to'.
The old models directory and config file will be renamed 'models.orig' and 'models.yaml.orig' respectively.
It is safe to provide the same directory for both arguments, but it is better to use the invokeai_configure
script, which will perform a full upgrade in place."""
)
parser.add_argument('--from-directory',
dest='root_directory',
type=Path,
required=True,
help='Source InvokeAI 2.3 root directory (containing "invokeai.init" or "invokeai.yaml")'
)
parser.add_argument('--to-directory',
dest='dest_directory',
type=Path,
required=True,
help='Destination InvokeAI 3.0 directory (containing "invokeai.yaml")'
)
# TO DO: Implement full directory scanning
# parser.add_argument('--all-models',
# action="store_true",
# help='Migrate all models found in `models` directory, not just those mentioned in models.yaml',
# )
args = parser.parse_args()
root_directory = args.root_directory
assert root_directory.is_dir(), f"{root_directory} is not a valid directory"
assert (root_directory / 'models').is_dir(), f"{root_directory} does not contain a 'models' subdirectory"
assert (root_directory / 'invokeai.init').exists() or (root_directory / 'invokeai.yaml').exists(), f"{root_directory} does not contain an InvokeAI init file."
dest_directory = args.dest_directory
assert dest_directory.is_dir(), f"{dest_directory} is not a valid directory"
assert (dest_directory / 'models').is_dir(), f"{dest_directory} does not contain a 'models' subdirectory"
assert (dest_directory / 'invokeai.yaml').exists(), f"{dest_directory} does not contain an InvokeAI init file."
do_migrate(root_directory,dest_directory)
if __name__ == '__main__':
main()

View File

@@ -2,46 +2,37 @@
Utility (backend) functions used by model_install.py
"""
import os
import re
import shutil
import sys
import warnings
from dataclasses import dataclass,field
from pathlib import Path
from tempfile import TemporaryFile
from typing import List, Dict, Callable
from tempfile import TemporaryDirectory
from typing import List, Dict, Callable, Union, Set
import requests
from diffusers import AutoencoderKL
from huggingface_hub import hf_hub_url, HfFolder
from diffusers import StableDiffusionPipeline
from diffusers import logging as dlogging
from huggingface_hub import hf_hub_url, HfFolder, HfApi
from omegaconf import OmegaConf
from omegaconf.dictconfig import DictConfig
from tqdm import tqdm
import invokeai.configs as configs
from invokeai.app.services.config import InvokeAIAppConfig
from ..stable_diffusion import StableDiffusionGeneratorPipeline
from invokeai.backend.model_management import ModelManager, ModelType, BaseModelType, ModelVariantType
from invokeai.backend.model_management.model_probe import ModelProbe, SchedulerPredictionType, ModelProbeInfo
from invokeai.backend.util import download_with_resume
from ..util.logging import InvokeAILogger
warnings.filterwarnings("ignore")
# --------------------------globals-----------------------
config = InvokeAIAppConfig.get_config()
Model_dir = "models"
Weights_dir = "ldm/stable-diffusion-v1/"
logger = InvokeAILogger.getLogger(name='InvokeAI')
# the initial "configs" dir is now bundled in the `invokeai.configs` package
Dataset_path = Path(configs.__path__[0]) / "INITIAL_MODELS.yaml"
# initial models omegaconf
Datasets = None
# logger
logger = InvokeAILogger.getLogger(name='InvokeAI')
Config_preamble = """
# This file describes the alternative machine learning models
# available to InvokeAI script.
@@ -52,6 +43,24 @@ Config_preamble = """
# was trained on.
"""
LEGACY_CONFIGS = {
BaseModelType.StableDiffusion1: {
ModelVariantType.Normal: 'v1-inference.yaml',
ModelVariantType.Inpaint: 'v1-inpainting-inference.yaml',
},
BaseModelType.StableDiffusion2: {
ModelVariantType.Normal: {
SchedulerPredictionType.Epsilon: 'v2-inference.yaml',
SchedulerPredictionType.VPrediction: 'v2-inference-v.yaml',
},
ModelVariantType.Inpaint: {
SchedulerPredictionType.Epsilon: 'v2-inpainting-inference.yaml',
SchedulerPredictionType.VPrediction: 'v2-inpainting-inference-v.yaml',
}
}
}
@dataclass
class ModelInstallList:
'''Class for listing models to be installed/removed'''
@@ -59,133 +68,328 @@ class ModelInstallList:
remove_models: List[str] = field(default_factory=list)
@dataclass
class UserSelections():
class InstallSelections():
install_models: List[str]= field(default_factory=list)
remove_models: List[str]=field(default_factory=list)
purge_deleted_models: bool=field(default_factory=list)
install_cn_models: List[str] = field(default_factory=list)
remove_cn_models: List[str] = field(default_factory=list)
install_lora_models: List[str] = field(default_factory=list)
remove_lora_models: List[str] = field(default_factory=list)
install_ti_models: List[str] = field(default_factory=list)
remove_ti_models: List[str] = field(default_factory=list)
scan_directory: Path = None
autoscan_on_startup: bool=False
import_model_paths: str=None
# scan_directory: Path = None
# autoscan_on_startup: bool=False
@dataclass
class ModelLoadInfo():
name: str
model_type: ModelType
base_type: BaseModelType
path: Path = None
repo_id: str = None
description: str = ''
installed: bool = False
recommended: bool = False
default: bool = False
class ModelInstall(object):
def __init__(self,
config:InvokeAIAppConfig,
prediction_type_helper: Callable[[Path],SchedulerPredictionType]=None,
model_manager: ModelManager = None,
access_token:str = None):
self.config = config
self.mgr = model_manager or ModelManager(config.model_conf_path)
self.datasets = OmegaConf.load(Dataset_path)
self.prediction_helper = prediction_type_helper
self.access_token = access_token or HfFolder.get_token()
self.reverse_paths = self._reverse_paths(self.datasets)
def all_models(self)->Dict[str,ModelLoadInfo]:
'''
Return dict of model_key=>ModelLoadInfo objects.
This method consolidates and simplifies the entries in both
models.yaml and INITIAL_MODELS.yaml so that they can
be treated uniformly. It also sorts the models alphabetically
by their name, to improve the display somewhat.
'''
model_dict = dict()
def default_config_file():
return config.model_conf_path
# first populate with the entries in INITIAL_MODELS.yaml
for key, value in self.datasets.items():
name,base,model_type = ModelManager.parse_key(key)
value['name'] = name
value['base_type'] = base
value['model_type'] = model_type
model_dict[key] = ModelLoadInfo(**value)
def sd_configs():
return config.legacy_conf_path
def initial_models():
global Datasets
if Datasets:
return Datasets
return (Datasets := OmegaConf.load(Dataset_path)['diffusers'])
def install_requested_models(
diffusers: ModelInstallList = None,
controlnet: ModelInstallList = None,
lora: ModelInstallList = None,
ti: ModelInstallList = None,
cn_model_map: Dict[str,str] = None, # temporary - move to model manager
scan_directory: Path = None,
external_models: List[str] = None,
scan_at_startup: bool = False,
precision: str = "float16",
purge_deleted: bool = False,
config_file_path: Path = None,
model_config_file_callback: Callable[[Path],Path] = None
):
"""
Entry point for installing/deleting starter models, or installing external models.
"""
access_token = HfFolder.get_token()
config_file_path = config_file_path or default_config_file()
if not config_file_path.exists():
open(config_file_path, "w")
# prevent circular import here
from ..model_management import ModelManager
model_manager = ModelManager(OmegaConf.load(config_file_path), precision=precision)
if controlnet:
model_manager.install_controlnet_models(controlnet.install_models, access_token=access_token)
model_manager.delete_controlnet_models(controlnet.remove_models)
if lora:
model_manager.install_lora_models(lora.install_models, access_token=access_token)
model_manager.delete_lora_models(lora.remove_models)
if ti:
model_manager.install_ti_models(ti.install_models, access_token=access_token)
model_manager.delete_ti_models(ti.remove_models)
if diffusers:
# TODO: Replace next three paragraphs with calls into new model manager
if diffusers.remove_models and len(diffusers.remove_models) > 0:
logger.info("Processing requested deletions")
for model in diffusers.remove_models:
logger.info(f"{model}...")
model_manager.del_model(model, delete_files=purge_deleted)
model_manager.commit(config_file_path)
if diffusers.install_models and len(diffusers.install_models) > 0:
logger.info("Installing requested models")
downloaded_paths = download_weight_datasets(
models=diffusers.install_models,
access_token=None,
precision=precision,
)
successful = {x:v for x,v in downloaded_paths.items() if v is not None}
if len(successful) > 0:
update_config_file(successful, config_file_path)
if len(successful) < len(diffusers.install_models):
unsuccessful = [x for x in downloaded_paths if downloaded_paths[x] is None]
logger.warning(f"Some of the model downloads were not successful: {unsuccessful}")
# due to above, we have to reload the model manager because conf file
# was changed behind its back
model_manager = ModelManager(OmegaConf.load(config_file_path), precision=precision)
external_models = external_models or list()
if scan_directory:
external_models.append(str(scan_directory))
if len(external_models) > 0:
logger.info("INSTALLING EXTERNAL MODELS")
for path_url_or_repo in external_models:
try:
logger.debug(f'In install_requested_models; callback = {model_config_file_callback}')
model_manager.heuristic_import(
path_url_or_repo,
commit_to_conf=config_file_path,
config_file_callback = model_config_file_callback,
)
except KeyboardInterrupt:
sys.exit(-1)
except Exception:
pass
# supplement with entries in models.yaml
installed_models = self.mgr.list_models()
for md in installed_models:
base = md['base_model']
model_type = md['type']
name = md['name']
key = ModelManager.create_key(name, base, model_type)
if key in model_dict:
model_dict[key].installed = True
else:
model_dict[key] = ModelLoadInfo(
name = name,
base_type = base,
model_type = model_type,
path = value.get('path'),
installed = True,
)
return {x : model_dict[x] for x in sorted(model_dict.keys(), key=lambda y: model_dict[y].name.lower())}
def starter_models(self)->Set[str]:
models = set()
for key, value in self.datasets.items():
name,base,model_type = ModelManager.parse_key(key)
if model_type==ModelType.Main:
models.add(key)
return models
def recommended_models(self)->Set[str]:
starters = self.starter_models()
return set([x for x in starters if self.datasets[x].get('recommended',False)])
def default_model(self)->str:
starters = self.starter_models()
defaults = [x for x in starters if self.datasets[x].get('default',False)]
return defaults[0] if defaults else None
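# --- illustrative usage sketch (not part of the original source) ---
# How the three starter-model helpers above compose; `installer` is assumed
# to be a constructed ModelInstall instance:
#
#   starters = installer.starter_models()         # keys of all Main starter models
#   recommended = installer.recommended_models()  # subset flagged "recommended"
#   default_key = installer.default_model()       # the single key flagged "default"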
def install(self, selections: InstallSelections):
verbosity = dlogging.get_verbosity() # quench NSFW nags
dlogging.set_verbosity_error()
job = 1
jobs = len(selections.remove_models) + len(selections.install_models)
# remove requested models
for key in selections.remove_models:
name,base,mtype = self.mgr.parse_key(key)
logger.info(f'Deleting {mtype} model {name} [{job}/{jobs}]')
try:
self.mgr.del_model(name,base,mtype)
except FileNotFoundError as e:
logger.warning(e)
job += 1
# add requested models
for path in selections.install_models:
logger.info(f'Installing {path} [{job}/{jobs}]')
self.heuristic_install(path)
job += 1
dlogging.set_verbosity(verbosity)
self.mgr.commit()
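# --- illustrative usage sketch (not part of the original source) ---
# InstallSelections is assumed to pair the keys to install with the keys to
# remove; a minimal driver for install() might look like:
#
#   selections = InstallSelections(
#       install_models=['runwayml/stable-diffusion-v1-5'],
#       remove_models=['sd-1/main/stale-model'],
#   )
#   installer.install(selections)  # removes, then installs, then commits models.yaml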
def heuristic_install(self,
model_path_id_or_url: Union[str,Path],
models_installed: Set[Path]=None)->Set[Path]:
if not models_installed:
models_installed = set()
# A little hack to allow nested routines to retrieve info on the requested ID
self.current_id = model_path_id_or_url
path = Path(model_path_id_or_url)
try:
# checkpoint file, or similar
if path.is_file():
models_installed.add(self._install_path(path))
# folders style or similar
elif path.is_dir() and any([(path/x).exists() for x in {'config.json','model_index.json','learned_embeds.bin'}]):
models_installed.add(self._install_path(path))
# recursive scan
elif path.is_dir():
for child in path.iterdir():
self.heuristic_install(child, models_installed=models_installed)
# huggingface repo
elif len(str(model_path_id_or_url).split('/')) == 2:
models_installed.add(self._install_repo(str(model_path_id_or_url)))
# a URL
elif model_path_id_or_url.startswith(("http:", "https:", "ftp:")):
models_installed.add(self._install_url(model_path_id_or_url))
else:
logger.warning(f'{str(model_path_id_or_url)} is not recognized as a local path, repo ID or URL. Skipping')
except ValueError as e:
logger.error(str(e))
return models_installed
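# --- illustrative sketch (not part of the original source) ---
# The dispatch order of heuristic_install(), restated as a standalone helper;
# classify() is a hypothetical name, not part of this module:
#
#   def classify(source: str) -> str:
#       p = Path(source)
#       if p.is_file():
#           return 'file'       # checkpoint/safetensors/embedding file
#       if p.is_dir():
#           return 'folder'     # diffusers folder, or a directory to scan recursively
#       if len(source.split('/')) == 2:
#           return 'repo_id'    # HuggingFace owner/name
#       if source.startswith(('http:', 'https:', 'ftp:')):
#           return 'url'
#       return 'unknown'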
# install a model from a local path. The optional info parameter is there to prevent
# the model from being probed twice in the event that it has already been probed.
def _install_path(self, path: Path, info: ModelProbeInfo=None)->Path:
try:
# logger.debug(f'Probing {path}')
info = info or ModelProbe().heuristic_probe(path,self.prediction_helper)
model_name = path.stem if info.format=='checkpoint' else path.name
if self.mgr.model_exists(model_name, info.base_type, info.model_type):
raise ValueError(f'A model named "{model_name}" is already installed.')
attributes = self._make_attributes(path,info)
self.mgr.add_model(model_name = model_name,
base_model = info.base_type,
model_type = info.model_type,
model_attributes = attributes,
)
except Exception as e:
logger.warning(f'{str(e)} Skipping registration.')
return path
def _install_url(self, url: str)->Path:
# copy to a staging area, probe, import and delete
with TemporaryDirectory(dir=self.config.models_path) as staging:
location = download_with_resume(url,Path(staging))
if not location:
logger.error(f'Unable to download {url}. Skipping.')
return None
info = ModelProbe().heuristic_probe(location)
dest = self.config.models_path / info.base_type.value / info.model_type.value / location.name
models_path = shutil.move(location,dest)
# staged version will be garbage-collected at this time
return self._install_path(Path(models_path), info)
def _install_repo(self, repo_id: str)->Path:
hinfo = HfApi().model_info(repo_id)
# we try to figure out how to download this most economically
# list all the files in the repo
files = [x.rfilename for x in hinfo.siblings]
location = None
with TemporaryDirectory(dir=self.config.models_path) as staging:
staging = Path(staging)
if 'model_index.json' in files:
location = self._download_hf_pipeline(repo_id, staging) # pipeline
else:
for suffix in ['safetensors','bin']:
if f'pytorch_lora_weights.{suffix}' in files:
location = self._download_hf_model(repo_id, [f'pytorch_lora_weights.{suffix}'], staging) # LoRA
break
elif self.config.precision=='float16' and f'diffusion_pytorch_model.fp16.{suffix}' in files: # vae, controlnet or some other standalone
files = ['config.json', f'diffusion_pytorch_model.fp16.{suffix}']
location = self._download_hf_model(repo_id, files, staging)
break
elif f'diffusion_pytorch_model.{suffix}' in files:
files = ['config.json', f'diffusion_pytorch_model.{suffix}']
location = self._download_hf_model(repo_id, files, staging)
break
elif f'learned_embeds.{suffix}' in files:
location = self._download_hf_model(repo_id, [f'learned_embeds.{suffix}'], staging)
break
if not location:
logger.warning(f'Could not determine type of repo {repo_id}. Skipping install.')
return
info = ModelProbe().heuristic_probe(location, self.prediction_helper)
if not info:
logger.warning(f'Could not probe {location}. Skipping install.')
return
dest = self.config.models_path / info.base_type.value / info.model_type.value / self._get_model_name(repo_id,location)
if dest.exists():
shutil.rmtree(dest)
shutil.copytree(location,dest)
return self._install_path(dest, info)
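# --- illustrative sketch (not part of the original source) ---
# The "download economically" decision above can be previewed in isolation with
# the same huggingface_hub call that _install_repo() uses:
#
#   from huggingface_hub import HfApi
#   files = [s.rfilename for s in HfApi().model_info('runwayml/stable-diffusion-v1-5').siblings]
#   is_pipeline = 'model_index.json' in files                       # full diffusers pipeline
#   has_fp16 = any(f.endswith('.fp16.safetensors') for f in files)  # half-precision weights available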
def _get_model_name(self,path_name: str, location: Path)->str:
'''
Calculate a name for the model - primitive implementation.
'''
if key := self.reverse_paths.get(path_name):
(name, base, mtype) = ModelManager.parse_key(key)
return name
else:
return location.stem
def _make_attributes(self, path: Path, info: ModelProbeInfo)->dict:
model_name = path.name if path.is_dir() else path.stem
description = f'{info.base_type.value} {info.model_type.value} model {model_name}'
if key := self.reverse_paths.get(self.current_id):
if key in self.datasets:
description = self.datasets[key].get('description') or description
rel_path = self.relative_to_root(path)
attributes = dict(
path = str(rel_path),
description = str(description),
model_format = info.format,
)
if info.model_type == ModelType.Main:
attributes.update(dict(variant = info.variant_type,))
if info.format=="checkpoint":
try:
possible_conf = path.with_suffix('.yaml')
if possible_conf.exists():
legacy_conf = str(self.relative_to_root(possible_conf))
elif info.base_type == BaseModelType.StableDiffusion2:
legacy_conf = Path(self.config.legacy_conf_dir, LEGACY_CONFIGS[info.base_type][info.variant_type][info.prediction_type])
else:
legacy_conf = Path(self.config.legacy_conf_dir, LEGACY_CONFIGS[info.base_type][info.variant_type])
except KeyError:
legacy_conf = Path(self.config.legacy_conf_dir, 'v1-inference.yaml') # best guess
attributes.update(
dict(
config = str(legacy_conf)
)
)
return attributes
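# --- illustrative example (not part of the original source) ---
# For a hypothetical SD-1 inpainting checkpoint at models/sd-1/main/foo.safetensors,
# _make_attributes() would return roughly:
#
#   {
#       'path': 'models/sd-1/main/foo.safetensors',
#       'description': 'sd-1 main model foo',
#       'model_format': 'checkpoint',
#       'variant': 'inpaint',
#       'config': 'configs/stable-diffusion/v1-inpainting-inference.yaml',  # assumed legacy config name
#   }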
def relative_to_root(self, path: Path)->Path:
root = self.config.root_path
if path.is_relative_to(root):
return path.relative_to(root)
else:
return path
def _download_hf_pipeline(self, repo_id: str, staging: Path)->Path:
'''
This retrieves a StableDiffusion model from cache or remote and then
does a save_pretrained() to the indicated staging area.
'''
_,name = repo_id.split("/")
revisions = ['fp16','main'] if self.config.precision=='float16' else ['main']
model = None
for revision in revisions:
try:
model = StableDiffusionPipeline.from_pretrained(repo_id,revision=revision,safety_checker=None)
except Exception: # most errors are due to the fp16 revision not being present; fall through to the next revision
pass
if model:
break
if not model:
logger.error(f'Diffusers model {repo_id} could not be downloaded. Skipping.')
return None
model.save_pretrained(staging / name, safe_serialization=True)
return staging / name
if scan_at_startup and scan_directory.is_dir():
update_autoconvert_dir(scan_directory)
else:
update_autoconvert_dir(None)
def update_autoconvert_dir(autodir: Path):
'''
Update the "autoconvert_dir" option in invokeai.yaml
'''
invokeai_config_path = config.init_file_path
conf = OmegaConf.load(invokeai_config_path)
conf.InvokeAI.Paths.autoconvert_dir = str(autodir) if autodir else None
yaml = OmegaConf.to_yaml(conf)
tmpfile = invokeai_config_path.parent / "new_config.tmp"
with open(tmpfile, "w", encoding="utf-8") as outfile:
outfile.write(yaml)
tmpfile.replace(invokeai_config_path)
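# --- design note (not part of the original source) ---
# Writing to a sibling temp file and then Path.replace()-ing it over the target
# makes the rewrite atomic on a single filesystem: readers see either the old
# or the new invokeai.yaml, never a half-written file. A generic sketch:
#
#   def atomic_write(target: Path, text: str):
#       tmp = target.parent / (target.name + '.tmp')
#       tmp.write_text(text, encoding='utf-8')
#       tmp.replace(target)  # os.replace() underneath; atomic within one filesystem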
def _download_hf_model(self, repo_id: str, files: List[str], staging: Path)->Path:
_,name = repo_id.split("/")
location = staging / name
paths = list()
for filename in files:
p = hf_download_with_resume(repo_id,
model_dir=location,
model_name=filename,
access_token = self.access_token
)
if p:
paths.append(p)
else:
logger.warning(f'Could not download {filename} from {repo_id}.')
return location if len(paths)>0 else None
@classmethod
def _reverse_paths(cls,datasets)->dict:
'''
Reverse mapping from repo_id/path to destination name.
'''
return {v.get('path') or v.get('repo_id') : k for k, v in datasets.items()}
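# --- illustrative example (not part of the original source) ---
# Given a datasets entry such as
#   {'sd-1/main/stable-diffusion-v1-5': {'repo_id': 'runwayml/stable-diffusion-v1-5', ...}},
# _reverse_paths() yields
#   {'runwayml/stable-diffusion-v1-5': 'sd-1/main/stable-diffusion-v1-5'},
# which lets _get_model_name() recover the curated model name for an incoming repo_id.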
# -------------------------------------
def yes_or_no(prompt: str, default_yes=True):
@ -197,133 +401,19 @@ def yes_or_no(prompt: str, default_yes=True):
return response[0] in ("y", "Y")
# ---------------------------------------------
def recommended_datasets() -> List[str]:
datasets = set()
for ds in initial_models().keys():
if initial_models()[ds].get("recommended", False):
datasets.add(ds)
return list(datasets)
# ---------------------------------------------
def default_dataset() -> List[str]:
datasets = set()
for ds in initial_models().keys():
if initial_models()[ds].get("default", False):
datasets.add(ds)
return list(datasets)
# ---------------------------------------------
def all_datasets() -> dict:
datasets = dict()
for ds in initial_models().keys():
datasets[ds] = True
return datasets
# ---------------------------------------------
# look for legacy model.ckpt in models directory and offer to
# normalize its name
def migrate_models_ckpt():
model_path = os.path.join(config.root_dir, Model_dir, Weights_dir)
if not os.path.exists(os.path.join(model_path, "model.ckpt")):
return
new_name = initial_models()["stable-diffusion-1.4"]["file"]
logger.warning(
f'The Stable Diffusion v1.4 "model.ckpt" is already installed. The name will be changed to {new_name} to avoid confusion.'
)
logger.warning(f"model.ckpt => {new_name}")
os.replace(
os.path.join(model_path, "model.ckpt"), os.path.join(model_path, new_name)
)
# ---------------------------------------------
def download_weight_datasets(
models: List[str], access_token: str, precision: str = "float32"
):
migrate_models_ckpt()
successful = dict()
for mod in models:
logger.info(f"Downloading {mod}:")
successful[mod] = _download_repo_or_file(
initial_models()[mod], access_token, precision=precision
)
return successful
def _download_repo_or_file(
mconfig: DictConfig, access_token: str, precision: str = "float32"
) -> Path:
path = None
if mconfig["format"] == "ckpt":
path = _download_ckpt_weights(mconfig, access_token)
else:
path = _download_diffusion_weights(mconfig, access_token, precision=precision)
if "vae" in mconfig and "repo_id" in mconfig["vae"]:
_download_diffusion_weights(
mconfig["vae"], access_token, precision=precision
)
return path
def _download_ckpt_weights(mconfig: DictConfig, access_token: str) -> Path:
repo_id = mconfig["repo_id"]
filename = mconfig["file"]
cache_dir = os.path.join(config.root_dir, Model_dir, Weights_dir)
return hf_download_with_resume(
repo_id=repo_id,
model_dir=cache_dir,
model_name=filename,
access_token=access_token,
)
# ---------------------------------------------
def download_from_hf(
model_class: object, model_name: str, **kwargs
):
def hf_download_from_pretrained(
model_class: object, model_name: str, destination: Path, **kwargs
):
logger = InvokeAILogger.getLogger('InvokeAI')
logger.addFilter(lambda x: 'fp16 is not a valid' not in x.getMessage())
path = config.cache_dir
model = model_class.from_pretrained(
model_name,
cache_dir=path,
resume_download=True,
**kwargs,
)
model_name = "--".join(("models", *model_name.split("/")))
return path / model_name if model else None
def _download_diffusion_weights(
mconfig: DictConfig, access_token: str, precision: str = "float32"
):
repo_id = mconfig["repo_id"]
model_class = (
StableDiffusionGeneratorPipeline
if mconfig.get("format", None) == "diffusers"
else AutoencoderKL
)
extra_arg_list = [{"revision": "fp16"}, {}] if precision == "float16" else [{}]
path = None
for extra_args in extra_arg_list:
try:
path = download_from_hf(
model_class,
repo_id,
safety_checker=None,
**extra_args,
)
except OSError as e:
if 'Revision Not Found' in str(e):
pass
else:
logger.error(str(e))
if path:
break
return path
model.save_pretrained(destination, safe_serialization=True)
return destination
# ---------------------------------------------
def hf_download_with_resume(
@ -383,128 +473,3 @@ def hf_download_with_resume(
return model_dest
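# --- illustrative sketch (not part of the original source) ---
# The elided body above resumes interrupted downloads; the core mechanism is an
# HTTP Range request starting at the bytes already on disk. A generic sketch:
#
#   import requests
#   def resume_download(url: str, dest: Path, chunk: int = 2**20) -> Path:
#       have = dest.stat().st_size if dest.exists() else 0
#       resp = requests.get(url, headers={'Range': f'bytes={have}-'}, stream=True)
#       mode = 'ab' if resp.status_code == 206 else 'wb'  # 206 = server honored the Range header
#       with open(dest, mode) as f:
#           for block in resp.iter_content(chunk):
#               f.write(block)
#       return dest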
# ---------------------------------------------
def update_config_file(successfully_downloaded: dict, config_file: Path):
config_file = (
Path(config_file) if config_file is not None else default_config_file()
)
# In some cases (incomplete setup, etc), the default configs directory might be missing.
# Create it if it doesn't exist.
# this check is ignored if opt.config_file is specified - user is assumed to know what they
# are doing if they are passing a custom config file from elsewhere.
if config_file is default_config_file() and not config_file.parent.exists():
configs_src = Dataset_path.parent
configs_dest = default_config_file().parent
shutil.copytree(configs_src, configs_dest, dirs_exist_ok=True)
yaml = new_config_file_contents(successfully_downloaded, config_file)
try:
backup = None
if os.path.exists(config_file):
logger.warning(
f"{config_file.name} exists. Renaming to {config_file.stem}.yaml.orig"
)
backup = config_file.with_suffix(".yaml.orig")
## Ugh. Windows is unable to overwrite an existing backup file, raises a WinError 183
if sys.platform == "win32" and backup.is_file():
backup.unlink()
config_file.rename(backup)
with TemporaryFile() as tmp:
tmp.write(Config_preamble.encode())
tmp.write(yaml.encode())
with open(str(config_file.expanduser().resolve()), "wb") as new_config:
tmp.seek(0)
new_config.write(tmp.read())
except Exception as e:
logger.error(f"Error creating config file {config_file}: {str(e)}")
if backup is not None:
logger.info("restoring previous config file")
## workaround, for WinError 183, see above
if sys.platform == "win32" and config_file.is_file():
config_file.unlink()
backup.rename(config_file)
return
logger.info(f"Successfully created new configuration file {config_file}")
# ---------------------------------------------
def new_config_file_contents(
successfully_downloaded: dict,
config_file: Path,
) -> str:
if config_file.exists():
conf = OmegaConf.load(str(config_file.expanduser().resolve()))
else:
conf = OmegaConf.create()
default_selected = None
for model in successfully_downloaded:
# a bit hacky - what we are doing here is seeing whether a checkpoint
# version of the model was previously defined, and whether the current
# model is a diffusers (indicated with a path)
if conf.get(model) and Path(successfully_downloaded[model]).is_dir():
delete_weights(model, conf[model])
stanza = {}
mod = initial_models()[model]
stanza["description"] = mod["description"]
stanza["repo_id"] = mod["repo_id"]
stanza["format"] = mod["format"]
# diffusers don't need width and height (probably .ckpt doesn't either)
# so we no longer require these in INITIAL_MODELS.yaml
if "width" in mod:
stanza["width"] = mod["width"]
if "height" in mod:
stanza["height"] = mod["height"]
if "file" in mod:
stanza["weights"] = os.path.relpath(
successfully_downloaded[model], start=config.root_dir
)
stanza["config"] = os.path.normpath(
os.path.join(sd_configs(), mod["config"])
)
if "vae" in mod:
if "file" in mod["vae"]:
stanza["vae"] = os.path.normpath(
os.path.join(Model_dir, Weights_dir, mod["vae"]["file"])
)
else:
stanza["vae"] = mod["vae"]
if mod.get("default", False):
stanza["default"] = True
default_selected = True
conf[model] = stanza
# if no default model was chosen, then we select the first
# one in the list
if not default_selected:
conf[list(successfully_downloaded.keys())[0]]["default"] = True
return OmegaConf.to_yaml(conf)
# ---------------------------------------------
def delete_weights(model_name: str, conf_stanza: dict):
if not (weights := conf_stanza.get("weights")):
return
if re.match("/VAE/", conf_stanza.get("config")):
return
logger.warning(
f"\nThe checkpoint version of {model_name} is superseded by the diffusers version. Deleting the original file {weights}?"
)
weights = Path(weights)
if not weights.is_absolute():
weights = config.root_dir / weights
try:
weights.unlink()
except OSError as e:
logger.error(str(e))

View File

@ -4,3 +4,4 @@ Initialization file for invokeai.backend.model_management
from .model_manager import ModelManager, ModelInfo
from .model_cache import ModelCache
from .models import BaseModelType, ModelType, SubModelType, ModelVariantType

View File

@ -30,7 +30,7 @@ from invokeai.app.services.config import InvokeAIAppConfig
from .model_manager import ModelManager
from .model_cache import ModelCache
from .models import SchedulerPredictionType, BaseModelType, ModelVariantType
from .models import BaseModelType, ModelVariantType
try:
from omegaconf import OmegaConf
@ -73,7 +73,9 @@ from transformers import (
from ..stable_diffusion import StableDiffusionGeneratorPipeline
MODEL_ROOT = None
# TODO: redo in future
#CONVERT_MODEL_ROOT = InvokeAIAppConfig.get_config().models_path / "core" / "convert"
CONVERT_MODEL_ROOT = InvokeAIAppConfig.get_config().root_path / "models" / "core" / "convert"
def shave_segments(path, n_shave_prefix_segments=1):
"""
@ -159,17 +161,17 @@ def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0):
new_item = new_item.replace("norm.weight", "group_norm.weight")
new_item = new_item.replace("norm.bias", "group_norm.bias")
new_item = new_item.replace("q.weight", "query.weight")
new_item = new_item.replace("q.bias", "query.bias")
new_item = new_item.replace("q.weight", "to_q.weight")
new_item = new_item.replace("q.bias", "to_q.bias")
new_item = new_item.replace("k.weight", "key.weight")
new_item = new_item.replace("k.bias", "key.bias")
new_item = new_item.replace("k.weight", "to_k.weight")
new_item = new_item.replace("k.bias", "to_k.bias")
new_item = new_item.replace("v.weight", "value.weight")
new_item = new_item.replace("v.bias", "value.bias")
new_item = new_item.replace("v.weight", "to_v.weight")
new_item = new_item.replace("v.bias", "to_v.bias")
new_item = new_item.replace("proj_out.weight", "proj_attn.weight")
new_item = new_item.replace("proj_out.bias", "proj_attn.bias")
new_item = new_item.replace("proj_out.weight", "to_out.0.weight")
new_item = new_item.replace("proj_out.bias", "to_out.0.bias")
new_item = shave_segments(
new_item, n_shave_prefix_segments=n_shave_prefix_segments
@ -184,7 +186,6 @@ def assign_to_checkpoint(
paths,
checkpoint,
old_checkpoint,
attention_paths_to_split=None,
additional_replacements=None,
config=None,
):
@ -199,35 +200,9 @@ def assign_to_checkpoint(
paths, list
), "Paths should be a list of dicts containing 'old' and 'new' keys."
# Splits the attention layers into three variables.
if attention_paths_to_split is not None:
for path, path_map in attention_paths_to_split.items():
old_tensor = old_checkpoint[path]
channels = old_tensor.shape[0] // 3
target_shape = (-1, channels) if len(old_tensor.shape) == 3 else (-1)
num_heads = old_tensor.shape[0] // config["num_head_channels"] // 3
old_tensor = old_tensor.reshape(
(num_heads, 3 * channels // num_heads) + old_tensor.shape[1:]
)
query, key, value = old_tensor.split(channels // num_heads, dim=1)
checkpoint[path_map["query"]] = query.reshape(target_shape)
checkpoint[path_map["key"]] = key.reshape(target_shape)
checkpoint[path_map["value"]] = value.reshape(target_shape)
for path in paths:
new_path = path["new"]
# These have already been assigned
if (
attention_paths_to_split is not None
and new_path in attention_paths_to_split
):
continue
# Global renaming happens here
new_path = new_path.replace("middle_block.0", "mid_block.resnets.0")
new_path = new_path.replace("middle_block.1", "mid_block.attentions.0")
@ -246,14 +221,14 @@ def assign_to_checkpoint(
def conv_attn_to_linear(checkpoint):
keys = list(checkpoint.keys())
attn_keys = ["query.weight", "key.weight", "value.weight"]
attn_keys = ["to_q.weight", "to_k.weight", "to_v.weight"]
for key in keys:
if ".".join(key.split(".")[-2:]) in attn_keys:
if checkpoint[key].ndim > 2:
checkpoint[key] = checkpoint[key][:, :, 0, 0]
elif "proj_attn.weight" in key:
elif "to_out.0.weight" in key:
if checkpoint[key].ndim > 2:
checkpoint[key] = checkpoint[key][:, :, 0]
checkpoint[key] = checkpoint[key][:, :, 0, 0]
def create_unet_diffusers_config(original_config, image_size: int):
@ -632,7 +607,7 @@ def convert_ldm_vae_checkpoint(checkpoint, config):
else:
vae_state_dict = checkpoint
new_checkpoint = convert_ldm_vae_state_dict(vae_state_dict,config)
new_checkpoint = convert_ldm_vae_state_dict(vae_state_dict, config)
return new_checkpoint
def convert_ldm_vae_state_dict(vae_state_dict, config):
@ -855,7 +830,7 @@ def convert_ldm_bert_checkpoint(checkpoint, config):
def convert_ldm_clip_checkpoint(checkpoint):
text_model = CLIPTextModel.from_pretrained(MODEL_ROOT / 'clip-vit-large-patch14')
text_model = CLIPTextModel.from_pretrained(CONVERT_MODEL_ROOT / 'clip-vit-large-patch14')
keys = list(checkpoint.keys())
text_model_dict = {}
@ -909,7 +884,7 @@ textenc_pattern = re.compile("|".join(protected.keys()))
def convert_open_clip_checkpoint(checkpoint):
text_model = CLIPTextModel.from_pretrained(
MODEL_ROOT / 'stable-diffusion-2-clip',
CONVERT_MODEL_ROOT / 'stable-diffusion-2-clip',
subfolder='text_encoder',
)
@ -976,7 +951,7 @@ def convert_open_clip_checkpoint(checkpoint):
return text_model
def replace_checkpoint_vae(checkpoint, vae_path:str):
def replace_checkpoint_vae(checkpoint, vae_path: str):
if vae_path.endswith(".safetensors"):
vae_ckpt = load_file(vae_path)
else:
@ -986,7 +961,7 @@ def replace_checkpoint_vae(checkpoint, vae_path:str):
new_key = f'first_stage_model.{vae_key}'
checkpoint[new_key] = state_dict[vae_key]
def convert_ldm_vae_to_diffusers(checkpoint, vae_config: DictConfig, image_size: int)->AutoencoderKL:
def convert_ldm_vae_to_diffusers(checkpoint, vae_config: DictConfig, image_size: int) -> AutoencoderKL:
vae_config = create_vae_diffusers_config(
vae_config, image_size=image_size
)
@ -1006,8 +981,6 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
original_config_file: str,
extract_ema: bool = True,
precision: torch.dtype = torch.float32,
upcast_attention: bool = False,
prediction_type: SchedulerPredictionType = SchedulerPredictionType.Epsilon,
scan_needed: bool = True,
) -> StableDiffusionPipeline:
"""
@ -1021,8 +994,6 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
:param checkpoint_path: Path to `.ckpt` file.
:param original_config_file: Path to `.yaml` config file corresponding to the original architecture.
If `None`, will be automatically inferred by looking for a key that only exists in SD2.0 models.
:param prediction_type: The prediction type that the model was trained on. Use `'epsilon'` for Stable Diffusion
v1.X and Stable Diffusion v2 Base. Use `'v-prediction'` for Stable Diffusion v2.
:param scheduler_type: Type of scheduler to use. Should be one of `["pndm", "lms", "heun", "euler",
"euler-ancestral", "dpm", "ddim"]`. :param model_type: The pipeline type. `None` to automatically infer, or one of
`["FrozenOpenCLIPEmbedder", "FrozenCLIPEmbedder"]`. :param extract_ema: Only relevant for
@ -1030,17 +1001,16 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
or not. Defaults to `False`. Pass `True` to extract the EMA weights. EMA weights usually yield higher
quality images for inference. Non-EMA weights are usually better to continue fine-tuning.
:param precision: precision to use - torch.float16, torch.float32 or torch.autocast
:param upcast_attention: Whether the attention computation should always be upcasted. This is necessary when
running stable diffusion 2.1.
"""
config = InvokeAIAppConfig.get_config()
if not isinstance(checkpoint_path, Path):
checkpoint_path = Path(checkpoint_path)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
verbosity = dlogging.get_verbosity()
dlogging.set_verbosity_error()
if str(checkpoint_path).endswith(".safetensors"):
if checkpoint_path.suffix == ".safetensors":
checkpoint = load_file(checkpoint_path)
else:
if scan_needed:
@ -1053,9 +1023,13 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
original_config = OmegaConf.load(original_config_file)
if model_version == BaseModelType.StableDiffusion2 and prediction_type == SchedulerPredictionType.VPrediction:
if model_version == BaseModelType.StableDiffusion2 and original_config["model"]["params"]["parameterization"] == "v":
prediction_type = "v_prediction"
upcast_attention = True
image_size = 768
else:
prediction_type = "epsilon"
upcast_attention = False
image_size = 512
#
@ -1110,7 +1084,7 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
if model_type == "FrozenOpenCLIPEmbedder":
text_model = convert_open_clip_checkpoint(checkpoint)
tokenizer = CLIPTokenizer.from_pretrained(
MODEL_ROOT / 'stable-diffusion-2-clip',
CONVERT_MODEL_ROOT / 'stable-diffusion-2-clip',
subfolder='tokenizer',
)
pipe = StableDiffusionPipeline(
@ -1126,9 +1100,9 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
elif model_type in ["FrozenCLIPEmbedder", "WeightedFrozenCLIPEmbedder"]:
text_model = convert_ldm_clip_checkpoint(checkpoint)
tokenizer = CLIPTokenizer.from_pretrained(MODEL_ROOT / 'clip-vit-large-patch14')
safety_checker = StableDiffusionSafetyChecker.from_pretrained(MODEL_ROOT / 'stable-diffusion-safety-checker')
feature_extractor = AutoFeatureExtractor.from_pretrained(MODEL_ROOT / 'stable-diffusion-safety-checker')
tokenizer = CLIPTokenizer.from_pretrained(CONVERT_MODEL_ROOT / 'clip-vit-large-patch14')
safety_checker = StableDiffusionSafetyChecker.from_pretrained(CONVERT_MODEL_ROOT / 'stable-diffusion-safety-checker')
feature_extractor = AutoFeatureExtractor.from_pretrained(CONVERT_MODEL_ROOT / 'stable-diffusion-safety-checker')
pipe = StableDiffusionPipeline(
vae=vae.to(precision),
text_encoder=text_model.to(precision),
@ -1142,7 +1116,7 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
else:
text_config = create_ldm_bert_config(original_config)
text_model = convert_ldm_bert_checkpoint(checkpoint, text_config)
tokenizer = BertTokenizerFast.from_pretrained(MODEL_ROOT / "bert-base-uncased")
tokenizer = BertTokenizerFast.from_pretrained(CONVERT_MODEL_ROOT / "bert-base-uncased")
pipe = LDMTextToImagePipeline(
vqvae=vae,
bert=text_model,
@ -1158,7 +1132,6 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
def convert_ckpt_to_diffusers(
checkpoint_path: Union[str, Path],
dump_path: Union[str, Path],
model_root: Union[str, Path],
**kwargs,
):
"""
@ -1166,9 +1139,6 @@ def convert_ckpt_to_diffusers(
and in addition a path-like object indicating the location of the desired diffusers
model to be written.
"""
# setting global here to avoid massive changes late at night
global MODEL_ROOT
MODEL_ROOT = Path(model_root) / 'core/convert'
pipe = load_pipeline_from_original_stable_diffusion_ckpt(checkpoint_path, **kwargs)
pipe.save_pretrained(

View File

@ -70,7 +70,7 @@ class LoRALayerBase:
op = torch.nn.functional.linear
extra_args = {}
weight = self.get_weight(module)
weight = self.get_weight()
bias = self.bias if self.bias is not None else 0
scale = self.alpha / self.rank if (self.alpha and self.rank) else 1.0
@ -81,7 +81,7 @@ class LoRALayerBase:
**extra_args,
) * multiplier * scale
def get_weight(self, module: torch.nn.Module):
def get_weight(self):
raise NotImplementedError()
def calc_size(self) -> int:
@ -122,7 +122,7 @@ class LoRALayer(LoRALayerBase):
self.rank = self.down.shape[0]
def get_weight(self, module: torch.nn.Module):
def get_weight(self):
if self.mid is not None:
up = self.up.reshape(self.up.shape[0], self.up.shape[1])
down = self.down.reshape(self.down.shape[0], self.down.shape[1])
@ -166,7 +166,7 @@ class LoHALayer(LoRALayerBase):
layer_key: str,
values: dict,
):
super().__init__(module_key, rank, alpha, bias)
super().__init__(layer_key, values)
self.w1_a = values["hada_w1_a"]
self.w1_b = values["hada_w1_b"]
@ -185,7 +185,7 @@ class LoHALayer(LoRALayerBase):
self.rank = self.w1_b.shape[0]
def get_weight(self, module: torch.nn.Module):
def get_weight(self):
if self.t1 is None:
weight = (self.w1_a @ self.w1_b) * (self.w2_a @ self.w2_b)
@ -239,7 +239,7 @@ class LoKRLayer(LoRALayerBase):
layer_key: str,
values: dict,
):
super().__init__(module_key, rank, alpha, bias)
super().__init__(layer_key, values)
if "lokr_w1" in values:
self.w1 = values["lokr_w1"]
@ -271,7 +271,7 @@ class LoKRLayer(LoRALayerBase):
else:
self.rank = None # unscaled
def get_weight(self, module: torch.nn.Module):
def get_weight(self):
w1 = self.w1
if w1 is None:
w1 = self.w1_a @ self.w1_b
@ -286,7 +286,7 @@ class LoKRLayer(LoRALayerBase):
if len(w2.shape) == 4:
w1 = w1.unsqueeze(2).unsqueeze(2)
w2 = w2.contiguous()
weight = torch.kron(w1, w2).reshape(module.weight.shape) # TODO: can we remove reshape?
weight = torch.kron(w1, w2)
return weight
@ -471,7 +471,7 @@ class ModelPatcher:
submodule_name += "_" + key_parts.pop(0)
module = module.get_submodule(submodule_name)
module_key = module_key.rstrip(".")
module_key = (module_key + "." + submodule_name).lstrip(".")
return (module_key, module)
@ -525,23 +525,36 @@ class ModelPatcher:
loras: List[Tuple[LoraModel, float]],
prefix: str,
):
hooks = dict()
original_weights = dict()
try:
for lora, lora_weight in loras:
for layer_key, layer in lora.layers.items():
if not layer_key.startswith(prefix):
continue
with torch.no_grad():
for lora, lora_weight in loras:
#assert lora.device.type == "cpu"
for layer_key, layer in lora.layers.items():
if not layer_key.startswith(prefix):
continue
module_key, module = cls._resolve_lora_key(model, layer_key, prefix)
if module_key not in hooks:
hooks[module_key] = module.register_forward_hook(cls._lora_forward_hook(loras, layer_key))
module_key, module = cls._resolve_lora_key(model, layer_key, prefix)
if module_key not in original_weights:
original_weights[module_key] = module.weight.detach().to(device="cpu", copy=True)
# enable autocast to calc fp16 loras on cpu
with torch.autocast(device_type="cpu"):
layer_scale = layer.alpha / layer.rank if (layer.alpha and layer.rank) else 1.0
layer_weight = layer.get_weight() * lora_weight * layer_scale
if module.weight.shape != layer_weight.shape:
# TODO: debug on lycoris
layer_weight = layer_weight.reshape(module.weight.shape)
module.weight += layer_weight.to(device=module.weight.device, dtype=module.weight.dtype)
yield # wait for context manager exit
finally:
for module_key, hook in hooks.items():
hook.remove()
hooks.clear()
with torch.no_grad():
for module_key, weight in original_weights.items():
model.get_submodule(module_key).weight.copy_(weight)
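# --- illustrative sketch (not part of the original source) ---
# The block above swaps forward hooks for direct weight patching:
# W' = W + multiplier * scale * get_weight(). A minimal standalone demo of the
# same idea on a plain Linear layer:
#
#   import torch
#   layer = torch.nn.Linear(8, 8)
#   up, down = torch.randn(8, 2), torch.randn(2, 8)  # rank-2 LoRA factors
#   original = layer.weight.detach().clone()
#   with torch.no_grad():
#       layer.weight += (up @ down) * 0.75           # apply at multiplier 0.75
#       # ... run inference here ...
#       layer.weight.copy_(original)                 # restore on context exit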
@classmethod
@ -591,7 +604,7 @@ class ModelPatcher:
f"Cannot load embedding for {trigger}. It was trained on a model with token dimension {embedding.shape[0]}, but the current model has token dimension {model_embeddings.weight.data[token_id].shape[0]}."
)
model_embeddings.weight.data[token_id] = embedding
model_embeddings.weight.data[token_id] = embedding.to(device=text_encoder.device, dtype=text_encoder.dtype)
ti_tokens.append(token_id)
if len(ti_tokens) > 1:

View File

@ -100,8 +100,6 @@ class ModelCache(object):
:param sha_chunksize: Chunksize to use when calculating sha256 model hash
'''
#max_cache_size = 9999
execution_device = torch.device('cuda')
self.model_infos: Dict[str, ModelBase] = dict()
self.lazy_offloading = lazy_offloading
#self.sequential_offload: bool=sequential_offload
@ -354,7 +352,9 @@ class ModelCache(object):
for model_key, cache_entry in self._cached_models.items():
if not cache_entry.locked and cache_entry.loaded:
self.logger.debug(f'Offloading {model_key} from {self.execution_device} into {self.storage_device}')
cache_entry.model.to(self.storage_device)
with VRAMUsage() as mem:
cache_entry.model.to(self.storage_device)
self.logger.debug(f'GPU VRAM freed: {(mem.vram_used/GIG):.2f} GB')
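# --- illustrative sketch (not part of the original source) ---
# VRAMUsage is not defined in this excerpt; a context manager with the usage
# shown above could be built on torch.cuda.memory_allocated(), e.g.:
#
#   class VRAMUsage:
#       def __enter__(self):
#           self.start = torch.cuda.memory_allocated()
#           return self
#       def __exit__(self, *exc):
#           self.vram_used = torch.cuda.memory_allocated() - self.start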
def _local_model_hash(self, model_path: Union[str, Path]) -> str:
sha = hashlib.sha256()

View File

@ -1,118 +0,0 @@
"""
Routines for downloading and installing models.
"""
import json
import safetensors
import safetensors.torch
import shutil
import tempfile
import torch
import traceback
from dataclasses import dataclass
from diffusers import ModelMixin
from enum import Enum
from typing import Callable
from pathlib import Path
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from . import ModelManager
from .models import BaseModelType, ModelType, VariantType
from .model_probe import ModelProbe, ModelVariantInfo
from .model_cache import SilenceWarnings
class ModelInstall(object):
'''
This class is able to download and install several different kinds of
InvokeAI models. The helper function, if provided, is called on to distinguish
between v2-base and v2-768 stable diffusion pipelines. This usually involves
asking the user to select the proper type, as there is no way of distinguishing
the two types of v2 file programmatically (as far as I know).
'''
def __init__(self,
config: InvokeAIAppConfig,
model_base_helper: Callable[[Path],BaseModelType]=None,
clobber:bool = False
):
'''
:param config: InvokeAI configuration object
:param model_base_helper: A function that accepts the Path to a checkpoint model and returns a BaseModelType enum
:param clobber: If true, models with colliding names will be overwritten
'''
self.config = config
self.clobber = clobber
self.helper = model_base_helper
self.prober = ModelProbe()
def install_checkpoint_file(self, checkpoint: Path)->dict:
'''
Install the checkpoint file at path and return a
configuration entry that can be added to `models.yaml`.
Model checkpoints and VAEs will be converted into
diffusers before installation. Note that the model manager
does not hold entries for anything but diffusers pipelines,
and the configuration file stanzas returned from such models
can be safely ignored.
'''
model_info = self.prober.probe(checkpoint, self.helper)
if not model_info:
raise ValueError(f"Unable to determine type of checkpoint file {checkpoint}")
key = ModelManager.create_key(
model_name = checkpoint.stem,
base_model = model_info.base_type,
model_type = model_info.model_type,
)
destination_path = self._dest_path(model_info) / checkpoint
destination_path.parent.mkdir(parents=True, exist_ok=True)
self._check_for_collision(destination_path)
stanza = {
key: dict(
name = checkpoint.stem,
description = f'{model_info.model_type} model {checkpoint.stem}',
base = model_info.base_model.value,
type = model_info.model_type.value,
variant = model_info.variant_type.value,
path = str(destination_path),
)
}
# non-pipeline; no conversion needed, just copy into right place
if model_info.model_type != ModelType.Pipeline:
shutil.copyfile(checkpoint, destination_path)
stanza[key].update({'format': 'checkpoint'})
# pipeline - conversion needed here
else:
destination_path = self._dest_path(model_info) / checkpoint.stem
config_file = self._pipeline_type_to_config_file(model_info.model_type)
from .convert_ckpt_to_diffusers import convert_ckpt_to_diffusers
with SilenceWarnings():
convert_ckpt_to_diffusers(
checkpoint,
destination_path,
extract_ema=True,
original_config_file=config_file,
scan_needed=False,
)
stanza[key].update({'format': 'folder',
'path': destination_path, # no suffix on this
})
return stanza
def _check_for_collision(self, path: Path):
if not path.exists():
return
if self.clobber:
shutil.rmtree(path)
else:
raise ValueError(f"Destination {path} already exists. Won't overwrite unless clobber=True.")
def _staging_directory(self)->tempfile.TemporaryDirectory:
return tempfile.TemporaryDirectory(dir=self.config.root_path)

View File

@ -1,53 +1,209 @@
"""This module manages the InvokeAI `models.yaml` file, mapping
symbolic diffusers model names to the paths and repo_ids used
by the underlying `from_pretrained()` call.
symbolic diffusers model names to the paths and repo_ids used by the
underlying `from_pretrained()` call.
For fetching models, use manager.get_model('symbolic name'). This will
return a ModelInfo object that contains the following attributes:
* context -- a context manager Generator that loads and locks the
model into GPU VRAM and returns the model for use.
See below for usage.
* name -- symbolic name of the model
* type -- SubModelType of the model
* hash -- unique hash for the model
* location -- path or repo_id of the model
* revision -- revision of the model if coming from a repo id,
e.g. 'fp16'
* precision -- torch precision of the model
SYNOPSIS:
Typical usage:
mgr = ModelManager('/home/phi/invokeai/configs/models.yaml')
sd1_5 = mgr.get_model('stable-diffusion-v1-5',
model_type=ModelType.Main,
base_model=BaseModelType.StableDiffusion1,
submodel_type=SubModelType.Unet)
with sd1_5 as unet:
run_some_inference(unet)
from invokeai.backend import ModelManager
FETCHING MODELS:
manager = ModelManager(
config='./configs/models.yaml',
max_cache_size=8
) # gigabytes
Models are described using four attributes:
model_info = manager.get_model('stable-diffusion-1.5', SubModelType.Diffusers)
with model_info.context as my_model:
my_model.latents_from_embeddings(...)
1) model_name -- the symbolic name for the model
The manager uses the underlying ModelCache class to keep
frequently-used models in RAM and move them into GPU as needed for
generation operations. The optional `max_cache_size` argument
indicates the maximum size the cache can grow to, in gigabytes. The
underlying ModelCache object can be accessed using the manager's "cache"
attribute.
2) ModelType -- an enum describing the type of the model. Currently
defined types are:
ModelType.Main -- a full model capable of generating images
ModelType.Vae -- a VAE model
ModelType.Lora -- a LoRA or LyCORIS fine-tune
ModelType.TextualInversion -- a textual inversion embedding
ModelType.ControlNet -- a ControlNet model
Because the model manager can return multiple different types of
models, you may wish to add additional type checking on the class
of model returned. To do this, provide the option `model_type`
parameter:
3) BaseModelType -- an enum indicating the stable diffusion base model, one of:
BaseModelType.StableDiffusion1
BaseModelType.StableDiffusion2
model_info = manager.get_model(
'clip-tokenizer',
model_type=SubModelType.Tokenizer
)
4) SubModelType (optional) -- an enum that refers to one of the submodels contained
within the main model. Values are:
This will raise an InvalidModelError if the format defined in the
config file doesn't match the requested model type.
SubModelType.UNet
SubModelType.TextEncoder
SubModelType.Tokenizer
SubModelType.Scheduler
SubModelType.SafetyChecker
To fetch a model, use `manager.get_model()`. This takes the symbolic
name of the model, the ModelType, the BaseModelType and the
SubModelType. The latter is required for ModelType.Main.
get_model() will return a ModelInfo object that can then be used in
context to retrieve the model and move it into GPU VRAM (on GPU
systems).
A typical example is:
sd1_5 = mgr.get_model('stable-diffusion-v1-5',
model_type=ModelType.Main,
base_model=BaseModelType.StableDiffusion1,
submodel_type=SubModelType.UNet)
with sd1_5 as unet:
run_some_inference(unet)
The ModelInfo object provides a number of useful fields describing the
model, including:
name -- symbolic name of the model
base_model -- base model (BaseModelType)
type -- model type (ModelType)
location -- path to the model file
precision -- torch precision of the model
hash -- unique sha256 checksum for this model
SUBMODELS:
When fetching a main model, you must specify the submodel. Retrieval
of full pipelines is not supported.
vae_info = mgr.get_model('stable-diffusion-1.5',
model_type = ModelType.Main,
base_model = BaseModelType.StableDiffusion1,
submodel_type = SubModelType.Vae
)
with vae_info as vae:
do_something(vae)
This rule does not apply to controlnets, embeddings, loras and standalone
VAEs, which do not have submodels.
LISTING MODELS
The model_names() method will return a list of Tuples describing each
model it knows about:
>> mgr.model_names()
[
('stable-diffusion-1.5', <BaseModelType.StableDiffusion1: 'sd-1'>, <ModelType.Main: 'main'>),
('stable-diffusion-2.1', <BaseModelType.StableDiffusion2: 'sd-2'>, <ModelType.Main: 'main'>),
('inpaint', <BaseModelType.StableDiffusion1: 'sd-1'>, <ModelType.ControlNet: 'controlnet'>)
('Ink scenery', <BaseModelType.StableDiffusion1: 'sd-1'>, <ModelType.Lora: 'lora'>)
...
]
The tuple is in the correct order to pass to get_model():
for m in mgr.model_names():
info = get_model(*m)
In contrast, the list_models() method returns a list of dicts, each
providing information about a model defined in models.yaml. For example:
>>> models = mgr.list_models()
>>> json.dumps(models[0])
{"path": "/home/lstein/invokeai-main/models/sd-1/controlnet/canny",
"model_format": "diffusers",
"name": "canny",
"base_model": "sd-1",
"type": "controlnet"
}
You can filter by model type and base model as shown here:
controlnets = mgr.list_models(model_type=ModelType.ControlNet,
base_model=BaseModelType.StableDiffusion1)
for c in controlnets:
name = c['name']
format = c['model_format']
path = c['path']
type = c['type']
# etc
ADDING AND REMOVING MODELS
At startup time, the `models` directory will be scanned for
checkpoints, diffusers pipelines, controlnets, LoRAs and TI
embeddings. New entries will be added to the model manager and defunct
ones removed. Anything that is a main model (ModelType.Main) will be
added to models.yaml. For scanning to succeed, files need to be in
their proper places. For example, a controlnet folder built on the
stable diffusion 2 base, will need to be placed in
`models/sd-2/controlnet`.
Layout of the `models` directory:
models
├── sd-1
│   ├── controlnet
│   ├── lora
│   ├── main
│   └── embedding
├── sd-2
│   ├── controlnet
│   ├── lora
│   ├── main
│ └── embedding
└── core
├── face_reconstruction
│ ├── codeformer
│ └── gfpgan
├── sd-conversion
│ ├── clip-vit-large-patch14 - tokenizer, text_encoder subdirs
│ ├── stable-diffusion-2 - tokenizer, text_encoder subdirs
│ └── stable-diffusion-safety-checker
└── upscaling
└─── esrgan
Loras, textual_inversion and controlnet models are not listed
explicitly in models.yaml, but are added to the in-memory data
structure at initialization time by scanning the models directory. The
in-memory data structure can be resynchronized by calling
`manager.scan_models_directory()`.
Files and folders placed inside the `autoimport` paths (paths
defined in `invokeai.yaml`) will also be scanned for new models at
initialization time and added to `models.yaml`. Files will not be
moved from this location but preserved in-place. These directories
are:
configuration default description
------------- ------- -----------
autoimport_dir autoimport/main main models
lora_dir autoimport/lora LoRA/LyCORIS models
embedding_dir autoimport/embedding TI embeddings
controlnet_dir autoimport/controlnet ControlNet models
In actuality, models located in any of these directories are scanned
to determine their type, so it isn't strictly necessary to organize
the different types in this way. This entry in `invokeai.yaml` will
recursively scan all subdirectories within `autoimport`, scan models
files it finds, and import them if recognized.
Paths:
autoimport_dir: autoimport
A model can be manually added using `add_model()` using the model's
name, base model, type and a dict of model attributes. See
`invokeai/backend/model_management/models` for the attributes required
by each model type.
A model can be deleted using `del_model()`, providing the same
identifying information as `get_model()`
The `heuristic_import()` method will take a set of strings
corresponding to local paths, remote URLs, and repo_ids, probe the
object to determine what type of model it is (if any), and import new
models into the manager. If passed a directory, it will recursively
scan it for models to import. The return value is a set of the models
successfully added.
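A hypothetical call (the inputs are illustrative, not part of this file):
  installed = mgr.heuristic_import({
      '/tmp/models/my-checkpoint.safetensors',  # local file
      'runwayml/stable-diffusion-v1-5',         # HuggingFace repo_id
      'https://example.com/models/foo.ckpt',    # remote URL
  })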
MODELS.YAML
@ -56,93 +212,18 @@ The general format of a models.yaml section is:
type-of-model/name-of-model:
path: /path/to/local/file/or/directory
description: a description
format: folder|ckpt|safetensors|pt
base: SD-1|SD-2
subfolder: subfolder-name
format: diffusers|checkpoint
variant: normal|inpaint|depth
The type of model is given in the stanza key, and is one of
{diffusers, ckpt, vae, text_encoder, tokenizer, unet, scheduler,
safety_checker, feature_extractor, lora, textual_inversion,
controlnet}, and correspond to items in the SubModelType enum defined
in model_cache.py
{main, vae, lora, controlnet, textual}
The format indicates whether the model is organized as a folder with
model subdirectories, or is contained in a single checkpoint or
safetensors file.
The format indicates whether the model is organized as a diffusers
folder with model subdirectories, or is contained in a single
checkpoint or safetensors file.
One, but not both, of repo_id and path are provided. repo_id is the
HuggingFace repository ID of the model, and path points to the file or
directory on disk.
If subfolder is provided, then the model exists in a subdirectory of
the main model. These are usually named after the model type, such as
"unet".
This example summarizes the two ways of getting a non-diffuser model:
text_encoder/clip-test-1:
format: folder
path: /path/to/folder
description: Returns standalone CLIPTextModel
text_encoder/clip-test-2:
format: folder
repo_id: /path/to/folder
subfolder: text_encoder
description: Returns the text_encoder in the subfolder of the diffusers model (just the encoder in RAM)
SUBMODELS:
It is also possible to fetch an isolated submodel from a diffusers
model. Use the `submodel` parameter to select which part:
vae = manager.get_model('stable-diffusion-1.5',submodel=SubModelType.Vae)
with vae.context as my_vae:
print(type(my_vae))
# "AutoencoderKL"
DIRECTORY_SCANNING:
Loras, textual_inversion and controlnet models are usually not listed
explicitly in models.yaml, but are added to the in-memory data
structure at initialization time by scanning the models directory. The
in-memory data structure can be resynchronized by calling
`manager.scan_models_directory`.
DISAMBIGUATION:
You may wish to use the same name for a related family of models. To
do this, disambiguate the stanza key with the model format and name
separated by "/". Example:
tokenizer/clip-large:
format: tokenizer
path: /path/to/folder
description: Returns standalone tokenizer
text_encoder/clip-large:
format: text_encoder
path: /path/to/folder
description: Returns standalone text encoder
You can now use the `model_type` argument to indicate which model you
want:
tokenizer = mgr.get('clip-large',model_type=SubModelType.Tokenizer)
encoder = mgr.get('clip-large',model_type=SubModelType.TextEncoder)
OTHER FUNCTIONS:
Other methods provided by ModelManager support importing, editing,
converting and deleting models.
IMPORTANT CHANGES AND LIMITATIONS SINCE 2.3:
1. Only local paths are supported. Repo_ids are no longer accepted. This
simplifies the logic.
2. VAEs can't be swapped in and out at load time. They must be baked
into the model when downloaded or converted.
The path points to a file or directory on disk. If a relative path,
the root is the InvokeAI ROOTDIR.
"""
from __future__ import annotations
@ -151,13 +232,11 @@ import os
import hashlib
import textwrap
from dataclasses import dataclass
from packaging import version
from pathlib import Path
from typing import Dict, Optional, List, Tuple, Union, types
from typing import Optional, List, Tuple, Union, Set, Callable, types
from shutil import rmtree
import torch
from huggingface_hub import scan_cache_dir
from omegaconf import OmegaConf
from omegaconf.dictconfig import DictConfig
@ -165,9 +244,13 @@ from pydantic import BaseModel
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.util import CUDA_DEVICE, download_with_resume
from invokeai.backend.util import CUDA_DEVICE, Chdir
from .model_cache import ModelCache, ModelLocker
from .models import BaseModelType, ModelType, SubModelType, ModelError, MODEL_CLASSES
from .models import (
BaseModelType, ModelType, SubModelType,
ModelError, SchedulerPredictionType, MODEL_CLASSES,
ModelConfigBase,
)
# We are only starting to number the config file with release 3.
# The config file version doesn't have to start at release version, but it will help
@ -183,7 +266,6 @@ class ModelInfo():
hash: str
location: Union[Path, str]
precision: torch.dtype
revision: str = None
_cache: ModelCache = None
def __enter__(self):
@ -199,31 +281,6 @@ class InvalidModelError(Exception):
MAX_CACHE_SIZE = 6.0 # GB
# layout of the models directory:
# models
# ├── sd-1
# │   ├── controlnet
# │   ├── lora
# │   ├── pipeline
# │   └── textual_inversion
# ├── sd-2
# │   ├── controlnet
# │   ├── lora
# │   ├── pipeline
# │ └── textual_inversion
# └── core
# ├── face_reconstruction
# │ ├── codeformer
# │ └── gfpgan
# ├── sd-conversion
# │ ├── clip-vit-large-patch14 - tokenizer, text_encoder subdirs
# │ ├── stable-diffusion-2 - tokenizer, text_encoder subdirs
# │ └── stable-diffusion-safety-checker
# └── upscaling
# └─── esrgan
class ConfigMeta(BaseModel):
version: str
@ -271,7 +328,7 @@ class ModelManager(object):
self.models[model_key] = model_class.create_config(**model_config)
# check config version number and update on disk/RAM if necessary
self.globals = InvokeAIAppConfig.get_config()
self.app_config = InvokeAIAppConfig.get_config()
self.logger = logger
self.cache = ModelCache(
max_cache_size=max_cache_size,
@ -307,7 +364,8 @@ class ModelManager(object):
) -> str:
return f"{base_model}/{model_type}/{model_name}"
def parse_key(self, model_key: str) -> Tuple[str, BaseModelType, ModelType]:
@classmethod
def parse_key(cls, model_key: str) -> Tuple[str, BaseModelType, ModelType]:
base_model_str, model_type_str, model_name = model_key.split('/', 2)
try:
model_type = ModelType(model_type_str)
@ -321,86 +379,44 @@ class ModelManager(object):
return (model_name, base_model, model_type)
def _get_model_cache_path(self, model_path):
return self.app_config.models_path / ".cache" / hashlib.md5(str(model_path).encode()).hexdigest()
def get_model(
self,
model_name: str,
base_model: BaseModelType,
model_type: ModelType,
submodel_type: Optional[SubModelType] = None
):
)->ModelInfo:
"""Given a model named identified in models.yaml, return
an ModelInfo object describing it.
:param model_name: symbolic name of the model in models.yaml
:param model_type: ModelType enum indicating the type of model to return
:param base_model: BaseModelType enum indicating the base model used by this model
:param submodel_type: a SubModelType enum indicating the portion of
the model to retrieve (e.g. ModelType.Vae)
If not provided, the model_type will be read from the `format` field
of the corresponding stanza. If provided, the model_type will be used
to disambiguate stanzas in the configuration file. The default is to
assume a diffusers pipeline. The behavior is illustrated here:
[models.yaml]
diffusers/test1:
repo_id: foo/bar
description: Typical diffusers pipeline
lora/test1:
repo_id: /tmp/loras/test1.safetensors
description: Typical lora file
test1_pipeline = mgr.get_model('test1')
# returns a StableDiffusionGeneratorPipeline
test1_vae1 = mgr.get_model('test1', submodel=ModelType.Vae)
# returns the VAE part of a diffusers model as an AutoencoderKL
test1_vae2 = mgr.get_model('test1', model_type=ModelType.Diffusers, submodel=ModelType.Vae)
# does the same thing as the previous statement. Note that model_type
# is for the parent model, and submodel is for the part
test1_lora = mgr.get_model('test1', model_type=ModelType.Lora)
# returns a LoRA embed (as a 'dict' of tensors)
test1_encoder = mgr.get_model('test1', model_type=ModelType.TextEncoder)
# raises an InvalidModelError
"""
model_class = MODEL_CLASSES[base_model][model_type]
model_key = self.create_key(model_name, base_model, model_type)
# if the model is not found, try to locate it (perhaps the file was just pasted in)
if model_key not in self.models:
# TODO: find by mask or try rescan?
path_mask = f"/models/{base_model}/{model_type}/{model_name}*"
if False: # model_path = next(find_by_mask(path_mask)):
model_path = None # TODO:
model_config = model_class.probe_config(model_path)
self.models[model_key] = model_config
else:
self.scan_models_directory(base_model=base_model, model_type=model_type)
if model_key not in self.models:
raise Exception(f"Model not found - {model_key}")
# if it is a known model, check that the target path exists (it may have been manually deleted)
else:
# logic repeated twice (in rescan too); any way to optimize?
if not os.path.exists(self.models[model_key].path):
if model_class.save_to_config:
self.models[model_key].error = ModelError.NotFound
raise Exception(f"Files for model \"{model_key}\" not found")
else:
self.models.pop(model_key, None)
raise Exception(f"Model not found - {model_key}")
# reset model errors?
model_config = self.models[model_key]
model_path = self.app_config.root_path / model_config.path
# /models/{base_model}/{model_type}/{name}.ckpt or .safetensors
# /models/{base_model}/{model_type}/{name}/
model_path = model_config.path
if not model_path.exists():
if model_class.save_to_config:
self.models[model_key].error = ModelError.NotFound
raise Exception(f"Files for model \"{model_key}\" not found")
else:
self.models.pop(model_key, None)
raise Exception(f"Model not found - {model_key}")
# vae/movq override
# TODO:
@ -414,10 +430,10 @@ class ModelManager(object):
# TODO: path
# TODO: is it accurate to use path as id
dst_convert_path = self.globals.models_dir / ".cache" / hashlib.md5(model_path.encode()).hexdigest()
dst_convert_path = self._get_model_cache_path(model_path)
model_path = model_class.convert_if_required(
base_model=base_model,
model_path=model_path,
model_path=str(model_path), # TODO: refactor str/Path types logic
output_path=dst_convert_path,
config=model_config,
)
@ -476,11 +492,6 @@ class ModelManager(object):
) -> list[dict]:
"""
Return a list of models.
Please use model_manager.models() to get all the model names,
model_manager.model_info('model-name') to get the stanza for the model
named 'model-name', and model_manager.config to get the full OmegaConf
object derived from models.yaml
"""
models = []
@ -507,7 +518,7 @@ class ModelManager(object):
def print_models(self) -> None:
"""
Print a table of models, their descriptions
Print a table of models and their descriptions. This needs to be redone
"""
# TODO: redo
for model_type, model_dict in self.list_models().items():
@ -515,7 +526,7 @@ class ModelManager(object):
line = f'{model_info["name"]:25s} {model_info["type"]:10s} {model_info["description"]}'
print(line)
# TODO: test when ui implemented
# Tested - LS
def del_model(
self,
model_name: str,
@ -525,7 +536,6 @@ class ModelManager(object):
"""
Delete the named model.
"""
raise Exception("TODO: del_model") # TODO: redo
model_key = self.create_key(model_name, base_model, model_type)
model_cfg = self.models.pop(model_key, None)
@ -541,14 +551,18 @@ class ModelManager(object):
self.cache.uncache_model(cache_id)
# if model inside invoke models folder - delete files
if model_cfg.path.startswith("models/") or model_cfg.path.startswith("models\\"):
model_path = self.globals.root_dir / model_cfg.path
if model_path.isdir():
shutil.rmtree(str(model_path))
model_path = self.app_config.root_path / model_cfg.path
cache_path = self._get_model_cache_path(model_path)
if cache_path.exists():
rmtree(str(cache_path))
if model_path.is_relative_to(self.app_config.models_path):
if model_path.is_dir():
rmtree(str(model_path))
else:
model_path.unlink()
# TODO: test when ui implemented
# LS: tested
def add_model(
self,
model_name: str,
@ -569,18 +583,30 @@ class ModelManager(object):
model_config = model_class.create_config(**model_attributes)
model_key = self.create_key(model_name, base_model, model_type)
assert (
clobber or model_key not in self.models
), f'attempt to overwrite existing model definition "{model_key}"'
if model_key in self.models and not clobber:
raise Exception(f'Attempt to overwrite existing model definition "{model_key}"')
self.models[model_key] = model_config
if clobber and model_key in self.cache_keys:
old_model = self.models.pop(model_key, None)
if old_model is not None:
# TODO: if path changed and old_model.path inside models folder should we delete this too?
# remove conversion cache as config changed
old_model_path = self.app_config.root_path / old_model.path
old_model_cache = self._get_model_cache_path(old_model_path)
if old_model_cache.exists():
if old_model_cache.is_dir():
rmtree(str(old_model_cache))
else:
old_model_cache.unlink()
# remove in-memory cache
# note: this does not guarantee that memory is released (the model may have other references)
cache_ids = self.cache_keys.pop(model_key, [])
for cache_id in cache_ids:
self.cache.uncache_model(cache_id)
self.models[model_key] = model_config
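A usage sketch for registering a model definition; the attribute names are illustrative and follow the create_config(**model_attributes) call above:

# hypothetical registration; clobber=True replaces an existing definition
# and, per the logic above, drops its conversion and in-memory caches
manager.add_model(
    model_name="my-finetune",
    base_model=BaseModelType.StableDiffusion1,
    model_type=ModelType.Main,
    model_attributes={"path": "models/sd-1/main/my-finetune", "description": "example model"},
    clobber=True,
)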
def search_models(self, search_folder):
self.logger.info(f"Finding Models In: {search_folder}")
models_folder_ckpt = Path(search_folder).glob("**/*.ckpt")
@ -621,7 +647,7 @@ class ModelManager(object):
yaml_str = OmegaConf.to_yaml(data_to_save)
config_file_path = conf_file or self.config_path
assert config_file_path is not None,'no config file path to write to'
config_file_path = self.globals.root_dir / config_file_path
config_file_path = self.app_config.root_path / config_file_path
tmpfile = os.path.join(os.path.dirname(config_file_path), "new_config.tmp")
with open(tmpfile, "w", encoding="utf-8") as outfile:
outfile.write(self.preamble())
@ -644,42 +670,153 @@ class ModelManager(object):
"""
)
def scan_models_directory(self):
def scan_models_directory(
self,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
):
loaded_files = set()
new_models_found = False
for model_key, model_config in list(self.models.items()):
model_name, base_model, model_type = self.parse_key(model_key)
model_path = str(self.globals.root / model_config.path)
if not os.path.exists(model_path):
model_class = MODEL_CLASSES[base_model][model_type]
if model_class.save_to_config:
model_config.error = ModelError.NotFound
self.logger.info(f'scanning {self.app_config.models_path} for new models')
with Chdir(self.app_config.root_path):
for model_key, model_config in list(self.models.items()):
model_name, cur_base_model, cur_model_type = self.parse_key(model_key)
model_path = self.app_config.root_path.absolute() / model_config.path
if not model_path.exists():
model_class = MODEL_CLASSES[cur_base_model][cur_model_type]
if model_class.save_to_config:
model_config.error = ModelError.NotFound
self.models.pop(model_key, None)
else:
self.models.pop(model_key, None)
else:
self.models.pop(model_key, None)
else:
loaded_files.add(model_path)
loaded_files.add(model_path)
for base_model in BaseModelType:
for model_type in ModelType:
model_class = MODEL_CLASSES[base_model][model_type]
models_dir = os.path.join(self.globals.models_path, base_model, model_type)
for cur_base_model in BaseModelType:
if base_model is not None and cur_base_model != base_model:
continue
if not os.path.exists(models_dir):
continue # TODO: or create all folders?
for entry_name in os.listdir(models_dir):
model_path = os.path.join(models_dir, entry_name)
if model_path not in loaded_files: # TODO: check
model_name = Path(model_path).stem
model_key = self.create_key(model_name, base_model, model_type)
for cur_model_type in ModelType:
if model_type is not None and cur_model_type != model_type:
continue
model_class = MODEL_CLASSES[cur_base_model][cur_model_type]
models_dir = self.app_config.models_path / cur_base_model.value / cur_model_type.value
if model_key in self.models:
raise Exception(f"Model with key {model_key} added twice")
if not models_dir.exists():
continue # TODO: or create all folders?
model_config: ModelConfigBase = model_class.probe_config(model_path)
self.models[model_key] = model_config
new_models_found = True
for model_path in models_dir.iterdir():
if model_path not in loaded_files: # TODO: check
model_name = model_path.name if model_path.is_dir() else model_path.stem
model_key = self.create_key(model_name, cur_base_model, cur_model_type)
if new_models_found:
if model_key in self.models:
raise Exception(f"Model with key {model_key} added twice")
if model_path.is_relative_to(self.app_config.root_path):
model_path = model_path.relative_to(self.app_config.root_path)
try:
model_config: ModelConfigBase = model_class.probe_config(str(model_path))
self.models[model_key] = model_config
new_models_found = True
except NotImplementedError as e:
self.logger.warning(e)
imported_models = self.autoimport()
if (new_models_found or imported_models) and self.config_path:
self.commit()
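The new optional filters make targeted rescans cheap; a usage sketch (`manager` is an assumed ModelManager instance):

# rescan only the SD-1 LoRA folder instead of the whole models tree
manager.scan_models_directory(
    base_model=BaseModelType.StableDiffusion1,
    model_type=ModelType.Lora,
)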
def autoimport(self)->set[Path]:
'''
Scan the autoimport directory (if defined) and import new models, deleting defunct ones.
'''
# avoid circular import
from invokeai.backend.install.model_install_backend import ModelInstall
from invokeai.frontend.install.model_install import ask_user_for_prediction_type
installer = ModelInstall(config = self.app_config,
model_manager = self,
prediction_type_helper = ask_user_for_prediction_type,
)
installed = set()
scanned_dirs = set()
config = self.app_config
known_paths = {(self.app_config.root_path / x['path']) for x in self.list_models()}
for autodir in [config.autoimport_dir,
config.lora_dir,
config.embedding_dir,
config.controlnet_dir]:
if autodir is None:
continue
self.logger.info(f'Scanning {autodir} for models to import')
autodir = self.app_config.root_path / autodir
if not autodir.exists():
continue
items_scanned = 0
new_models_found = set()
for root, dirs, files in os.walk(autodir):
items_scanned += len(dirs) + len(files)
for d in dirs:
path = Path(root) / d
if path in known_paths or path.parent in scanned_dirs:
scanned_dirs.add(path)
continue
if any([(path/x).exists() for x in {'config.json','model_index.json','learned_embeds.bin'}]):
new_models_found.update(installer.heuristic_install(path))
scanned_dirs.add(path)
for f in files:
path = Path(root) / f
if path in known_paths or path.parent in scanned_dirs:
continue
if path.suffix in {'.ckpt','.bin','.pth','.safetensors','.pt'}:
new_models_found.update(installer.heuristic_install(path))
self.logger.info(f'Scanned {items_scanned} files and directories, imported {len(new_models_found)} models')
installed.update(new_models_found)
return installed
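The scan heuristics above reduce to a small decision rule; a standalone sketch of the same logic (not the actual helper; the marker files and suffixes are copied from the loop above):

from pathlib import Path

def looks_like_model(path: Path) -> bool:
    # a directory counts as a model if it carries one of these marker files;
    # a file counts as a model if it has a known checkpoint suffix
    markers = {"config.json", "model_index.json", "learned_embeds.bin"}
    if path.is_dir():
        return any((path / m).exists() for m in markers)
    return path.suffix in {".ckpt", ".bin", ".pth", ".safetensors", ".pt"}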
def heuristic_import(self,
items_to_import: Set[str],
prediction_type_helper: Callable[[Path],SchedulerPredictionType]=None,
)->Set[str]:
'''Import a list of paths, repo_ids or URLs. Returns the set of
successfully imported items.
:param items_to_import: Set of strings corresponding to models to be imported.
:param prediction_type_helper: A callback that receives the Path of a Stable Diffusion 2 checkpoint model and returns a SchedulerPredictionType.
The prediction type helper is necessary to distinguish between
models based on Stable Diffusion 2 Base (requiring
SchedulerPredictionType.Epsilon) and Stable Diffusion 2 768
(requiring SchedulerPredictionType.VPrediction). It is
generally impossible to do this programmatically, so the
prediction_type_helper usually asks the user to choose.
'''
# avoid circular import here
from invokeai.backend.install.model_install_backend import ModelInstall
successfully_installed = set()
installer = ModelInstall(config = self.app_config,
prediction_type_helper = prediction_type_helper,
model_manager = self)
for thing in items_to_import:
try:
installed = installer.heuristic_install(thing)
successfully_installed.update(installed)
except Exception as e:
self.logger.warning(f'{thing} could not be imported: {str(e)}')
self.commit()
return successfully_installed
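A usage sketch; `manager` is an assumed initialized ModelManager, and the path, repo_id and URL are illustrative only:

installed = manager.heuristic_import(
    {
        "/downloads/analog-diffusion-1.0.safetensors",    # local checkpoint
        "runwayml/stable-diffusion-v1-5",                 # HuggingFace repo_id
        "https://civitai.com/api/download/models/63006",  # direct URL
    },
    prediction_type_helper=lambda path: SchedulerPredictionType.Epsilon,
)
print(f"imported: {installed}")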


@ -1,27 +1,28 @@
import json
import traceback
import torch
import safetensors.torch
from dataclasses import dataclass
from enum import Enum
from diffusers import ModelMixin, ConfigMixin, StableDiffusionPipeline, AutoencoderKL, ControlNetModel
from diffusers import ModelMixin, ConfigMixin
from pathlib import Path
from typing import Callable, Literal, Union, Dict
from picklescan.scanner import scan_file_path
import invokeai.backend.util.logging as logger
from .models import BaseModelType, ModelType, ModelVariantType, SchedulerPredictionType, SilenceWarnings
from .models import (
BaseModelType, ModelType, ModelVariantType,
SchedulerPredictionType, SilenceWarnings,
)
from .models.base import read_checkpoint_meta
@dataclass
class ModelVariantInfo(object):
class ModelProbeInfo(object):
model_type: ModelType
base_type: BaseModelType
variant_type: ModelVariantType
prediction_type: SchedulerPredictionType
upcast_attention: bool
format: Literal['folder','checkpoint']
format: Literal['diffusers','checkpoint', 'lycoris']
image_size: int
class ProbeBase(object):
@ -31,19 +32,19 @@ class ProbeBase(object):
class ModelProbe(object):
PROBES = {
'folder': { },
'diffusers': { },
'checkpoint': { },
}
CLASS2TYPE = {
'StableDiffusionPipeline' : ModelType.Pipeline,
'StableDiffusionPipeline' : ModelType.Main,
'AutoencoderKL' : ModelType.Vae,
'ControlNetModel' : ModelType.ControlNet,
}
@classmethod
def register_probe(cls,
format: Literal['folder','file'],
format: Literal['diffusers','checkpoint'],
model_type: ModelType,
probe_class: ProbeBase):
cls.PROBES[format][model_type] = probe_class
@ -51,8 +52,8 @@ class ModelProbe(object):
@classmethod
def heuristic_probe(cls,
model: Union[Dict, ModelMixin, Path],
prediction_type_helper: Callable[[Path],BaseModelType]=None,
)->ModelVariantInfo:
prediction_type_helper: Callable[[Path],SchedulerPredictionType]=None,
)->ModelProbeInfo:
if isinstance(model,Path):
return cls.probe(model_path=model,prediction_type_helper=prediction_type_helper)
elif isinstance(model,(dict,ModelMixin,ConfigMixin)):
@ -64,7 +65,7 @@ class ModelProbe(object):
def probe(cls,
model_path: Path,
model: Union[Dict, ModelMixin] = None,
prediction_type_helper: Callable[[Path],BaseModelType] = None)->ModelVariantInfo:
prediction_type_helper: Callable[[Path],SchedulerPredictionType] = None)->ModelProbeInfo:
'''
Probe the model at model_path and return sufficient information about it
to place it somewhere in the models directory hierarchy. If the model is
@ -74,23 +75,24 @@ class ModelProbe(object):
between V2-Base and V2-768 SD models.
'''
if model_path:
format = 'folder' if model_path.is_dir() else 'checkpoint'
format_type = 'diffusers' if model_path.is_dir() else 'checkpoint'
else:
format = 'folder' if isinstance(model,(ConfigMixin,ModelMixin)) else 'checkpoint'
format_type = 'diffusers' if isinstance(model,(ConfigMixin,ModelMixin)) else 'checkpoint'
model_info = None
try:
model_type = cls.get_model_type_from_folder(model_path, model) \
if format == 'folder' \
if format_type == 'diffusers' \
else cls.get_model_type_from_checkpoint(model_path, model)
probe_class = cls.PROBES[format].get(model_type)
probe_class = cls.PROBES[format_type].get(model_type)
if not probe_class:
return None
probe = probe_class(model_path, model, prediction_type_helper)
base_type = probe.get_base_type()
variant_type = probe.get_variant_type()
prediction_type = probe.get_scheduler_prediction_type()
model_info = ModelVariantInfo(
format = probe.get_format()
model_info = ModelProbeInfo(
model_type = model_type,
base_type = base_type,
variant_type = variant_type,
@ -102,32 +104,40 @@ class ModelProbe(object):
and prediction_type==SchedulerPredictionType.VPrediction \
) else 512,
)
except Exception as e:
except Exception:
return None
return model_info
@classmethod
def get_model_type_from_checkpoint(cls, model_path: Path, checkpoint: dict)->ModelType:
if model_path.suffix not in ('.bin','.pt','.ckpt','.safetensors'):
def get_model_type_from_checkpoint(cls, model_path: Path, checkpoint: dict) -> ModelType:
if model_path.suffix not in ('.bin','.pt','.ckpt','.safetensors','.pth'):
return None
if model_path.name=='learned_embeds.bin':
if model_path.name == "learned_embeds.bin":
return ModelType.TextualInversion
checkpoint = checkpoint or cls._scan_and_load_checkpoint(model_path)
state_dict = checkpoint.get("state_dict") or checkpoint
if any([x.startswith("model.diffusion_model") for x in state_dict.keys()]):
return ModelType.Pipeline
if any([x.startswith("encoder.conv_in") for x in state_dict.keys()]):
return ModelType.Vae
if "string_to_token" in state_dict or "emb_params" in state_dict:
return ModelType.TextualInversion
if any([x.startswith("lora") for x in state_dict.keys()]):
return ModelType.Lora
if any([x.startswith("control_model") for x in state_dict.keys()]):
return ModelType.ControlNet
if any([x.startswith("input_blocks") for x in state_dict.keys()]):
return ModelType.ControlNet
return None # give up
ckpt = checkpoint if checkpoint else read_checkpoint_meta(model_path, scan=True)
ckpt = ckpt.get("state_dict", ckpt)
for key in ckpt.keys():
if any(key.startswith(v) for v in {"cond_stage_model.", "first_stage_model.", "model.diffusion_model."}):
return ModelType.Main
elif any(key.startswith(v) for v in {"encoder.conv_in", "decoder.conv_in"}):
return ModelType.Vae
elif any(key.startswith(v) for v in {"lora_te_", "lora_unet_"}):
return ModelType.Lora
elif any(key.startswith(v) for v in {"control_model", "input_blocks"}):
return ModelType.ControlNet
elif key in {"emb_params", "string_to_param"}:
return ModelType.TextualInversion
else:
# diffusers-ti
if len(ckpt) < 10 and all(isinstance(v, torch.Tensor) for v in ckpt.values()):
return ModelType.TextualInversion
raise ValueError("Unable to determine model type")
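To illustrate the prefix rules, two fabricated state dicts and the types they map to (a sketch, not a test from the repository):

import torch
from pathlib import Path

lora_sd = {"lora_unet_down_blocks_0.lora_down.weight": torch.zeros(1)}
vae_sd = {"encoder.conv_in.weight": torch.zeros(1)}
ModelProbe.get_model_type_from_checkpoint(Path("x.safetensors"), lora_sd)  # -> ModelType.Lora
ModelProbe.get_model_type_from_checkpoint(Path("x.safetensors"), vae_sd)   # -> ModelType.Vae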
@classmethod
def get_model_type_from_folder(cls, folder_path: Path, model: ModelMixin)->ModelType:
@ -192,11 +202,14 @@ class ProbeBase(object):
def get_scheduler_prediction_type(self)->SchedulerPredictionType:
pass
def get_format(self)->str:
pass
class CheckpointProbeBase(ProbeBase):
def __init__(self,
checkpoint_path: Path,
checkpoint: dict,
helper: Callable[[Path],BaseModelType] = None
helper: Callable[[Path],SchedulerPredictionType] = None
)->BaseModelType:
self.checkpoint = checkpoint or ModelProbe._scan_and_load_checkpoint(checkpoint_path)
self.checkpoint_path = checkpoint_path
@ -205,9 +218,12 @@ class CheckpointProbeBase(ProbeBase):
def get_base_type(self)->BaseModelType:
pass
def get_format(self)->str:
return 'checkpoint'
def get_variant_type(self)-> ModelVariantType:
model_type = ModelProbe.get_model_type_from_checkpoint(self.checkpoint_path,self.checkpoint)
if model_type != ModelType.Pipeline:
if model_type != ModelType.Main:
return ModelVariantType.Normal
state_dict = self.checkpoint.get('state_dict') or self.checkpoint
in_channels = state_dict[
@ -246,7 +262,8 @@ class PipelineCheckpointProbe(CheckpointProbeBase):
return SchedulerPredictionType.Epsilon
elif checkpoint["global_step"] == 110000:
return SchedulerPredictionType.VPrediction
if self.checkpoint_path and self.helper:
if self.checkpoint_path and self.helper \
and not self.checkpoint_path.with_suffix('.yaml').exists(): # if a .yaml config file exists, this step is not needed
return self.helper(self.checkpoint_path)
else:
return None
@ -257,6 +274,9 @@ class VaeCheckpointProbe(CheckpointProbeBase):
return BaseModelType.StableDiffusion1
class LoRACheckpointProbe(CheckpointProbeBase):
def get_format(self)->str:
return 'lycoris'
def get_base_type(self)->BaseModelType:
checkpoint = self.checkpoint
key1 = "lora_te_text_model_encoder_layers_0_mlp_fc1.lora_down.weight"
@ -276,6 +296,9 @@ class LoRACheckpointProbe(CheckpointProbeBase):
return None
class TextualInversionCheckpointProbe(CheckpointProbeBase):
def get_format(self)->str:
return None
def get_base_type(self)->BaseModelType:
checkpoint = self.checkpoint
if 'string_to_token' in checkpoint:
@ -322,17 +345,16 @@ class FolderProbeBase(ProbeBase):
def get_variant_type(self)->ModelVariantType:
return ModelVariantType.Normal
def get_format(self)->str:
return 'diffusers'
class PipelineFolderProbe(FolderProbeBase):
def get_base_type(self)->BaseModelType:
if self.model:
unet_conf = self.model.unet.config
scheduler_conf = self.model.scheduler.config
else:
with open(self.folder_path / 'unet' / 'config.json','r') as file:
unet_conf = json.load(file)
with open(self.folder_path / 'scheduler' / 'scheduler_config.json','r') as file:
scheduler_conf = json.load(file)
if unet_conf['cross_attention_dim'] == 768:
return BaseModelType.StableDiffusion1
elif unet_conf['cross_attention_dim'] == 1024:
@ -381,6 +403,9 @@ class VaeFolderProbe(FolderProbeBase):
return BaseModelType.StableDiffusion1
class TextualInversionFolderProbe(FolderProbeBase):
def get_format(self)->str:
return None
def get_base_type(self)->BaseModelType:
path = self.folder_path / 'learned_embeds.bin'
if not path.exists():
@ -401,16 +426,24 @@ class ControlNetFolderProbe(FolderProbeBase):
else BaseModelType.StableDiffusion2
class LoRAFolderProbe(FolderProbeBase):
# I've never seen one of these in the wild, so this is a noop
pass
def get_base_type(self)->BaseModelType:
model_file = None
for suffix in ['safetensors','bin']:
base_file = self.folder_path / f'pytorch_lora_weights.{suffix}'
if base_file.exists():
model_file = base_file
break
if not model_file:
raise Exception('Unknown LoRA format encountered')
return LoRACheckpointProbe(model_file,None).get_base_type()
############## register probe classes ######
ModelProbe.register_probe('folder', ModelType.Pipeline, PipelineFolderProbe)
ModelProbe.register_probe('folder', ModelType.Vae, VaeFolderProbe)
ModelProbe.register_probe('folder', ModelType.Lora, LoRAFolderProbe)
ModelProbe.register_probe('folder', ModelType.TextualInversion, TextualInversionFolderProbe)
ModelProbe.register_probe('folder', ModelType.ControlNet, ControlNetFolderProbe)
ModelProbe.register_probe('checkpoint', ModelType.Pipeline, PipelineCheckpointProbe)
ModelProbe.register_probe('diffusers', ModelType.Main, PipelineFolderProbe)
ModelProbe.register_probe('diffusers', ModelType.Vae, VaeFolderProbe)
ModelProbe.register_probe('diffusers', ModelType.Lora, LoRAFolderProbe)
ModelProbe.register_probe('diffusers', ModelType.TextualInversion, TextualInversionFolderProbe)
ModelProbe.register_probe('diffusers', ModelType.ControlNet, ControlNetFolderProbe)
ModelProbe.register_probe('checkpoint', ModelType.Main, PipelineCheckpointProbe)
ModelProbe.register_probe('checkpoint', ModelType.Vae, VaeCheckpointProbe)
ModelProbe.register_probe('checkpoint', ModelType.Lora, LoRACheckpointProbe)
ModelProbe.register_probe('checkpoint', ModelType.TextualInversion, TextualInversionCheckpointProbe)
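The registration table is open-ended; note that no checkpoint-format ControlNet probe is registered above, so a hypothetical extension might look like this:

class ControlNetCheckpointProbe(CheckpointProbeBase):  # hypothetical class
    def get_base_type(self) -> BaseModelType:
        # a real probe would inspect the checkpoint keys here
        return BaseModelType.StableDiffusion1

ModelProbe.register_probe("checkpoint", ModelType.ControlNet, ControlNetCheckpointProbe)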


@ -11,21 +11,21 @@ from .textual_inversion import TextualInversionModel
MODEL_CLASSES = {
BaseModelType.StableDiffusion1: {
ModelType.Pipeline: StableDiffusion1Model,
ModelType.Main: StableDiffusion1Model,
ModelType.Vae: VaeModel,
ModelType.Lora: LoRAModel,
ModelType.ControlNet: ControlNetModel,
ModelType.TextualInversion: TextualInversionModel,
},
BaseModelType.StableDiffusion2: {
ModelType.Pipeline: StableDiffusion2Model,
ModelType.Main: StableDiffusion2Model,
ModelType.Vae: VaeModel,
ModelType.Lora: LoRAModel,
ModelType.ControlNet: ControlNetModel,
ModelType.TextualInversion: TextualInversionModel,
},
#BaseModelType.Kandinsky2_1: {
# ModelType.Pipeline: Kandinsky2_1Model,
# ModelType.Main: Kandinsky2_1Model,
# ModelType.MoVQ: MoVQModel,
# ModelType.Lora: LoRAModel,
# ModelType.ControlNet: ControlNetModel,
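The nested mapping is consulted throughout the manager as MODEL_CLASSES[base][type]; for example:

# resolves to StableDiffusion1Model per the table above
model_class = MODEL_CLASSES[BaseModelType.StableDiffusion1][ModelType.Main]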


@ -1,9 +1,12 @@
import json
import os
import sys
import typing
import inspect
from enum import Enum
from abc import ABCMeta, abstractmethod
from pathlib import Path
from picklescan.scanner import scan_file_path
import torch
import safetensors.torch
from diffusers import DiffusionPipeline, ConfigMixin
@ -18,7 +21,7 @@ class BaseModelType(str, Enum):
#Kandinsky2_1 = "kandinsky-2.1"
class ModelType(str, Enum):
Pipeline = "pipeline"
Main = "main"
Vae = "vae"
Lora = "lora"
ControlNet = "controlnet" # used by model_probe
@ -56,7 +59,6 @@ class ModelConfigBase(BaseModel):
class Config:
use_enum_values = True
class EmptyConfigLoader(ConfigMixin):
@classmethod
def load_config(cls, *args, **kwargs):
@ -124,7 +126,10 @@ class ModelBase(metaclass=ABCMeta):
if not isinstance(value, type) or not issubclass(value, ModelConfigBase):
continue
fields = inspect.get_annotations(value)
if hasattr(inspect,'get_annotations'):
fields = inspect.get_annotations(value)
else:
fields = value.__annotations__
try:
field = fields["model_format"]
except:
@ -383,15 +388,18 @@ def _fast_safetensors_reader(path: str):
return checkpoint
def read_checkpoint_meta(path: str):
if path.endswith(".safetensors"):
def read_checkpoint_meta(path: Union[str, Path], scan: bool = False):
if str(path).endswith(".safetensors"):
try:
checkpoint = _fast_safetensors_reader(path)
except:
# TODO: create issue for support "meta"?
checkpoint = safetensors.torch.load_file(path, device="cpu")
else:
if scan:
scan_result = scan_file_path(path)
if scan_result.infected_files != 0:
raise Exception(f"The model file \"{path}\" is potentially infected by malware. Aborting import.")
checkpoint = torch.load(path, map_location=torch.device("meta"))
return checkpoint
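A usage sketch (the filename is illustrative): safetensors files take the fast header-only path, while pickled checkpoints are optionally virus-scanned and then loaded onto the meta device, so no tensor data is materialized:

meta = read_checkpoint_meta("v1-5-pruned.ckpt", scan=True)  # picklescan, then torch.load onto "meta"
state_dict = meta.get("state_dict", meta)
print(len(state_dict))  # keys and shapes are available without loading weights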


@ -34,17 +34,17 @@ class StableDiffusion1Model(DiffusersModel):
class CheckpointConfig(ModelConfigBase):
model_format: Literal[StableDiffusion1ModelFormat.Checkpoint]
vae: Optional[str] = Field(None)
config: Optional[str] = Field(None)
config: str
variant: ModelVariantType
def __init__(self, model_path: str, base_model: BaseModelType, model_type: ModelType):
assert base_model == BaseModelType.StableDiffusion1
assert model_type == ModelType.Pipeline
assert model_type == ModelType.Main
super().__init__(
model_path=model_path,
base_model=BaseModelType.StableDiffusion1,
model_type=ModelType.Pipeline,
model_type=ModelType.Main,
)
@classmethod
@ -69,7 +69,7 @@ class StableDiffusion1Model(DiffusersModel):
in_channels = unet_config['in_channels']
else:
raise Exception("Not supported stable diffusion diffusers format(possibly onnx?)")
raise NotImplementedError(f"{path} is not a supported stable diffusion diffusers format")
else:
raise NotImplementedError(f"Unknown stable diffusion 1.* format: {model_format}")
@ -81,6 +81,8 @@ class StableDiffusion1Model(DiffusersModel):
else:
raise Exception("Unkown stable diffusion 1.* model format")
if ckpt_config_path is None:
ckpt_config_path = _select_ckpt_config(BaseModelType.StableDiffusion1, variant)
return cls.create_config(
path=path,
@ -109,14 +111,12 @@ class StableDiffusion1Model(DiffusersModel):
config: ModelConfigBase,
base_model: BaseModelType,
) -> str:
assert model_path == config.path
if isinstance(config, cls.CheckpointConfig):
return _convert_ckpt_and_cache(
version=BaseModelType.StableDiffusion1,
model_config=config,
output_path=output_path,
) # TODO: args
)
else:
return model_path
@ -131,25 +131,20 @@ class StableDiffusion2Model(DiffusersModel):
model_format: Literal[StableDiffusion2ModelFormat.Diffusers]
vae: Optional[str] = Field(None)
variant: ModelVariantType
prediction_type: SchedulerPredictionType
upcast_attention: bool
class CheckpointConfig(ModelConfigBase):
model_format: Literal[StableDiffusion2ModelFormat.Checkpoint]
vae: Optional[str] = Field(None)
config: Optional[str] = Field(None)
config: str
variant: ModelVariantType
prediction_type: SchedulerPredictionType
upcast_attention: bool
def __init__(self, model_path: str, base_model: BaseModelType, model_type: ModelType):
assert base_model == BaseModelType.StableDiffusion2
assert model_type == ModelType.Pipeline
assert model_type == ModelType.Main
super().__init__(
model_path=model_path,
base_model=BaseModelType.StableDiffusion2,
model_type=ModelType.Pipeline,
model_type=ModelType.Main,
)
@classmethod
@ -188,13 +183,8 @@ class StableDiffusion2Model(DiffusersModel):
else:
raise Exception("Unkown stable diffusion 2.* model format")
if variant == ModelVariantType.Normal:
prediction_type = SchedulerPredictionType.VPrediction
upcast_attention = True
else:
prediction_type = SchedulerPredictionType.Epsilon
upcast_attention = False
if ckpt_config_path is None:
ckpt_config_path = _select_ckpt_config(BaseModelType.StableDiffusion2, variant)
return cls.create_config(
path=path,
@ -202,8 +192,6 @@ class StableDiffusion2Model(DiffusersModel):
config=ckpt_config_path,
variant=variant,
prediction_type=prediction_type,
upcast_attention=upcast_attention,
)
@classproperty
@ -225,14 +213,12 @@ class StableDiffusion2Model(DiffusersModel):
config: ModelConfigBase,
base_model: BaseModelType,
) -> str:
assert model_path == config.path
if isinstance(config, cls.CheckpointConfig):
return _convert_ckpt_and_cache(
version=BaseModelType.StableDiffusion2,
model_config=config,
output_path=output_path,
) # TODO: args
)
else:
return model_path
@ -243,18 +229,18 @@ def _select_ckpt_config(version: BaseModelType, variant: ModelVariantType):
ModelVariantType.Inpaint: "v1-inpainting-inference.yaml",
},
BaseModelType.StableDiffusion2: {
# the code further down will manually set upcast_attention and v_prediction
ModelVariantType.Normal: "v2-inference.yaml",
ModelVariantType.Normal: "v2-inference-v.yaml", # best guess, as we cannot distinguish this from the base (512) model
ModelVariantType.Inpaint: "v2-inpainting-inference.yaml",
ModelVariantType.Depth: "v2-midas-inference.yaml",
}
}
app_config = InvokeAIAppConfig.get_config()
try:
# TODO: path
#model_config.config = app_config.config_dir / "stable-diffusion" / ckpt_configs[version][model_config.variant]
#return InvokeAIAppConfig.get_config().legacy_conf_dir / ckpt_configs[version][variant]
return InvokeAIAppConfig.get_config().root_dir / "configs" / "stable-diffusion" / ckpt_configs[version][variant]
config_path = app_config.legacy_conf_path / ckpt_configs[version][variant]
if config_path.is_relative_to(app_config.root_path):
config_path = config_path.relative_to(app_config.root_path)
return str(config_path)
except:
return None
@ -273,36 +259,14 @@ def _convert_ckpt_and_cache(
"""
app_config = InvokeAIAppConfig.get_config()
if model_config.config is None:
model_config.config = _select_ckpt_config(version, model_config.variant)
if model_config.config is None:
raise Exception(f"Model variant {model_config.variant} not supported for {version}")
weights = app_config.root_dir / model_config.path
config_file = app_config.root_dir / model_config.config
weights = app_config.root_path / model_config.path
config_file = app_config.root_path / model_config.config
output_path = Path(output_path)
if version == BaseModelType.StableDiffusion1:
upcast_attention = False
prediction_type = SchedulerPredictionType.Epsilon
elif version == BaseModelType.StableDiffusion2:
upcast_attention = model_config.upcast_attention
prediction_type = model_config.prediction_type
else:
raise Exception(f"Unknown model provided: {version}")
# return cached version if it exists
if output_path.exists():
return output_path
# TODO: it may be more correct to convert with the embedded VAE;
# otherwise, if the user deletes the custom VAE, they are left with neither the embedded nor the custom VAE
#vae_ckpt_path, vae_model = self._get_vae_for_conversion(weights, mconfig)
# to avoid circular import errors
from ..convert_ckpt_to_diffusers import convert_ckpt_to_diffusers
with SilenceWarnings():
@ -313,9 +277,6 @@ def _convert_ckpt_and_cache(
model_variant=model_config.variant,
original_config_file=config_file,
extract_ema=True,
upcast_attention=upcast_attention,
prediction_type=prediction_type,
scan_needed=True,
model_root=app_config.models_path,
)
return output_path
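The net effect is a conversion cache keyed off the source path; a sketch of the lookup, mirroring the md5-of-path scheme shown earlier for _get_model_cache_path (the helper name here is illustrative):

import hashlib
from pathlib import Path

def conversion_cache_path(models_dir: Path, model_path: str) -> Path:
    # one cache directory per source checkpoint, named by the md5 of its path
    return models_dir / ".cache" / hashlib.md5(model_path.encode()).hexdigest()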


@ -137,7 +137,6 @@ def _convert_vae_ckpt_and_cache(
from .stable_diffusion import _select_ckpt_config
# all sd models use same vae settings
config_file = _select_ckpt_config(base_model, ModelVariantType.Normal)
else:
raise Exception(f"Vae conversion not supported for model type: {base_model}")
@ -152,13 +151,12 @@ def _convert_vae_ckpt_and_cache(
if "state_dict" in checkpoint:
checkpoint = checkpoint["state_dict"]
config = OmegaConf.load(config_file)
config = OmegaConf.load(app_config.root_path/config_file)
vae_model = convert_ldm_vae_to_diffusers(
checkpoint = checkpoint,
vae_config = config,
image_size = image_size,
model_root = app_config.models_path,
)
vae_model.save_pretrained(
output_path,


@ -215,10 +215,12 @@ class GeneratorToCallbackinator(Generic[ParamType, ReturnType, CallbackType]):
@dataclass
class ControlNetData:
model: ControlNetModel = Field(default=None)
image_tensor: torch.Tensor= Field(default=None)
weight: Union[float, List[float]]= Field(default=1.0)
image_tensor: torch.Tensor = Field(default=None)
weight: Union[float, List[float]] = Field(default=1.0)
begin_step_percent: float = Field(default=0.0)
end_step_percent: float = Field(default=1.0)
control_mode: str = Field(default="balanced")
@dataclass(frozen=True)
class ConditioningData:
@ -599,48 +601,68 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
# TODO: should this scaling happen here or inside self._unet_forward?
# i.e. before or after passing it to InvokeAIDiffuserComponent
latent_model_input = self.scheduler.scale_model_input(latents, timestep)
unet_latent_input = self.scheduler.scale_model_input(latents, timestep)
# default is no controlnet, so set controlnet processing output to None
down_block_res_samples, mid_block_res_sample = None, None
if control_data is not None:
# FIXME: make sure guidance_scale < 1.0 is handled correctly if doing per-step guidance setting
# if conditioning_data.guidance_scale > 1.0:
if conditioning_data.guidance_scale is not None:
# expand the latents input to control model if doing classifier free guidance
# (which I think is always true for now; there is a conditional elsewhere that stops
# execution if classifier_free_guidance is <= 1.0)
latent_control_input = torch.cat([latent_model_input] * 2)
else:
latent_control_input = latent_model_input
# control_data should be type List[ControlNetData]
# this loop covers both ControlNet (one ControlNetData in list)
# and MultiControlNet (multiple ControlNetData in list)
for i, control_datum in enumerate(control_data):
# print("controlnet", i, "==>", type(control_datum))
control_mode = control_datum.control_mode
# soft_injection and cfg_injection are the two ControlNet control_mode booleans
# that are combined at a higher level to make the control_mode enum
# soft_injection determines whether to do per-layer re-weighting adjustment (if True)
# or default weighting (if False)
soft_injection = (control_mode == "more_prompt" or control_mode == "more_control")
# cfg_injection determines whether to apply ControlNet only to the conditional (if True)
# or to both the conditional and unconditional (if False, the default)
cfg_injection = (control_mode == "more_control" or control_mode == "unbalanced")
first_control_step = math.floor(control_datum.begin_step_percent * total_step_count)
last_control_step = math.ceil(control_datum.end_step_percent * total_step_count)
# only apply controlnet if current step is within the controlnet's begin/end step range
if step_index >= first_control_step and step_index <= last_control_step:
# print("running controlnet", i, "for step", step_index)
if cfg_injection:
control_latent_input = unet_latent_input
else:
# expand the latents input to control model if doing classifier free guidance
# (which I think is always true for now; there is a conditional elsewhere that stops
# execution if classifier_free_guidance is <= 1.0)
control_latent_input = torch.cat([unet_latent_input] * 2)
if cfg_injection: # only apply ControlNet to the conditional batch, not the unconditional
encoder_hidden_states = torch.cat([conditioning_data.unconditioned_embeddings])
else:
encoder_hidden_states = torch.cat([conditioning_data.unconditioned_embeddings,
conditioning_data.text_embeddings])
if isinstance(control_datum.weight, list):
# if controlnet has multiple weights, use the weight for the current step
controlnet_weight = control_datum.weight[step_index]
else:
# if controlnet has a single weight, use it for all steps
controlnet_weight = control_datum.weight
# controlnet(s) inference
down_samples, mid_sample = control_datum.model(
sample=latent_control_input,
sample=control_latent_input,
timestep=timestep,
encoder_hidden_states=torch.cat([conditioning_data.unconditioned_embeddings,
conditioning_data.text_embeddings]),
encoder_hidden_states=encoder_hidden_states,
controlnet_cond=control_datum.image_tensor,
conditioning_scale=controlnet_weight,
# cross_attention_kwargs,
guess_mode=False,
conditioning_scale=controlnet_weight, # controlnet specific, NOT the guidance scale
guess_mode=soft_injection, # this is still called guess_mode in diffusers ControlNetModel
return_dict=False,
)
if cfg_injection:
# Inferred ControlNet only for the conditional batch.
# To apply the output of ControlNet to both the unconditional and conditional batches,
# add 0 to the unconditional batch to keep it unchanged.
down_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_samples]
mid_sample = torch.cat([torch.zeros_like(mid_sample), mid_sample])
if down_block_res_samples is None and mid_block_res_sample is None:
down_block_res_samples, mid_block_res_sample = down_samples, mid_sample
else:
@ -653,11 +675,11 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
# predict the noise residual
noise_pred = self.invokeai_diffuser.do_diffusion_step(
latent_model_input,
t,
conditioning_data.unconditioned_embeddings,
conditioning_data.text_embeddings,
conditioning_data.guidance_scale,
x=unet_latent_input,
sigma=t,
unconditioning=conditioning_data.unconditioned_embeddings,
conditioning=conditioning_data.text_embeddings,
unconditional_guidance_scale=conditioning_data.guidance_scale,
step_index=step_index,
total_step_count=total_step_count,
down_block_additional_residuals=down_block_res_samples, # from controlnet(s)
@ -962,6 +984,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
device="cuda",
dtype=torch.float16,
do_classifier_free_guidance=True,
control_mode="balanced"
):
if not isinstance(image, torch.Tensor):
@ -992,6 +1015,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
repeat_by = num_images_per_prompt
image = image.repeat_interleave(repeat_by, dim=0)
image = image.to(device=device, dtype=dtype)
if do_classifier_free_guidance:
cfg_injection = (control_mode == "more_control" or control_mode == "unbalanced")
if do_classifier_free_guidance and not cfg_injection:
image = torch.cat([image] * 2)
return image
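The two booleans derived from control_mode drive both hunks above; as a standalone summary (a sketch mirroring the in-line logic):

def control_mode_flags(control_mode: str) -> tuple[bool, bool]:
    # soft_injection: per-layer re-weighting (still called guess_mode in diffusers)
    soft_injection = control_mode in ("more_prompt", "more_control")
    # cfg_injection: apply ControlNet only to the conditional half of the CFG batch
    cfg_injection = control_mode in ("more_control", "unbalanced")
    return soft_injection, cfg_injection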


@ -16,6 +16,7 @@ from .util import (
download_with_resume,
instantiate_from_config,
url_attachment_name,
Chdir
)


@ -381,3 +381,18 @@ def image_to_dataURL(image: Image.Image, image_format: str = "PNG") -> str:
buffered.getvalue()
).decode("UTF-8")
return image_base64
class Chdir(object):
'''Context manager that changes into the desired directory and changes back after the context exits.
Args:
path (Path): The directory to change into
'''
def __init__(self, path: Path):
self.path = path
self.original = Path().absolute()
def __enter__(self):
os.chdir(self.path)
def __exit__(self,*args):
os.chdir(self.original)
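A brief usage sketch:

import os
from pathlib import Path

with Chdir(Path("/tmp")):
    print(os.getcwd())  # inside /tmp (or its resolved equivalent)
print(os.getcwd())      # original working directory restored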


@ -1,107 +1,92 @@
# This file predefines a few models that the user may want to install.
diffusers:
stable-diffusion-1.5:
description: Stable Diffusion version 1.5 diffusers model (4.27 GB)
repo_id: runwayml/stable-diffusion-v1-5
format: diffusers
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: True
default: True
sd-inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
format: diffusers
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: True
stable-diffusion-2.1:
description: Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-1
format: diffusers
recommended: True
sd-inpainting-2.0:
description: Stable Diffusion version 2.0 inpainting model (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-inpainting
format: diffusers
recommended: False
analog-diffusion-1.0:
description: An SD-1.5 model trained on diverse analog photographs (2.13 GB)
repo_id: wavymulder/Analog-Diffusion
format: diffusers
recommended: false
deliberate-1.0:
description: Versatile model that produces detailed images up to 768px (4.27 GB)
format: diffusers
repo_id: XpucT/Deliberate
recommended: False
d&d-diffusion-1.0:
description: Dungeons & Dragons characters (2.13 GB)
format: diffusers
repo_id: 0xJustin/Dungeons-and-Diffusion
recommended: False
dreamlike-photoreal-2.0:
description: A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)
format: diffusers
repo_id: dreamlike-art/dreamlike-photoreal-2.0
recommended: False
inkpunk-1.0:
description: Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)
format: diffusers
repo_id: Envvi/Inkpunk-Diffusion
recommended: False
openjourney-4.0:
description: An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)
format: diffusers
repo_id: prompthero/openjourney
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: False
portrait-plus-1.0:
description: An SD-1.5 model trained on close range portraits of people; prompt with "portrait+" (2.13 GB)
format: diffusers
repo_id: wavymulder/portraitplus
recommended: False
seek-art-mega-1.0:
description: A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)
repo_id: coreco/seek.art_MEGA
format: diffusers
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: False
trinart-2.0:
description: An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)
repo_id: naclbit/trinart_stable_diffusion_v2
format: diffusers
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: False
waifu-diffusion-1.4:
description: An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)
repo_id: hakurei/waifu-diffusion
format: diffusers
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: False
controlnet:
canny: lllyasviel/control_v11p_sd15_canny
inpaint: lllyasviel/control_v11p_sd15_inpaint
mlsd: lllyasviel/control_v11p_sd15_mlsd
depth: lllyasviel/control_v11f1p_sd15_depth
normal_bae: lllyasviel/control_v11p_sd15_normalbae
seg: lllyasviel/control_v11p_sd15_seg
lineart: lllyasviel/control_v11p_sd15_lineart
lineart_anime: lllyasviel/control_v11p_sd15s2_lineart_anime
scribble: lllyasviel/control_v11p_sd15_scribble
softedge: lllyasviel/control_v11p_sd15_softedge
shuffle: lllyasviel/control_v11e_sd15_shuffle
tile: lllyasviel/control_v11f1e_sd15_tile
ip2p: lllyasviel/control_v11e_sd15_ip2p
textual_inversion:
'EasyNegative': https://huggingface.co/embed/EasyNegative/resolve/main/EasyNegative.safetensors
'ahx-beta-453407d': sd-concepts-library/ahx-beta-453407d
lora:
'LowRA': https://civitai.com/api/download/models/63006
'Ink scenery': https://civitai.com/api/download/models/83390
'sd-model-finetuned-lora-t4': sayakpaul/sd-model-finetuned-lora-t4
sd-1/main/stable-diffusion-v1-5:
description: Stable Diffusion version 1.5 diffusers model (4.27 GB)
repo_id: runwayml/stable-diffusion-v1-5
recommended: True
default: True
sd-1/main/stable-diffusion-inpainting:
description: RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
recommended: True
sd-2/main/stable-diffusion-2-1:
description: Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-1
recommended: True
sd-2/main/stable-diffusion-2-inpainting:
description: Stable Diffusion version 2.0 inpainting model (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-inpainting
recommended: False
sd-1/main/Analog-Diffusion:
description: An SD-1.5 model trained on diverse analog photographs (2.13 GB)
repo_id: wavymulder/Analog-Diffusion
recommended: False
sd-1/main/Deliberate:
description: Versatile model that produces detailed images up to 768px (4.27 GB)
repo_id: XpucT/Deliberate
recommended: False
sd-1/main/Dungeons-and-Diffusion:
description: Dungeons & Dragons characters (2.13 GB)
repo_id: 0xJustin/Dungeons-and-Diffusion
recommended: False
sd-1/main/dreamlike-photoreal-2:
description: A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)
repo_id: dreamlike-art/dreamlike-photoreal-2.0
recommended: False
sd-1/main/Inkpunk-Diffusion:
description: Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)
repo_id: Envvi/Inkpunk-Diffusion
recommended: False
sd-1/main/openjourney:
description: An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)
repo_id: prompthero/openjourney
recommended: False
sd-1/main/portraitplus:
description: An SD-1.5 model trained on close range portraits of people; prompt with "portrait+" (2.13 GB)
repo_id: wavymulder/portraitplus
recommended: False
sd-1/main/seek.art_MEGA:
repo_id: coreco/seek.art_MEGA
description: A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)
recommended: False
sd-1/main/trinart_stable_diffusion_v2:
description: An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)
repo_id: naclbit/trinart_stable_diffusion_v2
recommended: False
sd-1/main/waifu-diffusion:
description: An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)
repo_id: hakurei/waifu-diffusion
recommended: False
sd-1/controlnet/canny:
repo_id: lllyasviel/control_v11p_sd15_canny
sd-1/controlnet/inpaint:
repo_id: lllyasviel/control_v11p_sd15_inpaint
sd-1/controlnet/mlsd:
repo_id: lllyasviel/control_v11p_sd15_mlsd
sd-1/controlnet/depth:
repo_id: lllyasviel/control_v11f1p_sd15_depth
sd-1/controlnet/normal_bae:
repo_id: lllyasviel/control_v11p_sd15_normalbae
sd-1/controlnet/seg:
repo_id: lllyasviel/control_v11p_sd15_seg
sd-1/controlnet/lineart:
repo_id: lllyasviel/control_v11p_sd15_lineart
sd-1/controlnet/lineart_anime:
repo_id: lllyasviel/control_v11p_sd15s2_lineart_anime
sd-1/controlnet/scribble:
repo_id: lllyasviel/control_v11p_sd15_scribble
sd-1/controlnet/softedge:
repo_id: lllyasviel/control_v11p_sd15_softedge
sd-1/controlnet/shuffle:
repo_id: lllyasviel/control_v11e_sd15_shuffle
sd-1/controlnet/tile:
repo_id: lllyasviel/control_v11f1e_sd15_tile
sd-1/controlnet/ip2p:
repo_id: lllyasviel/control_v11e_sd15_ip2p
sd-1/embedding/EasyNegative:
path: https://huggingface.co/embed/EasyNegative/resolve/main/EasyNegative.safetensors
sd-1/embedding/ahx-beta-453407d:
repo_id: sd-concepts-library/ahx-beta-453407d
sd-1/lora/LowRA:
path: https://civitai.com/api/download/models/63006
sd-1/lora/Ink scenery:
path: https://civitai.com/api/download/models/83390
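Every entry in the restructured file is keyed as {base}/{type}/{name}; a sketch of splitting such a key (illustrative, not the loader's actual code):

base, model_type, name = "sd-1/controlnet/canny".split("/", 2)
# base == "sd-1", model_type == "controlnet", name == "canny"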


@ -0,0 +1,159 @@
model:
base_learning_rate: 5.0e-05
target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
parameterization: "v"
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
image_size: 64
channels: 4
cond_stage_trainable: false
conditioning_key: hybrid
scale_factor: 0.18215
monitor: val/loss_simple_ema
finetune_keys: null
use_ema: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
use_checkpoint: True
image_size: 32 # unused
in_channels: 9
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_head_channels: 64 # need to fix for flash-attn
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: 1
context_dim: 1024
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
#attn_type: "vanilla-xformers"
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: [ ]
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder
params:
freeze: True
layer: "penultimate"
data:
target: ldm.data.laion.WebDataModuleFromConfig
params:
tar_base: null # for concat as in LAION-A
p_unsafe_threshold: 0.1
filter_word_list: "data/filters.yaml"
max_pwatermark: 0.45
batch_size: 8
num_workers: 6
multinode: True
min_size: 512
train:
shards:
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-0/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-1/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-2/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-3/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-4/{00000..18699}.tar -" #{00000-94333}.tar"
shuffle: 10000
image_key: jpg
image_transforms:
- target: torchvision.transforms.Resize
params:
size: 512
interpolation: 3
- target: torchvision.transforms.RandomCrop
params:
size: 512
postprocess:
target: ldm.data.laion.AddMask
params:
mode: "512train-large"
p_drop: 0.25
# NOTE use enough shards to avoid empty validation loops in workers
validation:
shards:
- "pipe:aws s3 cp s3://deep-floyd-s3/datasets/laion_cleaned-part5/{93001..94333}.tar - "
shuffle: 0
image_key: jpg
image_transforms:
- target: torchvision.transforms.Resize
params:
size: 512
interpolation: 3
- target: torchvision.transforms.CenterCrop
params:
size: 512
postprocess:
target: ldm.data.laion.AddMask
params:
mode: "512train-large"
p_drop: 0.25
lightning:
find_unused_parameters: True
modelcheckpoint:
params:
every_n_train_steps: 5000
callbacks:
metrics_over_trainsteps_checkpoint:
params:
every_n_train_steps: 10000
image_logger:
target: main.ImageLogger
params:
enable_autocast: False
disabled: False
batch_frequency: 1000
max_images: 4
increase_log_steps: False
log_first_step: False
log_images_kwargs:
use_ema_scope: False
inpaint: False
plot_progressive_rows: False
plot_diffusion_rows: False
N: 4
unconditional_guidance_scale: 5.0
unconditional_guidance_label: [""]
ddim_steps: 50 # todo check these out for depth2img,
ddim_eta: 0.0 # todo check these out for depth2img,
trainer:
benchmark: True
val_check_interval: 5000000
num_sanity_val_steps: 0
accumulate_grad_batches: 1


@ -0,0 +1,158 @@
model:
base_learning_rate: 5.0e-05
target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
image_size: 64
channels: 4
cond_stage_trainable: false
conditioning_key: hybrid
scale_factor: 0.18215
monitor: val/loss_simple_ema
finetune_keys: null
use_ema: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
use_checkpoint: True
image_size: 32 # unused
in_channels: 9
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_head_channels: 64 # need to fix for flash-attn
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: 1
context_dim: 1024
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
#attn_type: "vanilla-xformers"
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: [ ]
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder
params:
freeze: True
layer: "penultimate"
data:
target: ldm.data.laion.WebDataModuleFromConfig
params:
tar_base: null # for concat as in LAION-A
p_unsafe_threshold: 0.1
filter_word_list: "data/filters.yaml"
max_pwatermark: 0.45
batch_size: 8
num_workers: 6
multinode: True
min_size: 512
train:
shards:
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-0/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-1/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-2/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-3/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-4/{00000..18699}.tar -" #{00000-94333}.tar"
shuffle: 10000
image_key: jpg
image_transforms:
- target: torchvision.transforms.Resize
params:
size: 512
interpolation: 3
- target: torchvision.transforms.RandomCrop
params:
size: 512
postprocess:
target: ldm.data.laion.AddMask
params:
mode: "512train-large"
p_drop: 0.25
# NOTE use enough shards to avoid empty validation loops in workers
validation:
shards:
- "pipe:aws s3 cp s3://deep-floyd-s3/datasets/laion_cleaned-part5/{93001..94333}.tar - "
shuffle: 0
image_key: jpg
image_transforms:
- target: torchvision.transforms.Resize
params:
size: 512
interpolation: 3
- target: torchvision.transforms.CenterCrop
params:
size: 512
postprocess:
target: ldm.data.laion.AddMask
params:
mode: "512train-large"
p_drop: 0.25
lightning:
find_unused_parameters: True
modelcheckpoint:
params:
every_n_train_steps: 5000
callbacks:
metrics_over_trainsteps_checkpoint:
params:
every_n_train_steps: 10000
image_logger:
target: main.ImageLogger
params:
enable_autocast: False
disabled: False
batch_frequency: 1000
max_images: 4
increase_log_steps: False
log_first_step: False
log_images_kwargs:
use_ema_scope: False
inpaint: False
plot_progressive_rows: False
plot_diffusion_rows: False
N: 4
unconditional_guidance_scale: 5.0
unconditional_guidance_label: [""]
ddim_steps: 50 # todo check these out for depth2img,
ddim_eta: 0.0 # todo check these out for depth2img,
trainer:
benchmark: True
val_check_interval: 5000000
num_sanity_val_steps: 0
accumulate_grad_batches: 1


@ -108,11 +108,11 @@ def main():
print(f':crossed_fingers: Upgrading to [yellow]{tag if tag else release}[/yellow]')
if release:
cmd = f"pip install 'invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip' --use-pep517 --upgrade"
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade'
elif tag:
cmd = f"pip install 'invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip' --use-pep517 --upgrade"
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip" --use-pep517 --upgrade'
else:
cmd = f"pip install 'invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip' --use-pep517 --upgrade"
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip" --use-pep517 --upgrade'
print('')
print('')
if os.system(cmd)==0:
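The switch from single to double quotes matters because cmd.exe on Windows does not treat single quotes as quoting characters, so pip would have received them verbatim; a sketch of the shape of the resulting command (URL and extras are illustrative):

# double quotes are stripped by the Windows shell as intended;
# single quotes would have been passed through to pip literally
cmd = 'pip install "invokeai[xformers] @ https://example.com/invokeai/v3.0.0.zip" --use-pep517 --upgrade'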


@ -11,7 +11,6 @@ The work is actually done in backend code in model_install_backend.py.
import argparse
import curses
import os
import sys
import textwrap
import traceback
@ -20,28 +19,22 @@ from multiprocessing import Process
from multiprocessing.connection import Connection, Pipe
from pathlib import Path
from shutil import get_terminal_size
from typing import List
import logging
import npyscreen
import torch
from npyscreen import widget
from omegaconf import OmegaConf
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.backend.install.model_install_backend import (
Dataset_path,
default_config_file,
default_dataset,
install_requested_models,
recommended_datasets,
ModelInstallList,
UserSelections,
InstallSelections,
ModelInstall,
SchedulerPredictionType,
)
from invokeai.backend import ModelManager
from invokeai.backend.model_management import ModelManager, ModelType
from invokeai.backend.util import choose_precision, choose_torch_device
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.frontend.install.widgets import (
CenteredTitleText,
MultiSelectColumns,
@ -58,6 +51,7 @@ from invokeai.frontend.install.widgets import (
from invokeai.app.services.config import InvokeAIAppConfig
config = InvokeAIAppConfig.get_config()
logger = InvokeAILogger.getLogger()
# build a table mapping all non-printable characters to None
# for stripping control characters
@ -71,8 +65,8 @@ def make_printable(s:str)->str:
return s.translate(NOPRINT_TRANS_TABLE)
class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
# for responsive resizing - disabled
# FIX_MINIMUM_SIZE_WHEN_CREATED = False
# for responsive resizing set to False, but this seems to cause a crash!
FIX_MINIMUM_SIZE_WHEN_CREATED = True
# for persistence
current_tab = 0
@ -90,25 +84,10 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
if not config.model_conf_path.exists():
with open(config.model_conf_path,'w') as file:
print('# InvokeAI model configuration file',file=file)
model_manager = ModelManager(config.model_conf_path)
self.starter_models = OmegaConf.load(Dataset_path)['diffusers']
self.installed_diffusers_models = self.list_additional_diffusers_models(
model_manager,
self.starter_models,
)
self.installed_cn_models = model_manager.list_controlnet_models()
self.installed_lora_models = model_manager.list_lora_models()
self.installed_ti_models = model_manager.list_ti_models()
try:
self.existing_models = OmegaConf.load(default_config_file())
except:
self.existing_models = dict()
self.starter_model_list = list(self.starter_models.keys())
self.installed_models = dict()
self.installer = ModelInstall(config)
self.all_models = self.installer.all_models()
self.starter_models = self.installer.starter_models()
self.model_labels = self._get_model_labels()
window_width, window_height = get_terminal_size()
self.nextrely -= 1
@ -141,39 +120,37 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
scroll_exit = True,
)
self.tabs.on_changed = self._toggle_tables
top_of_table = self.nextrely
self.starter_diffusers_models = self.add_starter_diffusers()
self.starter_pipelines = self.add_starter_pipelines()
bottom_of_table = self.nextrely
self.nextrely = top_of_table
self.diffusers_models = self.add_diffusers_widgets(
predefined_models=self.installed_diffusers_models,
model_type='Diffusers',
self.pipeline_models = self.add_pipeline_widgets(
model_type=ModelType.Main,
window_width=window_width,
exclude = self.starter_models
)
# self.pipeline_models['autoload_pending'] = True
bottom_of_table = max(bottom_of_table,self.nextrely)
self.nextrely = top_of_table
self.controlnet_models = self.add_model_widgets(
predefined_models=self.installed_cn_models,
model_type='ControlNet',
model_type=ModelType.ControlNet,
window_width=window_width,
)
bottom_of_table = max(bottom_of_table,self.nextrely)
self.nextrely = top_of_table
self.lora_models = self.add_model_widgets(
predefined_models=self.installed_lora_models,
model_type="LoRA/LyCORIS",
model_type=ModelType.Lora,
window_width=window_width,
)
bottom_of_table = max(bottom_of_table,self.nextrely)
self.nextrely = top_of_table
self.ti_models = self.add_model_widgets(
predefined_models=self.installed_ti_models,
model_type="Textual Inversion Embeddings",
model_type=ModelType.TextualInversion,
window_width=window_width,
)
bottom_of_table = max(bottom_of_table,self.nextrely)
@ -184,7 +161,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
BufferBox,
name='Log Messages',
editable=False,
max_height = 16,
max_height = 10,
)
self.nextrely += 1
@ -197,13 +174,14 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
rely=-3,
when_pressed_function=self.on_back,
)
self.ok_button = self.add_widget_intelligent(
npyscreen.ButtonPress,
name=done_label,
relx=(window_width - len(done_label)) // 2,
rely=-3,
when_pressed_function=self.on_execute
)
else:
self.ok_button = self.add_widget_intelligent(
npyscreen.ButtonPress,
name=done_label,
relx=(window_width - len(done_label)) // 2,
rely=-3,
when_pressed_function=self.on_execute
)
label = "APPLY CHANGES & EXIT"
self.done = self.add_widget_intelligent(
@ -220,18 +198,15 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
self._toggle_tables([self.current_tab])
############# diffusers tab ##########
def add_starter_diffusers(self)->dict[str, npyscreen.widget]:
def add_starter_pipelines(self)->dict[str, npyscreen.widget]:
'''Add widgets responsible for selecting diffusers models'''
widgets = dict()
starter_model_labels = self._get_starter_model_labels()
recommended_models = [
x
for x in self.starter_model_list
if self.starter_models[x].get("recommended", False)
]
models = self.all_models
starters = self.starter_models
starter_model_labels = self.model_labels
self.installed_models = sorted(
[x for x in list(self.starter_models.keys()) if x in self.existing_models]
[x for x in starters if models[x].installed]
)
widgets.update(
@ -246,55 +221,46 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
self.nextrely -= 1
# if user has already installed some initial models, then don't patronize them
# by showing more recommendations
show_recommended = not self.existing_models
show_recommended = len(self.installed_models)==0
keys = [x for x in models.keys() if x in starters]
widgets.update(
models_selected = self.add_widget_intelligent(
MultiSelectColumns,
columns=1,
name="Install Starter Models",
values=starter_model_labels,
values=[starter_model_labels[x] for x in keys],
value=[
self.starter_model_list.index(x)
for x in self.starter_model_list
if (show_recommended and x in recommended_models)\
or (x in self.existing_models)
keys.index(x)
for x in keys
if (show_recommended and models[x].recommended) \
or (x in self.installed_models)
],
max_height=len(starter_model_labels) + 1,
max_height=len(starters) + 1,
relx=4,
scroll_exit=True,
)
),
models = keys,
)
widgets.update(
purge_deleted = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Purge unchecked diffusers models from disk",
value=False,
scroll_exit=True,
relx=4,
)
)
widgets['purge_deleted'].when_value_edited = lambda: self.sync_purge_buttons(widgets['purge_deleted'])
self.nextrely += 1
return widgets
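# Worked example of how the pre-checked rows for the starter-model selector are
# computed above: recommended models are checked only on a fresh install, and
# already-installed models always stay checked. The records are stand-ins.
from dataclasses import dataclass

@dataclass
class Record:
    installed: bool
    recommended: bool

models = {
    "sd-1.5": Record(installed=True, recommended=True),
    "sd-2.1": Record(installed=False, recommended=True),
    "analog-v1": Record(installed=False, recommended=False),
}
keys = list(models)
installed_models = sorted(k for k in keys if models[k].installed)
show_recommended = len(installed_models) == 0     # don't patronize existing users
checked_rows = [
    keys.index(k) for k in keys
    if (show_recommended and models[k].recommended) or k in installed_models
]
print(checked_rows)  # -> [0]: only the already-installed model stays checked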
############# Add a set of model install widgets ########
def add_model_widgets(self,
predefined_models: dict[str,bool],
model_type: str,
model_type: ModelType,
window_width: int=120,
install_prompt: str=None,
add_purge_deleted: bool=False,
exclude: set=set(),
)->dict[str,npyscreen.widget]:
'''Generic code to create model selection widgets'''
widgets = dict()
model_list = sorted(predefined_models.keys())
model_list = [x for x in self.all_models if self.all_models[x].model_type==model_type and x not in exclude]
model_labels = [self.model_labels[x] for x in model_list]
if len(model_list) > 0:
max_width = max([len(x) for x in model_list])
max_width = max([len(x) for x in model_labels])
columns = window_width // (max_width+8) # 8 characters for "[x] " and padding
columns = min(len(model_list),columns) or 1
prompt = install_prompt or f"Select the desired {model_type} models to install. Unchecked models will be purged from disk."
prompt = install_prompt or f"Select the desired {model_type.value.title()} models to install. Unchecked models will be purged from disk."
widgets.update(
label1 = self.add_widget_intelligent(
@@ -310,31 +276,19 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
MultiSelectColumns,
columns=columns,
name=f"Install {model_type} Models",
values=model_list,
values=model_labels,
value=[
model_list.index(x)
for x in model_list
if predefined_models[x]
if self.all_models[x].installed
],
max_height=len(model_list)//columns + 1,
relx=4,
scroll_exit=True,
)
),
models = model_list,
)
if add_purge_deleted:
self.nextrely += 1
widgets.update(
purge_deleted = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Purge unchecked diffusers models from disk",
value=False,
scroll_exit=True,
relx=4,
)
)
widgets['purge_deleted'].when_value_edited = lambda: self.sync_purge_buttons(widgets['purge_deleted'])
self.nextrely += 1
widgets.update(
download_ids = self.add_widget_intelligent(
@@ -348,63 +302,33 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
return widgets
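# Sketch of the column heuristic used by add_model_widgets() above: pack as many
# checkbox columns as fit, budgeting ~8 cells per column for "[x] " and padding,
# clamped to at least one column and at most one per model.
def compute_columns(labels: list[str], window_width: int = 120) -> int:
    max_width = max(len(x) for x in labels)
    columns = window_width // (max_width + 8)
    return min(len(labels), columns) or 1

print(compute_columns(["canny", "depth", "openpose"]))      # several columns
print(compute_columns(["a-very-long-model-label" * 6]))     # clamps to 1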
### Tab for arbitrary diffusers widgets ###
def add_diffusers_widgets(self,
predefined_models: dict[str,bool],
model_type: str='Diffusers',
window_width: int=120,
)->dict[str,npyscreen.widget]:
def add_pipeline_widgets(self,
model_type: ModelType=ModelType.Main,
window_width: int=120,
**kwargs,
)->dict[str,npyscreen.widget]:
'''Similar to add_model_widgets() but adds some additional widgets at the bottom
to support the autoload directory'''
widgets = self.add_model_widgets(
predefined_models,
'Diffusers',
window_width,
install_prompt="Additional diffusers models already installed.",
add_purge_deleted=True
model_type = model_type,
window_width = window_width,
install_prompt=f"Additional {model_type.value.title()} models already installed.",
**kwargs,
)
label = "Directory to scan for models to automatically import (<tab> autocompletes):"
self.nextrely += 1
widgets.update(
autoload_directory = self.add_widget_intelligent(
FileBox,
max_height=3,
name=label,
value=str(config.autoconvert_dir) if config.autoconvert_dir else None,
select_dir=True,
must_exist=True,
use_two_lines=False,
labelColor="DANGER",
begin_entry_at=len(label)+1,
scroll_exit=True,
)
)
widgets.update(
autoscan_on_startup = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Scan and import from this directory each time InvokeAI starts",
value=config.autoconvert_dir is not None,
relx=4,
scroll_exit=True,
)
)
return widgets
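# Rough stand-in for the autoload-directory widgets above, using stock npyscreen
# controls; the real code uses InvokeAI's custom FileBox (with tab-completion)
# and seeds the default value from config.autoconvert_dir.
import npyscreen

class AutoloadSection(npyscreen.Form):
    def create(self):
        self.autoload_directory = self.add(
            npyscreen.TitleFilename,
            name="Directory to scan for models to automatically import:",
        )
        self.autoscan_on_startup = self.add(
            npyscreen.Checkbox,
            name="Scan and import from this directory each time InvokeAI starts",
            value=False,
        )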
def sync_purge_buttons(self,checkbox):
value = checkbox.value
self.starter_diffusers_models['purge_deleted'].value = value
self.diffusers_models['purge_deleted'].value = value
def resize(self):
super().resize()
if (s := self.starter_diffusers_models.get("models_selected")):
s.values = self._get_starter_model_labels()
if (s := self.starter_pipelines.get("models_selected")):
keys = [x for x in self.all_models.keys() if x in self.starter_models]
s.values = [self.model_labels[x] for x in keys]
def _toggle_tables(self, value=None):
selected_tab = value[0]
widgets = [
self.starter_diffusers_models,
self.diffusers_models,
self.starter_pipelines,
self.pipeline_models,
self.controlnet_models,
self.lora_models,
self.ti_models,
@@ -412,34 +336,38 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
for group in widgets:
for k,v in group.items():
v.hidden = True
v.editable = False
try:
v.hidden = True
v.editable = False
except:
pass
for k,v in widgets[selected_tab].items():
v.hidden = False
if not isinstance(v,(npyscreen.FixedText, npyscreen.TitleFixedText, CenteredTitleText)):
v.editable = True
try:
v.hidden = False
if not isinstance(v,(npyscreen.FixedText, npyscreen.TitleFixedText, CenteredTitleText)):
v.editable = True
except:
pass
self.__class__.current_tab = selected_tab # for persistence
self.display()
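# Sketch of the show/hide logic above: hide every tab's widgets, then reveal the
# selected tab; the try/except tolerates entries (e.g. the 'models' name list)
# that are not widgets at all.
def toggle_tables(groups: list[dict], selected_tab: int) -> None:
    for group in groups:
        for widget in group.values():
            try:
                widget.hidden = True
                widget.editable = False
            except AttributeError:
                pass
    for widget in groups[selected_tab].values():
        try:
            widget.hidden = False
            widget.editable = True
        except AttributeError:
            pass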
def _get_starter_model_labels(self) -> List[str]:
def _get_model_labels(self) -> dict[str,str]:
window_width, window_height = get_terminal_size()
label_width = 25
checkbox_width = 4
spacing_width = 2
models = self.all_models
label_width = max([len(models[x].name) for x in models])
description_width = window_width - label_width - checkbox_width - spacing_width
im = self.starter_models
names = self.starter_model_list
descriptions = [
im[x].description[0 : description_width - 3] + "..."
if len(im[x].description) > description_width
else im[x].description
for x in names
]
return [
f"%-{label_width}s %s" % (names[x], descriptions[x])
for x in range(0, len(names))
]
result = dict()
for x in models.keys():
description = models[x].description
description = description[0 : description_width - 3] + "..." \
if description and len(description) > description_width \
else description if description else ''
result[x] = f"%-{label_width}s %s" % (models[x].name, description)
return result
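# Worked example of the label format above: left-pad the name to the longest
# name in the table, then truncate the description to the space that remains.
def make_label(name: str, description: str, label_width: int, window_width: int) -> str:
    checkbox_width, spacing_width = 4, 2
    description_width = window_width - label_width - checkbox_width - spacing_width
    if description and len(description) > description_width:
        description = description[: description_width - 3] + "..."
    return f"%-{label_width}s %s" % (name, description or "")

print(make_label("sd-1.5", "Stable Diffusion version 1.5, trained on LAION", 10, 40))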
def _get_columns(self) -> int:
window_width, window_height = get_terminal_size()
@@ -467,7 +395,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
target = process_and_execute,
kwargs=dict(
opt = app.program_opts,
selections = app.user_selections,
selections = app.install_selections,
conn_out = child_conn,
)
)
@@ -475,8 +403,8 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
child_conn.close()
self.subprocess_connection = parent_conn
self.subprocess = p
app.user_selections = UserSelections()
# process_and_execute(app.opt, app.user_selections)
app.install_selections = InstallSelections()
# process_and_execute(app.opt, app.install_selections)
def on_back(self):
self.parentApp.switchFormPrevious()
@@ -492,7 +420,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
self.parentApp.setNextForm(None)
self.parentApp.user_cancelled = False
self.editing = False
########## This routine monitors the child process that is performing model installation and removal #####
def while_waiting(self):
'''Called during idle periods. Main task is to update the Log Messages box with messages
@@ -548,8 +476,8 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
# rebuild the form, saving and restoring some of the fields that need to be preserved.
saved_messages = self.monitor.entry_widget.values
autoload_dir = self.diffusers_models['autoload_directory'].value
autoscan = self.diffusers_models['autoscan_on_startup'].value
# autoload_dir = str(config.root_path / self.pipeline_models['autoload_directory'].value)
# autoscan = self.pipeline_models['autoscan_on_startup'].value
app.main_form = app.addForm(
"MAIN", addModelsForm, name="Install Stable Diffusion Models", multipage=self.multipage,
@@ -558,23 +486,8 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
app.main_form.monitor.entry_widget.values = saved_messages
app.main_form.monitor.entry_widget.buffer([''],scroll_end=True)
app.main_form.diffusers_models['autoload_directory'].value = autoload_dir
app.main_form.diffusers_models['autoscan_on_startup'].value = autoscan
###############################################################
def list_additional_diffusers_models(self,
manager: ModelManager,
starters:dict
)->dict[str,bool]:
'''Return a dict of all the currently installed models that are not on the starter list'''
model_info = manager.list_models()
additional_models = {
x:True for x in model_info \
if model_info[x]['format']=='diffusers' \
and x not in starters
}
return additional_models
# app.main_form.pipeline_models['autoload_directory'].value = autoload_dir
# app.main_form.pipeline_models['autoscan_on_startup'].value = autoscan
def marshall_arguments(self):
"""
@@ -586,89 +499,40 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
.autoscan_on_startup: True if invokeai should scan and import at startup time
.import_model_paths: list of URLs, repo_ids and file paths to import
"""
# we're using a global here rather than storing the result in the parentapp
# due to some bug in npyscreen that is causing attributes to be lost
selections = self.parentApp.user_selections
selections = self.parentApp.install_selections
all_models = self.all_models
# Starter models to install/remove
starter_models = dict(
map(
lambda x: (self.starter_model_list[x], True),
self.starter_diffusers_models['models_selected'].value,
)
)
selections.purge_deleted_models = self.starter_diffusers_models['purge_deleted'].value or \
self.diffusers_models['purge_deleted'].value
selections.install_models = [x for x in starter_models if x not in self.existing_models]
selections.remove_models = [x for x in self.starter_model_list if x in self.existing_models and x not in starter_models]
# Defined models (in INITIAL_CONFIG.yaml or models.yaml) to add/remove
ui_sections = [self.starter_pipelines, self.pipeline_models,
self.controlnet_models, self.lora_models, self.ti_models]
for section in ui_sections:
if not 'models_selected' in section:
continue
selected = set([section['models'][x] for x in section['models_selected'].value])
models_to_install = [x for x in selected if not self.all_models[x].installed]
models_to_remove = [x for x in section['models'] if x not in selected and self.all_models[x].installed]
selections.remove_models.extend(models_to_remove)
selections.install_models.extend(all_models[x].path or all_models[x].repo_id \
for x in models_to_install if all_models[x].path or all_models[x].repo_id)
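# Worked example of the per-tab reconciliation above: anything checked but not
# installed is queued for install; anything installed but unchecked is queued
# for removal. Names and flags are illustrative.
section = {"models": ["canny", "depth", "mlsd"], "checked_rows": [0, 2]}
installed = {"depth"}
selected = {section["models"][i] for i in section["checked_rows"]}
models_to_install = [m for m in selected if m not in installed]
models_to_remove = [m for m in section["models"] if m not in selected and m in installed]
print(sorted(models_to_install), models_to_remove)  # ['canny', 'mlsd'] ['depth']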
# "More" models
selections.import_model_paths = self.diffusers_models['download_ids'].value.split()
if diffusers_selected := self.diffusers_models.get('models_selected'):
selections.remove_models.extend([x
for x in diffusers_selected.values
if self.installed_diffusers_models[x]
and diffusers_selected.values.index(x) not in diffusers_selected.value
]
)
# TODO: REFACTOR THIS REPETITIVE CODE
if cn_models_selected := self.controlnet_models.get('models_selected'):
selections.install_cn_models = [cn_models_selected.values[x]
for x in cn_models_selected.value
if not self.installed_cn_models[cn_models_selected.values[x]]
]
selections.remove_cn_models = [x
for x in cn_models_selected.values
if self.installed_cn_models[x]
and cn_models_selected.values.index(x) not in cn_models_selected.value
]
if (additional_cns := self.controlnet_models['download_ids'].value.split()):
valid_cns = [x for x in additional_cns if '/' in x]
selections.install_cn_models.extend(valid_cns)
# models located in the 'download_ids" section
for section in ui_sections:
if downloads := section.get('download_ids'):
selections.install_models.extend(downloads.value.split())
# same thing, for LoRAs
if loras_selected := self.lora_models.get('models_selected'):
selections.install_lora_models = [loras_selected.values[x]
for x in loras_selected.value
if not self.installed_lora_models[loras_selected.values[x]]
]
selections.remove_lora_models = [x
for x in loras_selected.values
if self.installed_lora_models[x]
and loras_selected.values.index(x) not in loras_selected.value
]
if (additional_loras := self.lora_models['download_ids'].value.split()):
selections.install_lora_models.extend(additional_loras)
# same thing, for TIs
# TODO: refactor
if tis_selected := self.ti_models.get('models_selected'):
selections.install_ti_models = [tis_selected.values[x]
for x in tis_selected.value
if not self.installed_ti_models[tis_selected.values[x]]
]
selections.remove_ti_models = [x
for x in tis_selected.values
if self.installed_ti_models[x]
and tis_selected.values.index(x) not in tis_selected.value
]
if (additional_tis := self.ti_models['download_ids'].value.split()):
selections.install_ti_models.extend(additional_tis)
# load directory and whether to scan on startup
selections.scan_directory = self.diffusers_models['autoload_directory'].value
selections.autoscan_on_startup = self.diffusers_models['autoscan_on_startup'].value
# if self.parentApp.autoload_pending:
# selections.scan_directory = str(config.root_path / self.pipeline_models['autoload_directory'].value)
# self.parentApp.autoload_pending = False
# selections.autoscan_on_startup = self.pipeline_models['autoscan_on_startup'].value
class AddModelApplication(npyscreen.NPSAppManaged):
def __init__(self,opt):
super().__init__()
self.program_opts = opt
self.user_cancelled = False
self.user_selections = UserSelections()
# self.autoload_pending = True
self.install_selections = InstallSelections()
def onStart(self):
npyscreen.setTheme(npyscreen.Themes.DefaultTheme)
@@ -687,26 +551,22 @@ class StderrToMessage():
pass
# --------------------------------------------------------
def ask_user_for_config_file(model_path: Path,
tui_conn: Connection=None
)->Path:
def ask_user_for_prediction_type(model_path: Path,
tui_conn: Connection=None
)->SchedulerPredictionType:
if tui_conn:
logger.debug('Waiting for user response...')
return _ask_user_for_cf_tui(model_path, tui_conn)
return _ask_user_for_pt_tui(model_path, tui_conn)
else:
return _ask_user_for_cf_cmdline(model_path)
return _ask_user_for_pt_cmdline(model_path)
def _ask_user_for_cf_cmdline(model_path):
choices = [
config.legacy_conf_path / x
for x in ['v2-inference.yaml','v2-inference-v.yaml']
]
choices.extend([None])
def _ask_user_for_pt_cmdline(model_path: Path)->SchedulerPredictionType:
choices = [SchedulerPredictionType.Epsilon, SchedulerPredictionType.VPrediction, None]
print(
f"""
Please select the type of the V2 checkpoint named {model_path.name}:
[1] A Stable Diffusion v2.x base model (512 pixels; there should be no 'parameterization:' line in its yaml file)
[2] A Stable Diffusion v2.x v-predictive model (768 pixels; look for a 'parameterization: "v"' line in its yaml file)
[1] A model based on Stable Diffusion v2 trained on 512 pixel images (SD-2-base)
[2] A model based on Stable Diffusion v2 trained on 768 pixel images (SD-2-768)
[3] Skip this model and come back later.
"""
)
@@ -723,7 +583,7 @@ Please select the type of the V2 checkpoint named {model_path.name}:
return
return choice
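# Hedged sketch of the prompt-to-enum mapping: the member names match the code
# above, while the string values here are assumptions about the enum's definition.
from enum import Enum

class SchedulerPredictionType(str, Enum):
    Epsilon = "epsilon"            # assumed value
    VPrediction = "v_prediction"   # assumed value

choices = [SchedulerPredictionType.Epsilon, SchedulerPredictionType.VPrediction, None]
answer = "2"                       # stand-in for the user's input()
choice = choices[int(answer) - 1]
print(choice)                      # SchedulerPredictionType.VPrediction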
def _ask_user_for_cf_tui(model_path: Path, tui_conn: Connection)->Path:
def _ask_user_for_pt_tui(model_path: Path, tui_conn: Connection)->SchedulerPredictionType:
try:
tui_conn.send_bytes(f'*need v2 config for:{model_path}'.encode('utf-8'))
# note that we don't do any status checking here
@@ -731,20 +591,20 @@ def _ask_user_for_cf_tui(model_path: Path, tui_conn: Connection)->Path:
if response is None:
return None
elif response == 'epsilon':
return config.legacy_conf_path / 'v2-inference.yaml'
return SchedulerPredictionType.Epsilon
elif response == 'v':
return config.legacy_conf_path / 'v2-inference-v.yaml'
return SchedulerPredictionType.VPrediction
elif response == 'abort':
logger.info('Conversion aborted')
return None
else:
return Path(response)
return response
except:
return None
# --------------------------------------------------------
def process_and_execute(opt: Namespace,
selections: UserSelections,
selections: InstallSelections,
conn_out: Connection=None,
):
# set up so that stderr is sent to conn_out
@@ -755,34 +615,14 @@ def process_and_execute(opt: Namespace,
logger = InvokeAILogger.getLogger()
logger.handlers.clear()
logger.addHandler(logging.StreamHandler(translator))
models_to_install = selections.install_models
models_to_remove = selections.remove_models
directory_to_scan = selections.scan_directory
scan_at_startup = selections.autoscan_on_startup
potential_models_to_install = selections.import_model_paths
install_requested_models(
diffusers = ModelInstallList(models_to_install, models_to_remove),
controlnet = ModelInstallList(selections.install_cn_models, selections.remove_cn_models),
lora = ModelInstallList(selections.install_lora_models, selections.remove_lora_models),
ti = ModelInstallList(selections.install_ti_models, selections.remove_ti_models),
scan_directory=Path(directory_to_scan) if directory_to_scan else None,
external_models=potential_models_to_install,
scan_at_startup=scan_at_startup,
precision="float32"
if opt.full_precision
else choose_precision(torch.device(choose_torch_device())),
purge_deleted=selections.purge_deleted_models,
config_file_path=Path(opt.config_file) if opt.config_file else config.model_conf_path,
model_config_file_callback = lambda x: ask_user_for_config_file(x,conn_out)
)
installer = ModelInstall(config, prediction_type_helper=lambda x: ask_user_for_prediction_type(x,conn_out))
installer.install(selections)
if conn_out:
conn_out.send_bytes('*done*'.encode('utf-8'))
conn_out.close()
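# Self-contained sketch of the stderr-to-pipe plumbing above: a file-like shim
# (named StderrToMessage in this diff; this body is an assumption) forwards
# installer log lines over the multiprocessing Connection to the TUI.
import logging
from multiprocessing import Pipe

class StderrToMessage:
    def __init__(self, connection):
        self.connection = connection
    def write(self, data: str):
        self.connection.send_bytes(data.encode("utf-8"))
    def flush(self):
        pass

parent_conn, child_conn = Pipe()
logger = logging.getLogger("install_demo")
logger.addHandler(logging.StreamHandler(StderrToMessage(child_conn)))
logger.error("could not fetch model")              # travels down the pipe
print(parent_conn.recv_bytes().decode("utf-8"))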
def do_listings(opt)->bool:
"""List installed models of various sorts, and return
True if any were requested."""
@@ -813,39 +653,34 @@ def select_and_download_models(opt: Namespace):
if opt.full_precision
else choose_precision(torch.device(choose_torch_device()))
)
if do_listings(opt):
pass
# this processes command line additions/removals
elif opt.diffusers or opt.controlnets or opt.textual_inversions or opt.loras:
action = 'remove_models' if opt.delete else 'install_models'
diffusers_args = {'diffusers':ModelInstallList(remove_models=opt.diffusers or [])} \
if opt.delete \
else {'external_models':opt.diffusers or []}
install_requested_models(
**diffusers_args,
controlnet=ModelInstallList(**{action:opt.controlnets or []}),
ti=ModelInstallList(**{action:opt.textual_inversions or []}),
lora=ModelInstallList(**{action:opt.loras or []}),
precision=precision,
purge_deleted=True,
model_config_file_callback=lambda x: ask_user_for_config_file(x),
config.precision = precision
helper = lambda x: ask_user_for_prediction_type(x)
# if do_listings(opt):
# pass
installer = ModelInstall(config, prediction_type_helper=helper)
if opt.add or opt.delete:
selections = InstallSelections(
install_models = opt.add or [],
remove_models = opt.delete or []
)
installer.install(selections)
elif opt.default_only:
install_requested_models(
diffusers=ModelInstallList(install_models=default_dataset()),
precision=precision,
selections = InstallSelections(
install_models = installer.default_model()
)
installer.install(selections)
elif opt.yes_to_all:
install_requested_models(
diffusers=ModelInstallList(install_models=recommended_datasets()),
precision=precision,
selections = InstallSelections(
install_models = installer.recommended_models()
)
installer.install(selections)
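# Sketch of the InstallSelections record implied by this diff: the field names
# are taken from the code above, but the dataclass shape itself is an assumption.
from dataclasses import dataclass, field

@dataclass
class InstallSelections:
    install_models: list = field(default_factory=list)
    remove_models: list = field(default_factory=list)

selections = InstallSelections(
    install_models=["runwayml/stable-diffusion-v1-5"],  # repo_id, URL or local path
    remove_models=["my-old-checkpoint"],
)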
# this is where the TUI is called
else:
# needed because the torch library is loaded, even though we don't use it
torch.multiprocessing.set_start_method("spawn")
# currently commented out because it has started generating errors (?)
# torch.multiprocessing.set_start_method("spawn")
# the third argument is needed in the Windows 11 environment in
# order to launch and resize a console window running this program
@@ -861,35 +696,20 @@ def select_and_download_models(opt: Namespace):
installApp.main_form.subprocess.terminate()
installApp.main_form.subprocess = None
raise e
process_and_execute(opt, installApp.user_selections)
process_and_execute(opt, installApp.install_selections)
# -------------------------------------
def main():
parser = argparse.ArgumentParser(description="InvokeAI model downloader")
parser.add_argument(
"--diffusers",
"--add",
nargs="*",
help="List of URLs or repo_ids of diffusers to install/delete",
)
parser.add_argument(
"--loras",
nargs="*",
help="List of URLs or repo_ids of LoRA/LyCORIS models to install/delete",
)
parser.add_argument(
"--controlnets",
nargs="*",
help="List of URLs or repo_ids of controlnet models to install/delete",
)
parser.add_argument(
"--textual-inversions",
nargs="*",
help="List of URLs or repo_ids of textual inversion embeddings to install/delete",
help="List of URLs, local paths or repo_ids of models to install",
)
parser.add_argument(
"--delete",
action="store_true",
help="Delete models listed on command line rather than installing them",
nargs="*",
help="List of names of models to idelete",
)
parser.add_argument(
"--full-precision",
@@ -909,7 +729,7 @@ def main():
parser.add_argument(
"--default_only",
action="store_true",
help="only install the default model",
help="Only install the default model",
)
parser.add_argument(
"--list-models",
@@ -965,13 +785,15 @@ def main():
logger.error(
"Insufficient vertical space for the interface. Please make your window taller and try again"
)
elif str(e).startswith("addwstr"):
input('Press any key to continue...')
except Exception as e:
if str(e).startswith("addwstr"):
logger.error(
"Insufficient horizontal space for the interface. Please make your window wider and try again."
)
except Exception as e:
print(f'An exception has occurred: {str(e)} Details:')
print(traceback.format_exc(), file=sys.stderr)
else:
print(f'An exception has occurred: {str(e)} Details:')
print(traceback.format_exc(), file=sys.stderr)
input('Press any key to continue...')


@@ -17,8 +17,8 @@ from shutil import get_terminal_size
from curses import BUTTON2_CLICKED,BUTTON3_CLICKED
# minimum size for UIs
MIN_COLS = 120
MIN_LINES = 50
MIN_COLS = 130
MIN_LINES = 45
# -------------------------------------
def set_terminal_size(columns: int, lines: int, launch_command: str=None):
@@ -42,6 +42,18 @@ def set_terminal_size(columns: int, lines: int, launch_command: str=None):
elif OS in ["Darwin", "Linux"]:
_set_terminal_size_unix(width,height)
# check whether it worked....
ts = get_terminal_size()
pause = False
if ts.columns < columns:
print('\033[1mThis window is too narrow for the user interface. Please make it wider.\033[0m')
pause = True
if ts.lines < lines:
print('\033[1mThis window is too short for the user interface. Please make it taller.\033[0m')
pause = True
if pause:
input('Press any key to continue...')
def _set_terminal_size_powershell(width: int, height: int):
script=f'''
$pshost = get-host
@@ -61,6 +73,12 @@ def _set_terminal_size_unix(width: int, height: int):
import fcntl
import termios
# These terminals accept the size command and report that the
# size changed, but they lie!!!
for bad_terminal in ['TERMINATOR_UUID', 'ALACRITTY_WINDOW_ID']:
if os.environ.get(bad_terminal):
return
winsize = struct.pack("HHHH", height, width, 0, 0)
fcntl.ioctl(sys.stdout.fileno(), termios.TIOCSWINSZ, winsize)
sys.stdout.write("\x1b[8;{height};{width}t".format(height=height, width=width))
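# Consolidated, runnable form of the Unix resize path above: the TIOCSWINSZ
# ioctl updates the kernel's idea of the terminal size, and the
# "\x1b[8;{h};{w}t" escape asks the emulator itself to resize; the blocklist
# skips emulators that acknowledge the escape but ignore it.
import fcntl, os, struct, sys, termios

def set_terminal_size_unix(width: int, height: int) -> None:
    for bad_terminal in ("TERMINATOR_UUID", "ALACRITTY_WINDOW_ID"):
        if os.environ.get(bad_terminal):
            return
    winsize = struct.pack("HHHH", height, width, 0, 0)
    fcntl.ioctl(sys.stdout.fileno(), termios.TIOCSWINSZ, winsize)
    sys.stdout.write(f"\x1b[8;{height};{width}t")
    sys.stdout.flush()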
@@ -75,6 +93,12 @@ def set_min_terminal_size(min_cols: int, min_lines: int, launch_command: str=Non
lines = max(term_lines, min_lines)
set_terminal_size(cols, lines, launch_command)
# did it work?
term_cols, term_lines = get_terminal_size()
if term_cols < cols or term_lines < lines:
print('This window is too small for optimal display. For best results please enlarge it.')
input('After resizing, press any key to continue...')
class IntSlider(npyscreen.Slider):
def translate_value(self):
stri = "%2d / %2d" % (self.value, self.out_of)
@@ -378,13 +402,12 @@ def select_stable_diffusion_config_file(
wrap:bool =True,
model_name:str='Unknown',
):
message = "Please select the correct base model for the V2 checkpoint named {model_name}. Press <CANCEL> to skip installation."
message = f"Please select the correct base model for the V2 checkpoint named '{model_name}'. Press <CANCEL> to skip installation."
title = "CONFIG FILE SELECTION"
options=[
"An SD v2.x base model (512 pixels; no 'parameterization:' line in its yaml file)",
"An SD v2.x v-predictive model (768 pixels; 'parameterization: \"v\"' line in its yaml file)",
"Skip installation for now and come back later",
"Enter config file path manually",
]
F = ConfirmCancelPopup(
@@ -406,35 +429,17 @@ def select_stable_diffusion_config_file(
mlw.values = message
choice = F.add(
SingleSelectWithChanged,
npyscreen.SelectOne,
values = options,
value = [0],
max_height = len(options)+1,
scroll_exit=True,
)
file = F.add(
FileBox,
name='Path to config file',
max_height=3,
hidden=True,
must_exist=True,
scroll_exit=True
)
def toggle_visible(value):
value = value[0]
if value==3:
file.hidden=False
else:
file.hidden=True
F.display()
choice.on_changed = toggle_visible
F.editw = 1
F.edit()
if not F.value:
return None
assert choice.value[0] in range(0,4),'invalid choice'
choices = ['epsilon','v','abort',file.value]
assert choice.value[0] in range(0,3),'invalid choice'
choices = ['epsilon','v','abort']
return choices[choice.value[0]]
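# Worked example of the simplified mapping above: the three fixed options line
# up index-for-index with the strings the child process understands.
options = [
    "An SD v2.x base model (512 pixels)",
    "An SD v2.x v-predictive model (768 pixels)",
    "Skip installation for now and come back later",
]
choices = ["epsilon", "v", "abort"]
selected = 1                       # stand-in for the SelectOne widget's value[0]
assert selected in range(0, 3), "invalid choice"
print(choices[selected])           # -> 'v'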


@@ -0,0 +1,12 @@
import react from '@vitejs/plugin-react-swc';
import { visualizer } from 'rollup-plugin-visualizer';
import { PluginOption, UserConfig } from 'vite';
import eslint from 'vite-plugin-eslint';
import tsconfigPaths from 'vite-tsconfig-paths';
export const commonPlugins: UserConfig['plugins'] = [
react(),
eslint(),
tsconfigPaths(),
visualizer() as unknown as PluginOption,
];


@@ -1,17 +1,9 @@
import react from '@vitejs/plugin-react-swc';
import { visualizer } from 'rollup-plugin-visualizer';
import { PluginOption, UserConfig } from 'vite';
import eslint from 'vite-plugin-eslint';
import tsconfigPaths from 'vite-tsconfig-paths';
import { UserConfig } from 'vite';
import { commonPlugins } from './common';
export const appConfig: UserConfig = {
base: './',
plugins: [
react(),
eslint(),
tsconfigPaths(),
visualizer() as unknown as PluginOption,
],
plugins: [...commonPlugins],
build: {
chunkSizeWarningLimit: 1500,
},


@@ -1,19 +1,13 @@
import react from '@vitejs/plugin-react-swc';
import path from 'path';
import { visualizer } from 'rollup-plugin-visualizer';
import { PluginOption, UserConfig } from 'vite';
import { UserConfig } from 'vite';
import dts from 'vite-plugin-dts';
import eslint from 'vite-plugin-eslint';
import tsconfigPaths from 'vite-tsconfig-paths';
import cssInjectedByJsPlugin from 'vite-plugin-css-injected-by-js';
import { commonPlugins } from './common';
export const packageConfig: UserConfig = {
base: './',
plugins: [
react(),
eslint(),
tsconfigPaths(),
visualizer() as unknown as PluginOption,
...commonPlugins,
dts({
insertTypesEntry: true,
}),

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -12,7 +12,7 @@
margin: 0;
}
</style>
<script type="module" crossorigin src="./assets/index-8a3e9251.js"></script>
<script type="module" crossorigin src="./assets/index-4dfaefdd.js"></script>
</head>
<body dir="ltr">


@@ -24,16 +24,13 @@
},
"common": {
"hotkeysLabel": "Hotkeys",
"themeLabel": "Theme",
"darkMode": "Dark Mode",
"lightMode": "Light Mode",
"languagePickerLabel": "Language",
"reportBugLabel": "Report Bug",
"githubLabel": "Github",
"discordLabel": "Discord",
"settingsLabel": "Settings",
"darkTheme": "Dark",
"lightTheme": "Light",
"greenTheme": "Green",
"oceanTheme": "Ocean",
"langArabic": "العربية",
"langEnglish": "English",
"langDutch": "Nederlands",
@@ -524,7 +521,8 @@
"initialImage": "Initial Image",
"showOptionsPanel": "Show Options Panel",
"hidePreview": "Hide Preview",
"showPreview": "Show Preview"
"showPreview": "Show Preview",
"controlNetControlMode": "Control Mode"
},
"settings": {
"models": "Models",
@@ -547,7 +545,8 @@
"general": "General",
"generation": "Generation",
"ui": "User Interface",
"availableSchedulers": "Available Schedulers"
"favoriteSchedulers": "Favorite Schedulers",
"favoriteSchedulersPlaceholder": "No schedulers favorited"
},
"toast": {
"serverError": "Server Error",


@@ -0,0 +1,122 @@
{
"accessibility": {
"reset": "Resetoi",
"useThisParameter": "Käytä tätä parametria",
"modelSelect": "Mallin Valinta",
"exitViewer": "Poistu katselimesta",
"uploadImage": "Lataa kuva",
"copyMetadataJson": "Kopioi metadata JSON:iin",
"invokeProgressBar": "Invoken edistymispalkki",
"nextImage": "Seuraava kuva",
"previousImage": "Edellinen kuva",
"zoomIn": "Lähennä",
"flipHorizontally": "Käännä vaakasuoraan",
"zoomOut": "Loitonna",
"rotateCounterClockwise": "Kierrä vastapäivään",
"rotateClockwise": "Kierrä myötäpäivään",
"flipVertically": "Käännä pystysuoraan",
"showGallery": "Näytä galleria",
"modifyConfig": "Muokkaa konfiguraatiota",
"toggleAutoscroll": "Kytke automaattinen vieritys",
"toggleLogViewer": "Kytke lokin katselutila",
"showOptionsPanel": "Näytä asetukset"
},
"common": {
"postProcessDesc2": "Erillinen käyttöliittymä tullaan julkaisemaan helpottaaksemme työnkulkua jälkikäsittelyssä.",
"training": "Kouluta",
"statusLoadingModel": "Ladataan mallia",
"statusModelChanged": "Malli vaihdettu",
"statusConvertingModel": "Muunnetaan mallia",
"statusModelConverted": "Malli muunnettu",
"langFrench": "Ranska",
"langItalian": "Italia",
"languagePickerLabel": "Kielen valinta",
"hotkeysLabel": "Pikanäppäimet",
"reportBugLabel": "Raportoi Bugista",
"langPolish": "Puola",
"themeLabel": "Teema",
"langDutch": "Hollanti",
"settingsLabel": "Asetukset",
"githubLabel": "Github",
"darkTheme": "Tumma",
"lightTheme": "Vaalea",
"greenTheme": "Vihreä",
"langGerman": "Saksa",
"langPortuguese": "Portugali",
"discordLabel": "Discord",
"langEnglish": "Englanti",
"oceanTheme": "Meren sininen",
"langRussian": "Venäjä",
"langUkranian": "Ukraina",
"langSpanish": "Espanja",
"upload": "Lataa",
"statusMergedModels": "Mallit yhdistelty",
"img2img": "Kuva kuvaksi",
"nodes": "Solmut",
"nodesDesc": "Solmupohjainen järjestelmä kuvien generoimiseen on parhaillaan kehitteillä. Pysy kuulolla päivityksistä tähän uskomattomaan ominaisuuteen liittyen.",
"postProcessDesc1": "Invoke AI tarjoaa monenlaisia jälkikäsittelyominaisuukisa. Kuvan laadun skaalaus sekä kasvojen korjaus ovat jo saatavilla WebUI:ssä. Voit ottaa ne käyttöön lisäasetusten valikosta teksti kuvaksi sekä kuva kuvaksi -välilehdiltä. Voit myös suoraan prosessoida kuvia käyttämällä kuvan toimintapainikkeita nykyisen kuvan yläpuolella tai tarkastelussa.",
"postprocessing": "Jälkikäsitellään",
"postProcessing": "Jälkikäsitellään",
"cancel": "Peruuta",
"close": "Sulje",
"accept": "Hyväksy",
"statusConnected": "Yhdistetty",
"statusError": "Virhe",
"statusProcessingComplete": "Prosessointi valmis",
"load": "Lataa",
"back": "Takaisin",
"statusGeneratingTextToImage": "Generoidaan tekstiä kuvaksi",
"trainingDesc2": "InvokeAI tukee jo mukautettujen upotusten kouluttamista tekstin inversiolla käyttäen pääskriptiä.",
"statusDisconnected": "Yhteys katkaistu",
"statusPreparing": "Valmistellaan",
"statusIterationComplete": "Iteraatio valmis",
"statusMergingModels": "Yhdistellään malleja",
"statusProcessingCanceled": "Valmistelu peruutettu",
"statusSavingImage": "Tallennetaan kuvaa",
"statusGeneratingImageToImage": "Generoidaan kuvaa kuvaksi",
"statusRestoringFacesGFPGAN": "Korjataan kasvoja (GFPGAN)",
"statusRestoringFacesCodeFormer": "Korjataan kasvoja (CodeFormer)",
"statusGeneratingInpainting": "Generoidaan sisällemaalausta",
"statusGeneratingOutpainting": "Generoidaan ulosmaalausta",
"statusRestoringFaces": "Korjataan kasvoja",
"pinOptionsPanel": "Kiinnitä asetukset -paneeli",
"loadingInvokeAI": "Ladataan Invoke AI:ta",
"loading": "Ladataan",
"statusGenerating": "Generoidaan",
"txt2img": "Teksti kuvaksi",
"trainingDesc1": "Erillinen työnkulku omien upotusten ja tarkastuspisteiden kouluttamiseksi käyttäen tekstin inversiota ja dreamboothia selaimen käyttöliittymässä.",
"postProcessDesc3": "Invoke AI:n komentorivi tarjoaa paljon muita ominaisuuksia, kuten esimerkiksi Embiggenin.",
"unifiedCanvas": "Yhdistetty kanvas",
"statusGenerationComplete": "Generointi valmis"
},
"gallery": {
"uploads": "Lataukset",
"showUploads": "Näytä lataukset",
"galleryImageResetSize": "Resetoi koko",
"maintainAspectRatio": "Säilytä kuvasuhde",
"galleryImageSize": "Kuvan koko",
"pinGallery": "Kiinnitä galleria",
"showGenerations": "Näytä generaatiot",
"singleColumnLayout": "Yhden sarakkeen asettelu",
"generations": "Generoinnit",
"gallerySettings": "Gallerian asetukset",
"autoSwitchNewImages": "Vaihda uusiin kuviin automaattisesti",
"allImagesLoaded": "Kaikki kuvat ladattu",
"noImagesInGallery": "Ei kuvia galleriassa",
"loadMore": "Lataa lisää"
},
"hotkeys": {
"keyboardShortcuts": "näppäimistön pikavalinnat",
"appHotkeys": "Sovelluksen pikanäppäimet",
"generalHotkeys": "Yleiset pikanäppäimet",
"galleryHotkeys": "Gallerian pikanäppäimet",
"unifiedCanvasHotkeys": "Yhdistetyn kanvaan pikanäppäimet",
"cancel": {
"desc": "Peruuta kuvan luominen",
"title": "Peruuta"
},
"invoke": {
"desc": "Luo kuva"
}
}
}


@@ -0,0 +1 @@
{}


@@ -0,0 +1,254 @@
{
"accessibility": {
"copyMetadataJson": "Kopiera metadata JSON",
"zoomIn": "Zooma in",
"exitViewer": "Avslutningsvisare",
"modelSelect": "Välj modell",
"uploadImage": "Ladda upp bild",
"invokeProgressBar": "Invoke förloppsmätare",
"nextImage": "Nästa bild",
"toggleAutoscroll": "Växla automatisk rullning",
"flipHorizontally": "Vänd vågrätt",
"flipVertically": "Vänd lodrätt",
"zoomOut": "Zooma ut",
"toggleLogViewer": "Växla logvisare",
"reset": "Starta om",
"previousImage": "Föregående bild",
"useThisParameter": "Använd denna parametern",
"showGallery": "Visa galleri",
"rotateCounterClockwise": "Rotera moturs",
"rotateClockwise": "Rotera medurs",
"modifyConfig": "Ändra konfiguration",
"showOptionsPanel": "Visa inställningspanelen"
},
"common": {
"hotkeysLabel": "Snabbtangenter",
"reportBugLabel": "Rapportera bugg",
"githubLabel": "Github",
"discordLabel": "Discord",
"settingsLabel": "Inställningar",
"darkTheme": "Mörk",
"lightTheme": "Ljus",
"greenTheme": "Grön",
"oceanTheme": "Hav",
"langEnglish": "Engelska",
"langDutch": "Nederländska",
"langFrench": "Franska",
"langGerman": "Tyska",
"langItalian": "Italienska",
"langArabic": "العربية",
"langHebrew": "עברית",
"langPolish": "Polski",
"langPortuguese": "Português",
"langBrPortuguese": "Português do Brasil",
"langSimplifiedChinese": "简体中文",
"langJapanese": "日本語",
"langKorean": "한국어",
"langRussian": "Русский",
"unifiedCanvas": "Förenad kanvas",
"nodesDesc": "Ett nodbaserat system för bildgenerering är under utveckling. Håll utkik för uppdateringar om denna fantastiska funktion.",
"langUkranian": "Украї́нська",
"langSpanish": "Español",
"postProcessDesc2": "Ett dedikerat användargränssnitt kommer snart att släppas för att underlätta mer avancerade arbetsflöden av efterbehandling.",
"trainingDesc1": "Ett dedikerat arbetsflöde för träning av dina egna inbäddningar och kontrollpunkter genom Textual Inversion eller Dreambooth från webbgränssnittet.",
"trainingDesc2": "InvokeAI stöder redan träning av anpassade inbäddningar med hjälp av Textual Inversion genom huvudscriptet.",
"upload": "Ladda upp",
"close": "Stäng",
"cancel": "Avbryt",
"accept": "Acceptera",
"statusDisconnected": "Frånkopplad",
"statusGeneratingTextToImage": "Genererar text till bild",
"statusGeneratingImageToImage": "Genererar Bild till bild",
"statusGeneratingInpainting": "Genererar Måla i",
"statusGenerationComplete": "Generering klar",
"statusModelConverted": "Modell konverterad",
"statusMergingModels": "Sammanfogar modeller",
"pinOptionsPanel": "Nåla fast inställningspanelen",
"loading": "Laddar",
"loadingInvokeAI": "Laddar Invoke AI",
"statusRestoringFaces": "Återskapar ansikten",
"languagePickerLabel": "Språkväljare",
"themeLabel": "Tema",
"txt2img": "Text till bild",
"nodes": "Noder",
"img2img": "Bild till bild",
"postprocessing": "Efterbehandling",
"postProcessing": "Efterbehandling",
"load": "Ladda",
"training": "Träning",
"postProcessDesc1": "Invoke AI erbjuder ett brett utbud av efterbehandlingsfunktioner. Uppskalning och ansiktsåterställning finns redan tillgängligt i webbgränssnittet. Du kommer åt dem ifrån Avancerade inställningar-menyn under Bild till bild-fliken. Du kan också behandla bilder direkt genom att använda knappen bildåtgärder ovanför nuvarande bild eller i bildvisaren.",
"postProcessDesc3": "Invoke AI's kommandotolk erbjuder många olika funktioner, bland annat \"Förstora\".",
"statusGenerating": "Genererar",
"statusError": "Fel",
"back": "Bakåt",
"statusConnected": "Ansluten",
"statusPreparing": "Förbereder",
"statusProcessingCanceled": "Bearbetning avbruten",
"statusProcessingComplete": "Bearbetning färdig",
"statusGeneratingOutpainting": "Genererar Fyll ut",
"statusIterationComplete": "Itterering klar",
"statusSavingImage": "Sparar bild",
"statusRestoringFacesGFPGAN": "Återskapar ansikten (GFPGAN)",
"statusRestoringFacesCodeFormer": "Återskapar ansikten (CodeFormer)",
"statusUpscaling": "Skala upp",
"statusUpscalingESRGAN": "Uppskalning (ESRGAN)",
"statusModelChanged": "Modell ändrad",
"statusLoadingModel": "Laddar modell",
"statusConvertingModel": "Konverterar modell",
"statusMergedModels": "Modeller sammanfogade"
},
"gallery": {
"generations": "Generationer",
"showGenerations": "Visa generationer",
"uploads": "Uppladdningar",
"showUploads": "Visa uppladdningar",
"galleryImageSize": "Bildstorlek",
"allImagesLoaded": "Alla bilder laddade",
"loadMore": "Ladda mer",
"galleryImageResetSize": "Återställ storlek",
"gallerySettings": "Galleriinställningar",
"maintainAspectRatio": "Behåll bildförhållande",
"pinGallery": "Nåla fast galleri",
"noImagesInGallery": "Inga bilder i galleriet",
"autoSwitchNewImages": "Ändra automatiskt till nya bilder",
"singleColumnLayout": "Enkolumnslayout"
},
"hotkeys": {
"generalHotkeys": "Allmänna snabbtangenter",
"galleryHotkeys": "Gallerisnabbtangenter",
"unifiedCanvasHotkeys": "Snabbtangenter för sammanslagskanvas",
"invoke": {
"title": "Anropa",
"desc": "Genererar en bild"
},
"cancel": {
"title": "Avbryt",
"desc": "Avbryt bildgenerering"
},
"focusPrompt": {
"desc": "Fokusera området för promptinmatning",
"title": "Fokusprompt"
},
"pinOptions": {
"desc": "Nåla fast alternativpanelen",
"title": "Nåla fast alternativ"
},
"toggleOptions": {
"title": "Växla inställningar",
"desc": "Öppna och stäng alternativpanelen"
},
"toggleViewer": {
"title": "Växla visaren",
"desc": "Öppna och stäng bildvisaren"
},
"toggleGallery": {
"title": "Växla galleri",
"desc": "Öppna eller stäng galleribyrån"
},
"maximizeWorkSpace": {
"title": "Maximera arbetsyta",
"desc": "Stäng paneler och maximera arbetsyta"
},
"changeTabs": {
"title": "Växla flik",
"desc": "Byt till en annan arbetsyta"
},
"consoleToggle": {
"title": "Växla konsol",
"desc": "Öppna och stäng konsol"
},
"setSeed": {
"desc": "Använd seed för nuvarande bild",
"title": "välj seed"
},
"setParameters": {
"title": "Välj parametrar",
"desc": "Använd alla parametrar från nuvarande bild"
},
"setPrompt": {
"desc": "Använd prompt för nuvarande bild",
"title": "Välj prompt"
},
"restoreFaces": {
"title": "Återskapa ansikten",
"desc": "Återskapa nuvarande bild"
},
"upscale": {
"title": "Skala upp",
"desc": "Skala upp nuvarande bild"
},
"showInfo": {
"title": "Visa info",
"desc": "Visa metadata för nuvarande bild"
},
"sendToImageToImage": {
"title": "Skicka till Bild till bild",
"desc": "Skicka nuvarande bild till Bild till bild"
},
"deleteImage": {
"title": "Radera bild",
"desc": "Radera nuvarande bild"
},
"closePanels": {
"title": "Stäng paneler",
"desc": "Stäng öppna paneler"
},
"previousImage": {
"title": "Föregående bild",
"desc": "Visa föregående bild"
},
"nextImage": {
"title": "Nästa bild",
"desc": "Visa nästa bild"
},
"toggleGalleryPin": {
"title": "Växla gallerinål",
"desc": "Nålar fast eller nålar av galleriet i gränssnittet"
},
"increaseGalleryThumbSize": {
"title": "Förstora galleriets bildstorlek",
"desc": "Förstora miniatyrbildernas storlek"
},
"decreaseGalleryThumbSize": {
"title": "Minska gelleriets bildstorlek",
"desc": "Minska miniatyrbildernas storlek i galleriet"
},
"decreaseBrushSize": {
"desc": "Förminska storleken på kanvas- pensel eller suddgummi",
"title": "Minska penselstorlek"
},
"increaseBrushSize": {
"title": "Öka penselstorlek",
"desc": "Öka stoleken på kanvas- pensel eller suddgummi"
},
"increaseBrushOpacity": {
"title": "Öka penselns opacitet",
"desc": "Öka opaciteten för kanvaspensel"
},
"decreaseBrushOpacity": {
"desc": "Minska kanvaspenselns opacitet",
"title": "Minska penselns opacitet"
},
"moveTool": {
"title": "Flytta",
"desc": "Tillåt kanvasnavigation"
},
"fillBoundingBox": {
"title": "Fyll ram",
"desc": "Fyller ramen med pensels färg"
},
"keyboardShortcuts": "Snabbtangenter",
"appHotkeys": "Appsnabbtangenter",
"selectBrush": {
"desc": "Välj kanvaspensel",
"title": "Välj pensel"
},
"selectEraser": {
"desc": "Välj kanvassuddgummi",
"title": "Välj suddgummi"
},
"eraseBoundingBox": {
"title": "Ta bort ram"
}
}
}


@@ -0,0 +1,64 @@
{
"accessibility": {
"invokeProgressBar": "Invoke ilerleme durumu",
"nextImage": "Sonraki Resim",
"useThisParameter": "Kullanıcı parametreleri",
"copyMetadataJson": "Metadata verilerini kopyala (JSON)",
"exitViewer": "Görüntüleme Modundan Çık",
"zoomIn": "Yakınlaştır",
"zoomOut": "Uzaklaştır",
"rotateCounterClockwise": "Döndür (Saat yönünün tersine)",
"rotateClockwise": "Döndür (Saat yönünde)",
"flipHorizontally": "Yatay Çevir",
"flipVertically": "Dikey Çevir",
"modifyConfig": "Ayarları Değiştir",
"toggleAutoscroll": "Otomatik kaydırmayı aç/kapat",
"toggleLogViewer": "Günlük Görüntüleyici Aç/Kapa",
"showOptionsPanel": "Ayarlar Panelini Göster",
"modelSelect": "Model Seçin",
"reset": "Sıfırla",
"uploadImage": "Resim Yükle",
"previousImage": "Önceki Resim",
"menu": "Menü",
"showGallery": "Galeriyi Göster"
},
"common": {
"hotkeysLabel": "Kısayol Tuşları",
"themeLabel": "Tema",
"languagePickerLabel": "Dil Seçimi",
"reportBugLabel": "Hata Bildir",
"githubLabel": "Github",
"discordLabel": "Discord",
"settingsLabel": "Ayarlar",
"darkTheme": "Karanlık Tema",
"lightTheme": "Aydınlık Tema",
"greenTheme": "Yeşil Tema",
"oceanTheme": "Okyanus Tema",
"langArabic": "Arapça",
"langEnglish": "İngilizce",
"langDutch": "Hollandaca",
"langFrench": "Fransızca",
"langGerman": "Almanca",
"langItalian": "İtalyanca",
"langJapanese": "Japonca",
"langPolish": "Lehçe",
"langPortuguese": "Portekizce",
"langBrPortuguese": "Portekizcr (Brezilya)",
"langRussian": "Rusça",
"langSimplifiedChinese": "Çince (Basit)",
"langUkranian": "Ukraynaca",
"langSpanish": "İspanyolca",
"txt2img": "Metinden Resime",
"img2img": "Resimden Metine",
"linear": "Çizgisel",
"nodes": "Düğümler",
"postprocessing": "İşlem Sonrası",
"postProcessing": "İşlem Sonrası",
"postProcessDesc2": "Daha gelişmiş özellikler için ve iş akışını kolaylaştırmak için özel bir kullanıcı arayüzü çok yakında yayınlanacaktır.",
"postProcessDesc3": "Invoke AI komut satırı arayüzü, bir çok yeni özellik sunmaktadır.",
"langKorean": "Korece",
"unifiedCanvas": "Akıllı Tuval",
"nodesDesc": "Görüntülerin oluşturulmasında hazırladığımız yeni bir sistem geliştirme aşamasındadır. Bu harika özellikler ve çok daha fazlası için bizi takip etmeye devam edin.",
"postProcessDesc1": "Invoke AI son kullanıcıya yönelik bir çok özellik sunar. Görüntü kalitesi yükseltme, yüz restorasyonu WebUI üzerinden kullanılabilir. Metinden resime ve resimden metne araçlarına gelişmiş seçenekler menüsünden ulaşabilirsiniz. İsterseniz mevcut görüntü ekranının üzerindeki veya görüntüleyicideki görüntüyü doğrudan düzenleyebilirsiniz."
}
}


@@ -0,0 +1 @@
{}


@@ -23,8 +23,7 @@
"dev": "concurrently \"vite dev\" \"yarn run theme:watch\"",
"dev:host": "concurrently \"vite dev --host\" \"yarn run theme:watch\"",
"build": "yarn run lint && vite build",
"api:web": "openapi -i http://localhost:9090/openapi.json -o src/services/api --client axios --useOptions --useUnionTypes --indent 2 --request src/services/fixtures/request.ts",
"api:file": "openapi -i src/services/fixtures/openapi.json -o src/services/api --client axios --useOptions --useUnionTypes --indent 2 --request src/services/fixtures/request.ts",
"typegen": "npx openapi-typescript http://localhost:9090/openapi.json --output src/services/api/schema.d.ts -t",
"preview": "vite preview",
"lint:madge": "madge --circular src/main.tsx",
"lint:eslint": "eslint --max-warnings=0 .",
@@ -54,57 +53,62 @@
]
},
"dependencies": {
"@apidevtools/swagger-parser": "^10.1.0",
"@chakra-ui/anatomy": "^2.1.1",
"@chakra-ui/icons": "^2.0.19",
"@chakra-ui/react": "^2.6.0",
"@chakra-ui/styled-system": "^2.9.0",
"@chakra-ui/theme-tools": "^2.0.16",
"@dagrejs/graphlib": "^2.1.12",
"@chakra-ui/react": "^2.7.1",
"@chakra-ui/styled-system": "^2.9.1",
"@chakra-ui/theme-tools": "^2.0.18",
"@dagrejs/graphlib": "^2.1.13",
"@dnd-kit/core": "^6.0.8",
"@dnd-kit/modifiers": "^6.0.1",
"@emotion/react": "^11.11.1",
"@emotion/styled": "^11.10.6",
"@floating-ui/react-dom": "^2.0.0",
"@fontsource/inter": "^4.5.15",
"@mantine/core": "^6.0.13",
"@mantine/hooks": "^6.0.13",
"@emotion/styled": "^11.11.0",
"@floating-ui/react-dom": "^2.0.1",
"@fontsource-variable/inter": "^5.0.3",
"@fontsource/inter": "^5.0.3",
"@mantine/core": "^6.0.14",
"@mantine/hooks": "^6.0.14",
"@reduxjs/toolkit": "^1.9.5",
"@roarr/browser-log-writer": "^1.1.5",
"chakra-ui-contextmenu": "^1.0.5",
"dateformat": "^5.0.3",
"downshift": "^7.6.0",
"formik": "^2.2.9",
"framer-motion": "^10.12.4",
"formik": "^2.4.2",
"framer-motion": "^10.12.17",
"fuse.js": "^6.6.2",
"i18next": "^22.4.15",
"i18next-browser-languagedetector": "^7.0.1",
"i18next-http-backend": "^2.2.0",
"konva": "^9.0.1",
"i18next": "^23.2.3",
"i18next-browser-languagedetector": "^7.0.2",
"i18next-http-backend": "^2.2.1",
"konva": "^9.2.0",
"lodash-es": "^4.17.21",
"overlayscrollbars": "^2.1.1",
"nanostores": "^0.9.2",
"openapi-fetch": "^0.4.0",
"overlayscrollbars": "^2.2.0",
"overlayscrollbars-react": "^0.5.0",
"patch-package": "^7.0.0",
"query-string": "^8.1.0",
"re-resizable": "^6.9.9",
"react": "^18.2.0",
"react-colorful": "^5.6.1",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.3",
"react-hotkeys-hook": "4.4.0",
"react-i18next": "^12.2.2",
"react-icons": "^4.9.0",
"react-konva": "^18.2.7",
"react-redux": "^8.0.5",
"react-resizable-panels": "^0.0.42",
"react-i18next": "^13.0.1",
"react-icons": "^4.10.1",
"react-konva": "^18.2.10",
"react-redux": "^8.1.1",
"react-resizable-panels": "^0.0.52",
"react-use": "^17.4.0",
"react-virtuoso": "^4.3.5",
"react-zoom-pan-pinch": "^3.0.7",
"reactflow": "^11.7.0",
"react-virtuoso": "^4.3.11",
"react-zoom-pan-pinch": "^3.0.8",
"reactflow": "^11.7.4",
"redux-dynamic-middlewares": "^2.2.0",
"redux-remember": "^3.3.1",
"roarr": "^7.15.0",
"serialize-error": "^11.0.0",
"socket.io-client": "^4.6.0",
"use-image": "^1.1.0",
"socket.io-client": "^4.7.0",
"use-image": "^1.1.1",
"uuid": "^9.0.0",
"zod": "^3.21.4"
},
@@ -115,22 +119,22 @@
"ts-toolbelt": "^9.6.0"
},
"devDependencies": {
"@chakra-ui/cli": "^2.4.0",
"@chakra-ui/cli": "^2.4.1",
"@types/dateformat": "^5.0.0",
"@types/lodash-es": "^4.14.194",
"@types/node": "^18.16.2",
"@types/react": "^18.2.0",
"@types/react-dom": "^18.2.1",
"@types/node": "^20.3.1",
"@types/react": "^18.2.14",
"@types/react-dom": "^18.2.6",
"@types/react-redux": "^7.1.25",
"@types/react-transition-group": "^4.4.5",
"@types/uuid": "^9.0.0",
"@typescript-eslint/eslint-plugin": "^5.59.1",
"@typescript-eslint/parser": "^5.59.1",
"@vitejs/plugin-react-swc": "^3.3.0",
"@types/react-transition-group": "^4.4.6",
"@types/uuid": "^9.0.2",
"@typescript-eslint/eslint-plugin": "^5.60.0",
"@typescript-eslint/parser": "^5.60.0",
"@vitejs/plugin-react-swc": "^3.3.2",
"axios": "^1.4.0",
"babel-plugin-transform-imports": "^2.0.0",
"concurrently": "^8.0.1",
"eslint": "^8.39.0",
"concurrently": "^8.2.0",
"eslint": "^8.43.0",
"eslint-config-prettier": "^8.8.0",
"eslint-plugin-prettier": "^4.2.1",
"eslint-plugin-react": "^7.32.2",
@@ -138,18 +142,20 @@
"form-data": "^4.0.0",
"husky": "^8.0.3",
"lint-staged": "^13.2.2",
"madge": "^6.0.0",
"openapi-types": "^12.1.0",
"madge": "^6.1.0",
"openapi-types": "^12.1.3",
"openapi-typescript": "^6.2.8",
"openapi-typescript-codegen": "^0.24.0",
"postinstall-postinstall": "^2.1.0",
"prettier": "^2.8.8",
"rollup-plugin-visualizer": "^5.9.0",
"terser": "^5.17.1",
"rollup-plugin-visualizer": "^5.9.2",
"terser": "^5.18.1",
"ts-toolbelt": "^9.6.0",
"vite": "^4.3.3",
"vite": "^4.3.9",
"vite-plugin-css-injected-by-js": "^3.1.1",
"vite-plugin-dts": "^2.3.0",
"vite-plugin-eslint": "^1.8.1",
"vite-plugin-node-polyfills": "^0.9.0",
"vite-tsconfig-paths": "^4.2.0",
"yarn": "^1.22.19"
}


@ -1,14 +0,0 @@
diff --git a/node_modules/@chakra-ui/cli/dist/scripts/read-theme-file.worker.js b/node_modules/@chakra-ui/cli/dist/scripts/read-theme-file.worker.js
index 937cf0d..7dcc0c0 100644
--- a/node_modules/@chakra-ui/cli/dist/scripts/read-theme-file.worker.js
+++ b/node_modules/@chakra-ui/cli/dist/scripts/read-theme-file.worker.js
@@ -50,7 +50,8 @@ async function readTheme(themeFilePath) {
project: tsConfig.configFileAbsolutePath,
compilerOptions: {
module: "CommonJS",
- esModuleInterop: true
+ esModuleInterop: true,
+ jsx: 'react'
},
transpileOnly: true,
swc: true


@ -0,0 +1,55 @@
diff --git a/node_modules/openapi-fetch/dist/index.js b/node_modules/openapi-fetch/dist/index.js
index cd4528a..8976b51 100644
--- a/node_modules/openapi-fetch/dist/index.js
+++ b/node_modules/openapi-fetch/dist/index.js
@@ -1,5 +1,5 @@
// settings & const
-const DEFAULT_HEADERS = {
+const CONTENT_TYPE_APPLICATION_JSON = {
"Content-Type": "application/json",
};
const TRAILING_SLASH_RE = /\/*$/;
@@ -29,18 +29,29 @@ export function createFinalURL(url, options) {
}
return finalURL;
}
+function stringifyBody(body) {
+ if (body instanceof ArrayBuffer || body instanceof File || body instanceof DataView || body instanceof Blob || ArrayBuffer.isView(body) || body instanceof URLSearchParams || body instanceof FormData) {
+ return;
+ }
+
+ if (typeof body === "string") {
+ return body;
+ }
+
+ return JSON.stringify(body);
+ }
+
export default function createClient(clientOptions = {}) {
const { fetch = globalThis.fetch, ...options } = clientOptions;
- const defaultHeaders = new Headers({
- ...DEFAULT_HEADERS,
- ...(options.headers ?? {}),
- });
+ const defaultHeaders = new Headers(options.headers ?? {});
async function coreFetch(url, fetchOptions) {
const { headers, body: requestBody, params = {}, parseAs = "json", querySerializer = defaultSerializer, ...init } = fetchOptions || {};
// URL
const finalURL = createFinalURL(url, { baseUrl: options.baseUrl, params, querySerializer });
+ // Stringify body if needed
+ const stringifiedBody = stringifyBody(requestBody);
// headers
- const baseHeaders = new Headers(defaultHeaders); // clone defaults (dont overwrite!)
+ const baseHeaders = new Headers(stringifiedBody ? { ...CONTENT_TYPE_APPLICATION_JSON, ...defaultHeaders } : defaultHeaders); // clone defaults (dont overwrite!)
const headerOverrides = new Headers(headers);
for (const [k, v] of headerOverrides.entries()) {
if (v === undefined || v === null)
@@ -54,7 +65,7 @@ export default function createClient(clientOptions = {}) {
...options,
...init,
headers: baseHeaders,
- body: typeof requestBody === "string" ? requestBody : JSON.stringify(requestBody),
+ body: stringifiedBody ?? requestBody,
});
// handle empty content
// note: we return `{}` because we want user truthy checks for `.data` or `.error` to succeed


@@ -24,16 +24,13 @@
},
"common": {
"hotkeysLabel": "Hotkeys",
"themeLabel": "Theme",
"darkMode": "Dark Mode",
"lightMode": "Light Mode",
"languagePickerLabel": "Language",
"reportBugLabel": "Report Bug",
"githubLabel": "Github",
"discordLabel": "Discord",
"settingsLabel": "Settings",
"darkTheme": "Dark",
"lightTheme": "Light",
"greenTheme": "Green",
"oceanTheme": "Ocean",
"langArabic": "العربية",
"langEnglish": "English",
"langDutch": "Nederlands",
@@ -524,7 +521,8 @@
"initialImage": "Initial Image",
"showOptionsPanel": "Show Options Panel",
"hidePreview": "Hide Preview",
"showPreview": "Show Preview"
"showPreview": "Show Preview",
"controlNetControlMode": "Control Mode"
},
"settings": {
"models": "Models",


@@ -24,7 +24,8 @@ import Toaster from './Toaster';
import DeleteImageModal from 'features/gallery/components/DeleteImageModal';
import { requestCanvasRescale } from 'features/canvas/store/thunks/requestCanvasScale';
import UpdateImageBoardModal from '../../features/gallery/components/Boards/UpdateImageBoardModal';
import { useListModelsQuery } from 'services/apiSlice';
import { useListModelsQuery } from 'services/api/endpoints/models';
import DeleteBoardImagesModal from '../../features/gallery/components/Boards/DeleteBoardImagesModal';
const DEFAULT_CONFIG = {};
@@ -48,7 +49,7 @@ const App = ({
const isApplicationReady = useIsApplicationReady();
const { data: pipelineModels } = useListModelsQuery({
model_type: 'pipeline',
model_type: 'main',
});
const { data: controlnetModels } = useListModelsQuery({
model_type: 'controlnet',
@@ -158,6 +159,7 @@ const App = ({
</Grid>
<DeleteImageModal />
<UpdateImageBoardModal />
<DeleteBoardImagesModal />
<Toaster />
<GlobalHotkeys />
</>

Some files were not shown because too many files have changed in this diff