Commit Graph

3777 Commits

Author SHA1 Message Date
Lincoln Stein
ab05144716
Change where !replay looks for its infile (#3129)
!fetch puts its output file into the output directory; it may be
beneficial to have !replay look in the output directory as well.
2023-04-06 22:02:06 -04:00
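A rough sketch of the lookup order this change describes (hypothetical helper and parameter names, not the actual CLI code):

```
from pathlib import Path
from typing import Optional

def find_replay_infile(infile: str, outdir: str) -> Optional[Path]:
    """Resolve the file given to !replay: try the path as given,
    then fall back to the output directory, where !fetch writes its file."""
    candidate = Path(infile).expanduser()
    if candidate.exists():
        return candidate
    fallback = Path(outdir).expanduser() / candidate.name
    return fallback if fallback.exists() else None
```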
Lincoln Stein
aeb4914e67
Merge branch 'main' into replay-file_path 2023-04-06 21:45:23 -04:00
blessedcoolant
76bcd4d44f
Fix typo (#3133)
'hotdot' to 'hotdog'; the world's least important PR :)
2023-04-07 12:38:05 +12:00
Steven Frank
50f5e1bc83
Fix typo
'hotdot' to 'hotdog'; the world's least important PR :)
2023-04-06 16:47:57 -07:00
Thomas
7268131f57
change where !replay looks for its infile
!fetch puts its output file into the output directory; it may be beneficial to have !replay look in the output directory as well.
2023-04-06 08:14:11 -04:00
Kyle Schouviller
85b020f76c
[nodes] Add latent nodes, storage, and fix iteration bugs (#3091)
* Add latents nodes.
* Fix iteration expansion.
* Add collection generator nodes, math nodes.
* Add noise node.
* Add some graph debug commands to the CLI.
* Fix negative id linking in CLI.
* Fix a CLI bug with multiple links per node.
2023-04-06 04:06:05 +00:00
Kyle Schouviller
a7833cc9a9 [api] Add models router and list model API. 2023-04-05 23:59:07 -04:00
Matthias Wild
919294e977
fix build-container.yml (#3117)
Add permission to write packages to GITHUB_TOKEN
2023-04-06 00:25:00 +02:00
mauwii
7640acfb1f
update build-container.yml
- add packages write permission
2023-04-05 15:44:26 +02:00
psychedelicious
aed9ecef2a feat(nodes): add thumbnail generation to DiskImageStorage 2023-04-05 08:22:23 +10:00
Lincoln Stein
18cddd7972
Correct link to the PyTorch installer for Linux ROCm (#3084)
Correct link to the PyTorch installer for Linux ROCm
2023-04-04 17:40:42 -04:00
Lincoln Stein
e6b25f4ae3
Merge branch 'main' into patch-1 2023-04-04 17:40:12 -04:00
Lincoln Stein
d1c0050e65
fix(nodes): fix typo in list_sessions handler (#3109)
The typo happened not to affect functionality: when `query==""`, the
handler called `search()`, which matched everything because the query was
empty and then paginated the results, so it behaved the same as `list()`.

Fixed anyway.
2023-04-03 21:24:48 -04:00
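A self-contained illustration of why the typo was harmless; the store and method names here are assumptions, not InvokeAI's actual code:

```
# Minimal stand-in for the session storage behind list_sessions.
class FakeSessionStore:
    def __init__(self, sessions):
        self.sessions = sessions

    def list(self, page, per_page):
        return self.sessions[page * per_page:(page + 1) * per_page]

    def search(self, query, page, per_page):
        hits = [s for s in self.sessions if query in s]  # "" matches everything
        return hits[page * per_page:(page + 1) * per_page]

store = FakeSessionStore([f"session-{i}" for i in range(25)])

# The typo routed query == "" to search() instead of list(); since an empty
# query matches every session and pagination is applied identically, the
# two calls return the same page.
assert store.search("", 0, 10) == store.list(0, 10)
```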
psychedelicious
ecdfa136a0 fix(nodes): fix typo in list_sessions handler 2023-04-04 00:34:32 +10:00
blessedcoolant
5cd513ee63
[deps] bump compel version to fix crash on invalid (auto111) syntax (#3107)
Currently, if a user inputs e.g. `happy (camper:0.3)`, it gets parsed
incorrectly, which causes a crash when it appears in the negative prompt.
Bumping to compel 1.0.5 fixes the parser and avoids this (note that the
weight is parsed as plain text; it is not converted to proper Invoke syntax)
2023-04-04 02:30:17 +12:00
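For context, a tiny regex sketch of the A1111-style attention syntax at issue (illustrative only; this is not compel's parser):

```
import re

# Matches A1111-style "(text:weight)" groups such as "(camper:0.3)".
A1111_WEIGHT = re.compile(r"\(([^():]+):([0-9]*\.?[0-9]+)\)")

def find_a1111_weights(prompt: str):
    """Return the (text, weight) pairs written in A1111 syntax."""
    return [(m.group(1), float(m.group(2))) for m in A1111_WEIGHT.finditer(prompt)]

print(find_a1111_weights("happy (camper:0.3)"))  # [('camper', 0.3)]
```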
blessedcoolant
ab45086546
Merge branch 'main' into deps_bump_compel 2023-04-04 02:05:40 +12:00
psychedelicious
77ba7359f4 fix(nodes): commit changes to db 2023-04-03 19:09:49 +10:00
Damian Stewart
8cbe2e14d9 bump compel version to fix crash on invalid (auto111) syntax 2023-04-03 10:37:01 +02:00
creachec
ee86eedf01
Correct link to the PyTorch installer for Linux ROCm
Correct link to the PyTorch installer for Linux ROCm
2023-03-31 17:22:00 -03:00
Lincoln Stein
c4e6511a59
Add support for yet another TI embedding format (main version) (#3050)
- This PR adds support for embedding files that contain a single key
"emb_params". The only example I know of this format is the
"EasyNegative" embedding on HuggingFace, but there are certainly others.

- This PR also adds support for loading embedding files that have been
saved in safetensors format.

- It also cleans up the code so that the logic of probing for and
selecting the right format parser is clear.

- This is the same as #3045, which is on the 2.3 branch.
2023-03-31 03:57:57 -04:00
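A simplified sketch of the probing described above; the helper names are assumptions, and the real model manager handles more cases:

```
from pathlib import Path
import torch
from safetensors.torch import load_file

def load_embedding_file(path: str) -> dict:
    """Load a textual-inversion file, whether torch-pickled or safetensors."""
    p = Path(path)
    if p.suffix == ".safetensors":
        return load_file(p, device="cpu")
    return torch.load(p, map_location="cpu")

def extract_embedding(data: dict):
    """Pick the right parser by inspecting which keys are present."""
    if "string_to_param" in data:      # classic SD textual-inversion format
        return next(iter(data["string_to_param"].values()))
    if "emb_params" in data:           # single-key format used by e.g. EasyNegative
        return data["emb_params"]
    if len(data) == 1:                 # bare {token: tensor} files
        return next(iter(data.values()))
    raise ValueError(f"unrecognized embedding format: keys={list(data)}")
```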
Lincoln Stein
44843be4c8
Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-30 23:16:52 -04:00
Lincoln Stein
054e963bef
add basic autocomplete functionality to node cli (#3035)
- Commands, invocations and their parameters will now autocomplete using
introspection.
- Two types of parameter *arguments* will also autocomplete:
  - --sampler_name  will autocomplete the scheduler name
  - --model will autocomplete the model name
- There don't seem to be commands for reading/writing image files yet,
so path autocompletion is not implemented
2023-03-30 08:25:36 -04:00
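A minimal sketch of tab completion with Python's `readline`; the command, scheduler, and model names below are placeholders, not the node CLI's actual introspected lists:

```
import readline

# Placeholder data; the real CLI derives these by introspecting the
# registered invocations and the model manager.
COMMANDS = ["txt2img", "img2img", "history", "exit", "--sampler_name", "--model"]
SCHEDULERS = ["ddim", "euler", "euler_a", "lms"]
MODELS = ["stable-diffusion-1.5", "stable-diffusion-2.1"]

def complete(text, state):
    words = readline.get_line_buffer().split()
    # Figure out the word preceding the one being completed.
    if text:
        prev = words[-2] if len(words) >= 2 else ""
    else:
        prev = words[-1] if words else ""
    if prev == "--sampler_name":
        options = SCHEDULERS      # --sampler_name completes to a scheduler name
    elif prev == "--model":
        options = MODELS          # --model completes to a model name
    else:
        options = COMMANDS
    matches = [o for o in options if o.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(complete)
readline.parse_and_bind("tab: complete")
```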
Lincoln Stein
afb66a7884
Merge branch 'main' into feat/node-cli-autocompleter 2023-03-30 07:51:51 -04:00
Lincoln Stein
b9df9e26f2
Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-30 07:51:23 -04:00
Lincoln Stein
25ae36ceb5
I18n build mode (#3051)
Add build mode option to bundle the English translation with the UI
2023-03-29 22:26:45 -04:00
Lincoln Stein
3ae8daedaa
Merge branch 'main' into i18n-build-mode 2023-03-29 22:26:17 -04:00
Lincoln Stein
e11c1d66ab handle multiple tokens and embeddings in single file 2023-03-29 22:05:06 -04:00
Lincoln Stein
b913e1e11e
improve importation and conversion of legacy checkpoint files (#3053)
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

## Model configuration file selection

To improve the user experience, the model manager's `heuristic_import()`
method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file is provided, the method looks for a .yaml file in the
same directory as the model which bears the same basename. e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
The yaml file is then used as the configuration file for importation and
conversion.

3. If no such file is found, then the method opens up the checkpoint and
probes it to determine whether it is V1, V1-inpaint or V2. If it is a V1
format, then the appropriate v1-inference.yaml config file is used.
Unfortunately there are two V2 variants that cannot be distinguished by
introspection.

4. If the probe algorithm is unable to determine the model type, then
its last-ditch effort is to execute an optional callback function that
can be provided by the caller. This callback, named
`config_file_callback`, receives the path to the legacy checkpoint and
returns the path to the config file to use. The CLI uses this to put up a
multiple choice prompt to the user. The WebUI **could** use this to
prompt the user to choose from a radio-button selection.

5. If the config file cannot be determined, then the import is
abandoned.

## Custom VAE Selection

The user can attach a custom VAE to the imported and converted model by
copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file can
be deleted after conversion.

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.

Note that this is the same fix that was applied to the 2.3 branch in
#3043. This applies to `main`.
2023-03-29 17:22:15 -04:00
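A condensed sketch of the selection order described above; `probe_model_type` is a stub standing in for the real checkpoint probe, and the paths are illustrative:

```
from pathlib import Path
from typing import Callable, Optional

def probe_model_type(checkpoint: Path) -> str:
    """Stand-in for the real probe, which inspects the checkpoint's contents."""
    return "unknown"

def choose_config_file(
    checkpoint: Path,
    config_file: Optional[Path] = None,
    config_file_callback: Optional[Callable[[Path], Optional[Path]]] = None,
) -> Optional[Path]:
    # 1. An explicitly supplied config file always wins.
    if config_file is not None:
        return config_file
    # 2. A sibling .yaml with the same basename, e.g. my-new-model.yaml.
    sibling = checkpoint.with_suffix(".yaml")
    if sibling.exists():
        return sibling
    # 3. Probe the checkpoint; V1 formats map to v1-inference.yaml.
    if probe_model_type(checkpoint) == "v1":
        return Path("configs/stable-diffusion/v1-inference.yaml")
    # 4. Last resort: ask the caller (the CLI puts up a multiple choice prompt).
    if config_file_callback is not None:
        return config_file_callback(checkpoint)
    # 5. Otherwise the import is abandoned.
    return None

def find_sibling_vae(checkpoint: Path) -> Optional[Path]:
    """Look for my-new-model.vae.pt / .vae.ckpt / .vae.safetensors beside the model."""
    for ext in (".vae.pt", ".vae.ckpt", ".vae.safetensors"):
        candidate = checkpoint.parent / (checkpoint.stem + ext)
        if candidate.exists():
            return candidate
    return None
```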
Lincoln Stein
3c4b6d5735
Merge branch 'main' into enhance/heuristic-import-improvements 2023-03-29 16:54:43 -04:00
Mary Hipp Rogers
e6123eac19
Merge branch 'main' into i18n-build-mode 2023-03-29 05:33:14 -07:00
Lincoln Stein
30ca25897e
Fix bugs in online ckpt conversion of 2.0 models (#3057)
## Enable the on-the-fly conversion of models based on SD 2.0/2.1 into
diffusers

This commit fixes bugs related to the on-the-fly conversion and loading
of legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
on-the-fly using --ckpt_convert, generation would crash with a precision
incompatibility error. This problem has been found and fixed.
2023-03-28 23:34:53 -04:00
Lincoln Stein
abaee6b9ed
Merge branch 'main' into feat/node-cli-autocompleter 2023-03-28 23:32:10 -04:00
Lincoln Stein
4d7c9e1ab7
Merge branch 'main' into bugfix/convert-2.0-models 2023-03-28 23:01:36 -04:00
Eugene
cc5687f26c [nodes] downgrade fastapi+uvicorn to fix openapi schema 2023-03-28 22:53:20 -04:00
Lincoln Stein
cdb3616dca
Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-28 21:03:06 -04:00
Mary Hipp Rogers
78e76f26f9
Merge branch 'main' into i18n-build-mode 2023-03-28 11:04:32 -04:00
Lincoln Stein
9a7580dedd fix bugs in online ckpt conversion of 2.0 models
This commit fixes bugs related to the on-the-fly conversion and loading of
legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
  on-the-fly using --ckpt_convert, generation would crash with a
  precision incompatibility error.
2023-03-28 00:17:20 -04:00
Lincoln Stein
dc2da8cff4
Doc: updating ROCm version in documentation (#3041)
The PyTorch ROCm version in the documentation is outdated (`rocm5.2`),
which leads to errors during the installation of InvokeAI.

This PR updates the documentation to the latest PyTorch ROCm version,
`5.4.2`.
2023-03-27 22:37:43 -04:00
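For reference, after this change the documented install command takes roughly this form (assuming the standard PyTorch ROCm wheel index):

```
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```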
Lincoln Stein
019a9f0329 address change requests in PR
1. Prompt has changed to "invoke> ".
2. Function to initialize the autocompleter has been renamed "set_autocompleter()"
2023-03-27 12:20:24 -04:00
Lincoln Stein
fe5d9ad171 improve importation and conversion of legacy checkpoint files
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

To improve the user experience, the model manager's
`heuristic_import()` method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file is provided, the method looks for a .yaml file in the
same directory as the model which bears the same basename. e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
   The yaml file is then used as the configuration file for
   importation and conversion.

3. If no such file is found, then the method opens up the checkpoint
   and probes it to determine whether it is V1, V1-inpaint or V2.
   If it is a V1 format, then the appropriate v1-inference.yaml config
   file is used. Unfortunately there are two V2 variants that cannot be
   distinguished by introspection.

4. If the probe algorithm is unable to determine the model type, then its
   last-ditch effort is to execute an optional callback function that can
   be provided by the caller. This callback, named `config_file_callback`,
   receives the path to the legacy checkpoint and returns the path to the
   config file to use. The CLI uses this to put up a multiple choice prompt to
   the user. The WebUI **could** use this to prompt the user to choose
   from a radio-button selection.

5. If the config file cannot be determined, then the import is abandoned.

The user can attach a custom VAE to the imported and converted model
by copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file
can be deleted after conversion.

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.
2023-03-27 11:27:45 -04:00
Mary Hipp
dbc0093b31 Merge remote-tracking branch 'origin' into i18n-build-mode 2023-03-27 10:57:41 -04:00
Mary Hipp
92e512b8b6 add package mode option for i18next 2023-03-27 10:49:52 -04:00
Lincoln Stein
abe4dc8ac1 Add support for yet another textual inversion embedding format
- This PR adds support for embedding files that contain a single key
  "emb_params". The only example I know of this format is the
  "EasyNegative" embedding on HuggingFace, but there are certainly
  others.

- This PR also adds support for loading embedding files that have been
  saved in safetensors format.

- It also cleans up the code so that the logic of probing for and
  selecting the right format parser is clear.
2023-03-27 09:39:03 -04:00
Lincoln Stein
dc14701d20
Merge branch 'main' into feat/node-cli-autocompleter 2023-03-26 23:46:10 -04:00
Tom Gouville
737e0f3085
doc: fixing error in rocm version 2023-03-26 12:40:20 +02:00
Tom Gouville
81b7ea4362
doc: updating ROCm version for pip install 2023-03-26 12:32:12 +02:00
blessedcoolant
09dfde0ba1
fix(ui): fix viewer tooltip localisation strings (#3037)
fixes #2923
2023-03-26 20:35:52 +13:00
blessedcoolant
3ba7e966b5
Merge branch 'main' into fix/ui/viewer-localisation 2023-03-26 20:35:12 +13:00
blessedcoolant
a1cd4834d1
nodes: add cancelation, updated progress callback, typing fixes (#3036)
keeping `main` up to date with my api nodes branch:
- bd7e515290: [nodes] Add cancelation to the API @Kyle0654
- 5fe38f7: fix(backend): simple typing fixes
  - just picking some low-hanging fruit to improve IDE hinting
- c34ac91: fix(nodes): fix cancel; fix callback for img2img, inpaint
  - makes node cancelation immediate, fixes progress images on nodes, and fixes callbacks for img2img/inpaint
- 4221cf7: fix(nodes): fix schema generation for output classes
  - did this previously for another class; node outputs needed to not be optional
2023-03-26 20:34:27 +13:00
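A generic sketch of immediate cancelation plus a per-step progress callback, using `threading.Event`; this is illustrative only, not the actual nodes API:

```
import threading
import time

def run_invocation(steps: int, cancel_event: threading.Event, on_progress=None):
    """Step loop that reports progress and stops as soon as cancelation is requested."""
    for step in range(steps):
        if cancel_event.is_set():
            return {"status": "canceled", "completed_steps": step}
        time.sleep(0.01)                  # stand-in for one denoising step
        if on_progress is not None:
            on_progress(step + 1, steps)  # progress callback fires every step
    return {"status": "done", "completed_steps": steps}

cancel = threading.Event()
worker = threading.Thread(target=run_invocation, args=(20, cancel, print))
worker.start()
cancel.set()          # takes effect before the next step, not at the end of the run
worker.join()
```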
psychedelicious
a724038dc6 fix(ui): fix viewer tooltip localisation strings
fixes #2923
2023-03-26 17:43:00 +11:00