Commit Graph

3490 Commits

blessedcoolant
6e0c6d9cc9
perf(invoke_ai_web_server): encode intermediate result previews as jpeg (#2817)
For size savings of about 80%; JPEG encoding is still plenty fast.
2023-02-26 18:47:51 +13:00
Kevin Turner
a3076cf951 perf(invoke_ai_web_server): encode intermediate result previews as jpeg
For size savings of about 80%; JPEG encoding is still plenty fast.
2023-02-25 21:23:25 -08:00
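A minimal sketch of the idea in these two commits: serializing an intermediate preview as JPEG rather than PNG trades a little fidelity for a much smaller payload. The helper below is illustrative only and is not the web server's actual code; the function name and quality setting are assumptions.
```
# Illustrative only: encode a PIL image as JPEG bytes for a lightweight preview.
import io

from PIL import Image


def encode_preview_jpeg(image: Image.Image, quality: int = 90) -> bytes:
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)  # JPEG has no alpha channel
    return buf.getvalue()
```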
blessedcoolant
6696882c71
doc(invoke_ai_web_server): put docstrings inside their functions (#2816)
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-26 18:20:10 +13:00
Kevin Turner
17b039e85d doc(invoke_ai_web_server): put docstrings inside their functions
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-25 20:21:47 -08:00
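For reference, a tiny example of the convention this commit applies (from the linked Python tutorial): the docstring is the first statement inside the function body. The function name here is made up for illustration.
```
def generate_preview(step):
    """Emit a progress preview for the given step."""
    ...
```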
mauwii
81539e6ab4
Merge remote-tracking branch 'upstream/main' into dev/docker/separate-req-inst 2023-02-26 00:55:03 +01:00
mauwii
92304b9f8a
remove pip-tools, still split requirements install
- also use user:group for definitions
- add `--platform=${TARGETPLATFORM}` to base
2023-02-26 00:53:43 +01:00
mauwii
ec1de5ae8b
more detailed volume parameters 2023-02-26 00:51:30 +01:00
mauwii
49198a61ef
enable BuildKit in env.sh 2023-02-26 00:50:13 +01:00
blessedcoolant
c22d529528
Add node-based invocation system (#1650)
This PR adds the core of the node-based invocation system first
discussed in https://github.com/invoke-ai/InvokeAI/discussions/597 and
implements it through a basic CLI and API. This supersedes #1047, which
was too far behind to rebase.

## Architecture

### Invocations
The core of the new system is **invocations**, found in
`/ldm/invoke/app/invocations`. These represent individual nodes of
execution, each with inputs and outputs. Core invocations are already
implemented (`txt2img`, `img2img`, `upscale`, `face_restore`) as well as
a debug invocation (`show_image`). To implement a new invocation, all
that is required is to add a new implementation in this folder (there is
a markdown document describing the specifics, though it is slightly
out-of-date).
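As an illustration of the invocation concept described above, a node with declared inputs that produces typed outputs from an `invoke()` method, here is a self-contained toy sketch. It deliberately does not use the real base classes from `/ldm/invoke/app/invocations`; every name below is made up.
```
# Toy sketch of the invocation idea: typed inputs, an invoke() method, typed outputs.
from dataclasses import dataclass


@dataclass
class ImageOutput:
    image_name: str


@dataclass
class UpscaleInvocation:
    id: str
    image_name: str
    level: int = 2

    def invoke(self) -> ImageOutput:
        # A real node would call into the generation backend here.
        return ImageOutput(image_name=f"{self.image_name}_x{self.level}")
```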

### Sessions
Invocations and links between them are maintained in a **session**.
These can be queued for invocation (either the next ready node, or all
nodes). Some notes:
* Sessions may be added to at any time (including after invocation), but
may not be modified.
* Links are always added with a node, and are always links from existing
nodes to the new node. These links can be relative "history" links, e.g.
`-1` to link from a previously executed node, and can link either
specific outputs, or can opportunistically link all matching outputs by
name and type by using `*`.
* There are no iteration/looping constructs. Most needs for this could
be solved by either duplicating nodes or cloning sessions. This is open
for discussion, but is a difficult problem to solve in a way that
doesn't make the code even more complex/confusing (especially regarding
node ids and history).
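To make the linking rules above concrete, here is a hedged sketch of what a session's node-and-link data might look like; the field names and overall shape are assumptions for illustration, not the schema introduced by this PR.
```
# Hypothetical layout of a session graph; field names are illustrative only.
session = {
    "nodes": {
        "1": {"type": "txt2img", "prompt": "a cat eating sushi", "steps": 20},
        "2": {"type": "upscale"},
    },
    "links": [
        # "*" opportunistically links all matching outputs of node "1"
        # to inputs of node "2" by name and type.
        {"from_node": "1", "from_field": "*", "to_node": "2", "to_field": "*"},
    ],
}
```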

### Services
These make up the core of the invocation system, found in
`/ldm/invoke/app/services`. One of the key design philosophies here is
that most components should be replaceable when possible. For example,
if someone wants to use cloud storage for their images, they should be
able to replace the image storage service easily.

The services are broken down as follows (several of these are
intentionally implemented with an initial simple/naïve approach):
* Invoker: Responsible for creating and executing **sessions** and
managing services used to do so.
* Session Manager: Manages session history. An on-disk implementation is
provided, which stores sessions as json files on disk, and caches
recently used sessions for quick access.
* Image Storage: Stores images of multiple types. An on-disk
implementation is provided, which stores images on disk and retains
recently used images in an in-memory cache.
* Invocation Queue: Used to queue invocations for execution. An
in-memory implementation is provided.
* Events: An event system, primarily used with socket.io to support
future web UI integration.
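A short sketch of the "replaceable component" philosophy described above: code is written against a small storage interface, so the on-disk implementation could later be swapped for a cloud-backed one. Class and method names are illustrative, not the actual services API.
```
# Illustrative only: a minimal storage interface plus an on-disk implementation.
from abc import ABC, abstractmethod
from pathlib import Path


class ImageStorageBase(ABC):
    @abstractmethod
    def get(self, name: str) -> bytes: ...

    @abstractmethod
    def save(self, name: str, data: bytes) -> None: ...


class DiskImageStorage(ImageStorageBase):
    def __init__(self, root: Path) -> None:
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)

    def get(self, name: str) -> bytes:
        return (self.root / name).read_bytes()

    def save(self, name: str, data: bytes) -> None:
        (self.root / name).write_bytes(data)
```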

## Apps

Apps are available through the `/scripts/invoke-new.py` script (to-be
integrated/renamed).

### CLI
```
python scripts/invoke-new.py
```

Implements a simple CLI. The CLI creates a single session, and
automatically links all inputs to the previous node's output. Commands
are automatically generated from all invocations, with command options
being automatically generated from invocation inputs. Help is also
available for the CLI and for each command, and is very verbose.
Additionally, the CLI supports command piping for single-line entry of
multiple commands. Example:

```
> txt2img --prompt "a cat eating sushi" --steps 20 --seed 1234 | upscale | show_image
```

### API
```
python scripts/invoke-new.py --api --host 0.0.0.0
```

Implements an API using FastAPI with Socket.io support for signaling.
API documentation is available at `http://localhost:9090/docs` or
`http://localhost:9090/redoc`. This includes OpenAPI schema for all
available invocations, session interaction APIs, and image APIs.
Socket.io signals are per-session, and can be subscribed to by session
id. These aren't currently auto-documented, though the code for event
emission is centralized in `/ldm/invoke/app/services/events.py`.

A very simple test html and script are available at
`http://localhost:9090/static/test.html`. This demonstrates creating a
session from a graph, invoking it, and receiving signals from Socket.io.
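A hedged sketch of listening for those per-session signals with the `python-socketio` client library. The event name and subscribe payload below are assumptions for illustration; the real signals live in `/ldm/invoke/app/services/events.py` as noted above.
```
# Event name and payload are assumptions; see events.py for the real signals.
import socketio

sio = socketio.Client()


@sio.on("invocation_complete")  # hypothetical event name
def on_invocation_complete(data):
    print("session", data.get("session_id"), "finished a node:", data)


sio.connect("http://localhost:9090")
sio.emit("subscribe", {"session": "<session-id>"})  # hypothetical subscription message
sio.wait()
```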

## What's left?

* There are a number of features not currently covered by invocations. I
kept the set of invocations small during core development in order to
simplify refactoring as I went. Now that the invocation code has
stabilized, I'd love some help filling those out!
* There's no image metadata generated. It would be fairly
straightforward (and would make good sense) to serialize either a
session and node reference into an image, or the entire node into the
image. There are a lot of questions to answer around source images,
linked images, etc. though. This history is all stored in the session as
well, and with complex sessions, the metadata in an image may lose its
value. This needs some further discussion.
* We need a list of features (both current and future) that would be
difficult to implement without looping constructs so we can have a good
conversation around it. I'm really hoping we can avoid needing
looping/iteration in the graph execution, since it'll necessitate
separating an execution of a graph into its own concept/system, and will
further complicate the system.
* The API likely needs further filling out to support the UI. I think
using the new API for the current UI is possible, and potentially
interesting, since it could work like the new/demo CLI in a "single
operation at a time" workflow. I don't know how compatible that will be
with our UI goals though. It would be nice to support only a single API
though.
* Deeper separation of systems. I intentionally tried to not touch
Generate or other systems too much, but a lot could be gained by
breaking those apart. Even breaking apart Args into two pieces (command
line arguments and the parser for the current CLI) would make it easier
to maintain. This is probably in the future though.
2023-02-26 12:25:41 +13:00
mauwii
8c5773abc1
add a workflow to close stale issues
with values set as discussed in discord
2023-02-25 13:20:05 +01:00
Kyle Schouviller
cd98d88fe7 [nodes] Removed InvokerServices, simplying service model 2023-02-24 20:11:28 -08:00
Kyle Schouviller
34e3aa1f88 parent 9eed1919c2
author Kyle Schouviller <kyle0654@hotmail.com> 1669872800 -0800
committer Kyle Schouviller <kyle0654@hotmail.com> 1676240900 -0800

Adding base node architecture

Fix type annotation errors

Runs and generates, but breaks in saving session

Fix default model value setting. Fix deprecation warning.

Fixed node api

Adding markdown docs

Simplifying Generate construction in apps

[nodes] A few minor changes (#2510)

* Pin api-related requirements

* Remove confusing extra CORS origins list

* Adds response models for HTTP 200

[nodes] Adding graph_execution_state to soon replace session. Adding tests with pytest.

Minor typing fixes

[nodes] Fix some small output query hookups

[node] Fixing some additional typing issues

[nodes] Move and expand graph code. Add base item storage and sqlite implementation.

Update startup to match new code

[nodes] Add callbacks to item storage

[nodes] Adding an InvocationContext object to use for invocations to provide easier extensibility

[nodes] New execution model that handles iteration

[nodes] Fixing the CLI

[nodes] Adding a note to the CLI

[nodes] Split processing thread into separate service

[node] Add error message on node processing failure

Removing old files and duplicated packages

Adding python-multipart
2023-02-24 18:57:02 -08:00
psychedelicious
49ffb64ef3
ui: translations update from weblate (#2804)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-02-25 10:09:37 +11:00
Gabriel Mackievicz Telles
ec14e2db35
translationBot(ui): update translation (Portuguese (Brazil))
Currently translated at 91.8% (431 of 469 strings)

Co-authored-by: Gabriel Mackievicz Telles <telles.gabriel@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translation: InvokeAI/Web UI
2023-02-24 17:54:54 +01:00
Jeff Mahoney
5725fcb3e0
translationBot(ui): added translation (Romanian)
Co-authored-by: Jeff Mahoney <jbmahoney@gmail.com>
2023-02-24 17:54:54 +01:00
gallegonovato
1447b6df96
translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (469 of 469 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-02-24 17:54:54 +01:00
Lincoln Stein
e700da23d8
Sync main with v2.3.1 (#2792)
This PR will bring `main` up to date with released v2.3.1
2023-02-24 11:54:46 -05:00
Lincoln Stein
b4ed8bc47a
Merge branch 'main' into v2.3 2023-02-24 10:52:03 -05:00
Lincoln Stein
bd85e00530
Last PR needed for v2.3.1 (#2788)
- Add curated set of starter models based on team discussion. The final
list of starter models can be found in
`invokeai/configs/INITIAL_MODELS.yaml`

- To test model installation, I selected and installed all the models on
the list. This led to my discovering that when there are no more starter
models to display, the console front end crashes. So I made a fix to
this in which the entire starter model selection is no longer shown.

- Update model table in 050_INSTALL_MODELS.md

- Add guide to dealing with low-memory situations
- Version is now `v2.3.1`
2023-02-24 10:31:38 -05:00
Lincoln Stein
4e446130d8
Merge branch 'v2.3' into enhance/curated-2.3.1-models 2023-02-24 10:30:42 -05:00
Lincoln Stein
4c93b514bb bump version to final 2.3.1 2023-02-24 10:04:41 -05:00
Lincoln Stein
d078941316 add low memory troubleshooting guide 2023-02-24 10:04:06 -05:00
Lincoln Stein
230d3a496d document starter models
- add new script `scripts/make_models_markdown_table.py` that parses
  INITIAL_MODELS.yaml and creates markdown table for the model installation
  documentation file

- update 050_INSTALLING_MODELS.md with above table, and add a warning
  about additional license terms that apply to some of the models.
2023-02-24 09:33:07 -05:00
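A hedged sketch of what a script like `scripts/make_models_markdown_table.py` might do: load `INITIAL_MODELS.yaml` and print a markdown table. The YAML field names used here (`description`, `recommended`) and the top-level layout are assumptions for illustration.
```
# Assumed YAML fields and layout; the real INITIAL_MODELS.yaml may differ.
from pathlib import Path

import yaml

models = yaml.safe_load(Path("invokeai/configs/INITIAL_MODELS.yaml").read_text())

print("| Model | Description | Recommended |")
print("|---|---|---|")
for name, info in models.items():
    print(f"| {name} | {info.get('description', '')} | {'yes' if info.get('recommended') else 'no'} |")
```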
Jonathan
ec2890c19b
Run garbage collection to allow the CUDA cache to completely empty. (#2791) 2023-02-24 08:48:54 -05:00
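The commit title describes a simple pattern; a minimal sketch of it, assuming PyTorch:
```
# Run Python garbage collection first so tensors kept alive only by
# unreachable references are freed, then release PyTorch's cached CUDA blocks.
import gc

import torch


def free_cuda_memory() -> None:
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```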
Lincoln Stein
a540cc537f add curated set of HuggingFace diffusers models for 2.3.1 release
- Final list can be found in invokeai/configs/INITIAL_MODELS.yaml

- After installing all the models, I discovered a bug in the file
  selection form that caused a crash when no remaining uninstalled
  models remained. So had to fix this.
2023-02-24 00:53:48 -05:00
Lincoln Stein
39c57aa358
fix generate backend to generate "accurate" intermediate images (#2787)
The sample_to_image method in `ldm.invoke.generator.base` was still
using ckpt-era code. As a result when the WebUI was set to show
"accurate" intermediate images, there'd be a crash. This PR corrects the
problem.

- Closes #2784
- Closes #2775
2023-02-24 00:33:29 -05:00
mauwii
01f8c37bd3
rename amd flavor to rocm 2023-02-24 06:20:44 +01:00
Lincoln Stein
2d990c1f54
Merge branch 'v2.3' into bugfix/webui-accurate-intermediates 2023-02-23 22:07:18 -05:00
Lincoln Stein
7fb2da8741 fix generate backend to generate "accurate" intermediate images
- Closes #2784
- Closes #2775
2023-02-23 22:03:28 -05:00
mauwii
b7718985d5
update build-container.yml
- add branches 'dev/ci/docker/*' and 'dev/docker/*'
2023-02-24 03:58:22 +01:00
Lincoln Stein
c69fcb1c10
fix ckpt_convert module to work with dreambooth v2 models (#2776)
- Discord member @marcus.llewellyn reported that some civitai
2.1-derived checkpoints were not converting properly (probably
dreambooth-generated):
https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070

- @blessedcoolant tracked this down to a missing key that was used to
derive vector length of the CLIP model used by fetching the second
dimension of the tensor at "cond_stage_model.model.text_projection".

- On inspection, I found that the same second dimension can be recovered
from key 'cond_stage_model.model.ln_final.bias', and used that instead. I
hope this is correct; tested on multiple v1, v2 and inpainting models
and they converted correctly.

- While debugging this, I found and fixed several other issues:

- model download script was not pre-downloading the OpenCLIP
text_encoder or text_tokenizer. This is fixed.
- got rid of legacy code in `ckpt_to_diffuser.py` and replaced with
calls into `model_manager`
  - more consistent status reporting in the CLI.
2023-02-23 21:51:57 -05:00
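A hedged sketch of the workaround described above: if `cond_stage_model.model.text_projection` is missing from a checkpoint, the same CLIP width can be read from the `ln_final.bias` tensor instead. This assumes the usual `state_dict` wrapper in the checkpoint file; variable names are not taken from `ckpt_to_diffuser.py`.
```
# Illustration only; see ckpt_to_diffuser.py for the real logic.
import torch

checkpoint = torch.load("model.ckpt", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)  # assumes the usual wrapper

key = "cond_stage_model.model.text_projection"
if key in state_dict:
    clip_width = state_dict[key].shape[1]  # second dimension of the projection
else:
    # ln_final.bias is 1-D; its length matches that second dimension.
    clip_width = state_dict["cond_stage_model.model.ln_final.bias"].shape[0]
```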
mauwii
90cda11868
separate installation of requirements and source
this should greatly speed up rebuilds of the image when:
- version did not change
- requirements didn't change
2023-02-24 03:51:18 +01:00
Lincoln Stein
0982548e1f
Merge branch 'v2.3' into bugfix/v2-model-conversion 2023-02-23 21:27:49 -05:00
mauwii
5cb877e096
reorder .dockerignore 2023-02-24 02:53:27 +01:00
Matthias Wild
11a29fdc4d
fix python 3.9 compatibility (#2780)
without this change, the project can be installed on 3.9 but not used.
This also fixes the container images.

Maybe we should re-enable Python 3.9 checks which would have prevented
this.
2023-02-24 00:49:25 +01:00
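For context, one common way a codebase installs on Python 3.9 but fails at import time is runtime-evaluated PEP 604 unions (`str | None`), which require 3.10. The snippet below only illustrates that class of problem; it is not necessarily the exact change made in this commit.
```
from typing import Optional


def load_model(path: Optional[str] = None) -> None:  # works on Python 3.9
    ...


# def load_model(path: str | None = None) -> None:   # TypeError at import time on 3.9
#     ...
```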
Lincoln Stein
24407048a5
Version 2.3.1-rc4 (#2782)
Just a version bump to use a format recognized by PyPi.
2023-02-23 18:09:43 -05:00
Matthias Wild
a7c2333312
Merge branch 'main' into fix/py39-compatibility 2023-02-23 23:53:38 +01:00
Lincoln Stein
b5b541c747 bump version; use correct format for PyPi 2023-02-23 17:47:36 -05:00
Lincoln Stein
ad6ea02c9c
Update main with V2.3 fixes (#2774)
Until the nodes merge happens, we can continue to merge bugfixes from
the 2.3 branch into `main`. This will bring main into sync with
`v2.3.1+rc3`
2023-02-23 17:38:16 -05:00
mauwii
1a6ed85d99
fix typing to be compatible with python 3.9
without this, the project can be installed on 3.9 but not used.
This also fixes the container images.
2023-02-23 23:27:16 +01:00
Lincoln Stein
a094bbd839
push to pypi from branch v2.3 (#2778)
This change will cause releases on the v2.3 branch to be pushed to PyPi.
2023-02-23 17:20:24 -05:00
Lincoln Stein
73dda812ea push to pypi from branch v2.3
This change will cause releases on the v2.3 branch to be pushed
to PyPi.
2023-02-23 16:55:25 -05:00
Lincoln Stein
8eaf1c4033 Revert "(updater) style 'pip' progress to use dark background"
This reverts commit 89239d1c54.

- This was making a subprocess call to 'bash', and hence crashing
  on Windows systems!
2023-02-23 16:33:57 -05:00
Lincoln Stein
4f44b64052 fix ckpt_convert module to work with dreambooth v2 models
- Discord member @marcus.llewellyn reported that some civitai 2.1-derived checkpoints were
  not converting properly (probably dreambooth-generated):
  https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070

- @blessedcoolant tracked this down to a missing key that was used to
  derive vector length of the CLIP model used by fetching the second
  dimension of the tensor at "cond_stage_model.model.text_projection".
  His proposed solution was to hardcode a value of 1024.

- On inspection, I found that the same second dimension can be
  recovered from key 'cond_stage_model.model.ln_final.bias', and used
  that instead. I hope this is correct; tested on multiple v1, v2 and
  inpainting models and they converted correctly.

- While debugging this, I found and fixed several other issues:

  - model download script was not pre-downloading the OpenCLIP
    text_encoder or text_tokenizer. This is fixed.
  - got rid of legacy code in `ckpt_to_diffuser.py` and replaced
    with calls into `model_manager`
  - more consistent status reporting in the CLI.
2023-02-23 15:43:58 -05:00
Lincoln Stein
c559bf3e10
Add a sanity check to root directory finding algorithm (#2772)
Root directory finding algorithm is:

1) use --root argument
2) use INVOKEAI_ROOT environment variable
3) use VIRTUAL_ENV environment variable
4) use ~/invokeai

Since developers are liable to put virtual environments in their
favorite places, not necessarily in the invokeai root directory, this PR
adds a sanity check that looks for the existence of
`VIRTUAL_ENV/invokeai.init`, and moves on to (4) if not found.
2023-02-23 11:37:11 -05:00
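A hedged sketch of the search order and sanity check described above; the function and argument names are illustrative only.
```
# Illustrative only: mirrors the 1-4 priority order and the VIRTUAL_ENV sanity check.
import os
from pathlib import Path
from typing import Optional


def find_invokeai_root(root_arg: Optional[str] = None) -> Path:
    if root_arg:                                   # 1) --root argument
        return Path(root_arg)
    if os.environ.get("INVOKEAI_ROOT"):            # 2) INVOKEAI_ROOT environment variable
        return Path(os.environ["INVOKEAI_ROOT"])
    venv = os.environ.get("VIRTUAL_ENV")           # 3) VIRTUAL_ENV, only if invokeai.init exists there
    if venv and (Path(venv) / "invokeai.init").exists():
        return Path(venv)
    return Path.home() / "invokeai"                # 4) ~/invokeai
```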
Lincoln Stein
a485515bc6
Merge branch 'v2.3' into bugfix/sanity-check-rootdir 2023-02-23 11:14:52 -05:00
Lincoln Stein
2c9b29725b
Bugfix/windows install (#2770)
# This will constitute v2.3.1+rc2

## Windows installer enhancements
  
1. resize installer window to give more room for configure and download forms
2. replace '\' with '/' in directory names to allow user to drag-and-drop folders into the dialogue boxes that accept directories.
3. similar change in CLI for the !import_model and !convert_model commands
4. better error reporting when a model download fails due to network errors
5. put the launcher scripts into a loop so that menu reappears after invokeai, merge script, etc exits. User can quit with "Q".
6. do not try to download fp16 of sd-ft-mse-vae, since it doesn't exist.
7. cleaned up status reporting when installing models
8. Detect when install failed for some reason and print helpful error message rather than stack trace.
9. Detect window size and resize to minimum acceptable values to provide better display of configure and install forms.
10. Fix a bug in the CLI which prevented diffusers imported by their repo_ids from being correctly registered in the current session (though they install correctly)
11. Capitalize the "i" in Imported in the autogenerated descriptions.
2023-02-23 11:14:30 -05:00
Lincoln Stein
28612c899a add a sanity check to root directory finding algorithm
Root directory finding algorithm is:

1) use --root argument
2) use INVOKEAI_ROOT environment variable
3) use VIRTUAL_ENV environment variable
4) use ~/invokeai

Since developers are liable to put virtual environments in their
favorite places, not necessarily in the invokeai root directory, this
PR adds a sanity check that looks for the existence of
VIRTUAL_ENV/invokeai.init, and moves to (4) if not found.
2023-02-23 10:15:01 -05:00
Lincoln Stein
88acbeaa35 install creator tags but don't commit 2023-02-23 07:08:41 -05:00
Lincoln Stein
46729efe95 upgrade to compel 0.1.7 2023-02-23 07:06:40 -05:00