Compare commits


54 Commits

Author SHA1 Message Date
17ab982200 installers download branch HEAD not tag 2022-11-11 03:56:54 +00:00
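For reference, pulling a branch HEAD rather than a tag amounts to fetching GitHub's branch archive. A minimal sketch, assuming GitHub's standard archive URL form and the release branch named in the merge commits below:

```bash
# Download the current HEAD of the release branch as a zip (not a frozen tag):
curl -L -o InvokeAI.zip \
  https://github.com/invoke-ai/InvokeAI/archive/refs/heads/release-candidate-2-1-3.zip
```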
a04965b0e9 improve messaging during installation process 2022-11-11 03:48:21 +00:00
0b529f0c57 enable outcropping of random JPG/PNG images
- Works best with the runwayML inpainting model.
- Numerous code changes were required to propagate the seed to the final metadata,
  as the original code was predicated on the image having been generated within InvokeAI.
2022-11-11 02:57:46 +00:00
6f9f848345 enhance outcropping with ability to direct contents of new regions
- When outcropping an image you can now add a `--new_prompt` option to specify
  a new prompt to be used instead of the one originally used to generate the image.

- Similarly you can provide a new seed using `--seed` (or `-S`). A seed of zero
  will pick one randomly.

- This PR also fixes the crash that happened when trying to outcrop an image
  that does not contain InvokeAI metadata.
2022-11-11 02:56:51 +00:00
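A rough illustration of the two new options: `--new_prompt` and `-S` come from this commit, while the `!fix ... --outcrop` invocation is assumed from the 2.x interactive CLI and may differ in detail.

```bash
# Inside the invoke.py interactive shell: extend the top of an existing image
# by 64 pixels, with a fresh prompt and a randomly picked seed (seed 0 = random):
invoke> !fix outputs/img-samples/000001.1234.png --outcrop top 64 \
        --new_prompt "a wider view of the same scene" -S 0
```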
918c1589ef fix #1402 2022-11-11 02:55:54 +00:00
116415b3fc fix invoke.py crash if no models.yaml file present
- Script will now offer the user the ability to create a
  minimal models.yaml and then gracefully exit.
- Closes #1420
2022-11-11 02:55:54 +00:00
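A sketch of the minimal models.yaml the script might write; the entry mirrors the configs/models.yaml.example hunk further down in this diff, and the exact fields emitted by the script are an assumption.

```bash
cat > configs/models.yaml <<'EOF'
stable-diffusion-1.5:
  description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
  weights: ./models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
EOF
```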
b4b6eabaac Revert "Log strength with hires"
This reverts commit 82d4904c07.
2022-11-10 21:58:35 +00:00
4ef1f4a854 remove temporary directory from repo 2022-11-10 20:01:49 +00:00
510fc4ebaa remove -e from clipseg load in installer 2022-11-10 19:59:03 +00:00
a20914434b change clipseg repo branch to avoid clipseg not found error 2022-11-10 19:37:07 +00:00
0d134195fd update repo URL to point to rc 2022-11-10 18:39:29 +00:00
649d8c8573 integrate tildebyte installer 2022-11-10 18:13:28 +00:00
a358d370a0 add @tildebyte compiled pip installer 2022-11-10 17:48:14 +00:00
94a9033c4f ignore source installer zip files 2022-11-10 14:52:00 +00:00
18a947c503 documentation and environment file fixes
- Clarified the relationship between the @tildebyte and @cmdr2 installers;
  however, the @tildebyte installer merge is still a WIP due to conflicts over
  such things as `invoke.sh`.
- Rechristened the 1-click installer as the "source" installer. The @tildebyte installer
  will be "the" installer. (We'll see which one generates the fewest support requests
  and maintenance work.)
- Sync'd `environment-mac.yml` with `development`. The former was failing with a
  taming-transformers error as per https://discord.com/channels/@me/1037201214154231899/1040060947378749460
2022-11-10 14:46:36 +00:00
a23b031895 Fixes typos in README.md 2022-11-10 14:18:15 +00:00
23af68c7d7 downgrade win installs to basicsr==1.4.1 2022-11-10 07:02:27 -05:00
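On an affected Windows install, the downgrade is a one-line pin (a sketch of the equivalent manual step, not the installer's exact code):

```bash
pip install "basicsr==1.4.1"
```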
e258beeb51 Merge branch 'release-candidate-2-1-3' of github.com:invoke-ai/InvokeAI into release-candidate-2-1-3 2022-11-10 06:37:45 -05:00
7460c069b8 remove --prefer-binary from requirements-base.txt
It appears that some versions of pip do not recognize this option
when it appears in the requirements file. I did not explore this further,
but the manual install instructions now recommend passing --prefer-binary
on the command line instead.
2022-11-10 06:36:48 -05:00
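Passing the flag on the command line rather than in the requirements file looks roughly like this (the filename is one of the requirements files from the environments-and-requirements directory introduced in this PR):

```bash
pip install --prefer-binary -r environments-and-requirements/requirements-lin-cuda.txt
```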
e481bfac61 Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3 2022-11-10 11:21:56 +00:00
5040747c67 fix windows install instructions & bat file 2022-11-10 11:21:43 +00:00
d1ab65a431 update WEBUIHOTKEYS.md 2022-11-10 07:18:59 +01:00
af4ee7feb8 update INSTALL_DOCKER.md 2022-11-10 06:33:49 +01:00
764fb29ade fix formatting in INSTALL.md 2022-11-10 06:30:15 +01:00
1014d3ba44 fix build.sh invokeai_conda_env_file default value 2022-11-10 06:29:14 +01:00
40a48aca88 fix environment-mac.yml
moved taming-transformers-rom1504 to pip dependencies
2022-11-10 05:25:30 +01:00
92abc00f16 fix test-invoke-conda
- copy required conda environment yaml
- use environment.yml
- use cp instead of ln, since cp is also compatible with the Windows runners
2022-11-10 05:19:52 +01:00
a5719aabf8 update Dockerfile
- link environment.yml from the new environments path
- change default conda_env_file
- quote all variables to avoid splitting
- also remove paths from conda-env-files in build-container.yml
2022-11-10 04:14:35 +01:00
44a18511fa update paths in container build workflow 2022-11-09 20:51:06 +00:00
b850dbadaf finished reorganization of install docs 2022-11-09 20:16:57 +00:00
9ef8b944d5 tweaks to manual install documentation
--prefer-binary is an iffy option in the requirements file. It isn't
supported by some versions of pip, so I removed it from
requirements-base.txt and inserted it into the manual install
instructions where it seems to do what it is supposed to.
2022-11-09 18:50:58 +00:00
efc5a98488 manual installation documentation tested on Linux 2022-11-09 18:20:03 +00:00
1417c87928 change name of requirements.txt to avoid confusion 2022-11-09 17:37:06 +00:00
2dd6fc2b93 Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3 2022-11-09 17:26:24 +00:00
22213612a0 directory cleanup; working on install docs 2022-11-09 17:25:59 +00:00
71ee44a827 prevent crash when switching to an invalid model 2022-11-09 10:16:37 -05:00
b17ca0a5e7 don't suppress exceptions when doing cross-attention control 2022-11-09 10:16:30 -05:00
71bbfe4a1a Fix #1362 by improving VRAM usage patterns when doing .swap()
commit ef3f7a26e242b73c2beb0195c7fd8f654ef47f55
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:18:37 2022 +0100

    remove log spam

commit 7189d649622d4668b120b0dd278388ad672142c4
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:10:28 2022 +0100

    change the way saved slicing strategy is applied

commit 01c40f751ab72955140165c16f95ae411732265b
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:04:43 2022 +0100

    fix slicing_strategy_getter callsite

commit f8cfe25150a346958903316bc710737d99839923
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 11:56:22 2022 +0100

    cleanup, consistent dim=0 also tested

commit 5bf9b1e890d48e962afd4a668a219b68271e5dc1
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 11:34:09 2022 +0100

    refactored context, tested with non-sliced cross attention control

commit d58a46e39bf562e7459290d2444256e8c08ad0b6
Author: damian0815 <null@damianstewart.com>
Date:   Sun Nov 6 00:41:52 2022 +0100

    cleanup

commit 7e2c658b4c06fe239311b65b9bb16fa3adec7fd7
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:57:31 2022 +0100

    disable logs

commit 20ee89d93841b070738b3d8a4385c93b097d92eb
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:36:58 2022 +0100

    slice saved attention if necessary

commit 0a7684a22c880ec0f48cc22bfed4526358f71546
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:32:38 2022 +0100

    raise instead of asserting

commit 7083104c7f3a0d8fd96e94a2f391de50a3c942e4
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:31:00 2022 +0100

    store dim when saving slices

commit f7c0808ed383ec1dc70645288a798ed2aa4fa85c
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:27:16 2022 +0100

    don't retry on exception

commit 749a721e939b3fe7c1741e7998dab6bd2c85a0cb
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:24:50 2022 +0100

    stuff

commit 032ab90e9533be8726301ec91b97137e2aadef9a
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:20:17 2022 +0100

    more logging

commit 3dc34b387f033482305360e605809d95a40bf6f8
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:16:47 2022 +0100

    logs

commit 901c4c1aa4b9bcef695a6551867ec8149e6e6a93
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:12:39 2022 +0100

    actually set save_slicing_strategy to True

commit f780e0a0a7c6b6a3db320891064da82589358c8a
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:10:35 2022 +0100

    store slicing strategy

commit 93bb6d566fd18c5c69ef7dacc8f74ba2cf671cb7
Author: damian <git@damianstewart.com>
Date:   Sat Nov 5 20:43:48 2022 +0100

    still not it

commit 5e3a9541f8ae00bde524046963910323e20c40b7
Author: damian <git@damianstewart.com>
Date:   Sat Nov 5 17:20:02 2022 +0100

    wip offloading attention slices on-demand

commit 4c2966aa856b6f3b446216da3619ae931552ef08
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 15:47:40 2022 +0100

    pre-emptive offloading, idk if it works

commit 572576755e9f0a878d38e8173e485126c0efbefb
Author: root <you@example.com>
Date:   Sat Nov 5 11:25:32 2022 +0000

    push attention slices to cpu. slow but saves memory.

commit b57c83a68f2ac03976ebc89ce2ff03812d6d185f
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 12:04:22 2022 +0100

    verbose logging

commit 3a5dae116f110a96585d9eb71d713b5ed2bc3d2b
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 11:50:48 2022 +0100

    wip fixing mem strategy crash (4 test on runpod)

commit 3cf237db5fae0c7b0b4cc3c47c81830bdb2ae7de
Author: damian0815 <null@damianstewart.com>
Date:   Fri Nov 4 09:02:40 2022 +0100

    wip, only works on cuda
2022-11-09 10:16:21 -05:00
5702271991 speculative reorganization of the requirements & environment files
- This is only a test!
- The various environment*.yml and requirements*.txt files have all
  been moved into a directory named "environments-and-requirements".
- The idea is to clean up our root directory so that the github home
  page is tidy.
- The manual install instructions will start with the instructions to
  create a symbolic link from environment.yml to the appropriate file
  for OS and GPU.
- The 1-click installers have been updated to accommodate this change.
2022-11-09 14:09:36 +00:00
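The symbolic-link step described above would look roughly like this on a Linux/CUDA box (filenames from this PR; substitute the file matching your OS and GPU):

```bash
ln -sf environments-and-requirements/environment-lin-cuda.yml environment.yml
conda env update --file environment.yml
```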
10781e7dc4 refactoring requirements 2022-11-09 01:59:45 +00:00
099d1157c5 better way to check whether conda is usable 2022-11-09 00:16:18 +01:00
ab825bf7ee add back --prefer-binary to requirements 2022-11-08 22:05:33 +00:00
10cfeb5ada add quotes to set and use $environment_file 2022-11-08 22:27:19 +01:00
e97515d045 set environment file for conda update 2022-11-08 22:24:21 +01:00
0f04bc5789 use conda env update 2022-11-08 22:21:25 +01:00
3f74aabecd use command instead of hash 2022-11-08 22:20:44 +01:00
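Taken together, the installer fixes in the last few commits amount to something like the following sketch (not the installer's exact code):

```bash
# "use command instead of hash": portable check that conda is usable
if ! command -v conda >/dev/null 2>&1; then
    echo "conda not found; install Anaconda/Miniconda first" >&2
    exit 1
fi

# quoted $environment_file, applied via `conda env update`
environment_file="environments-and-requirements/environment-lin-cuda.yml"
conda env update --file "$environment_file"
```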
b1a99a51b7 remove --global git config from 1-click installers 2022-11-08 14:44:44 -05:00
8004f8a6d9 Revert "Use array slicing to calc ddim timesteps"
This reverts commit 1f0c5b4cf1.
2022-11-08 13:13:20 -05:00
ff8ff2212a add initfile support from PR #1386 2022-11-08 14:01:40 +00:00
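The initfile is the `.invokeai` file that also appears in the .gitignore hunk and the `cat ~/.invokeai` workflow step below; it holds default command-line switches read at startup. A hypothetical example (the flags are taken from the test workflows in this diff; the path is illustrative):

```bash
cat > ~/.invokeai <<'EOF'
--outdir=/home/me/invokeai/outputs
--no-nsfw_checker
EOF
```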
8e5363cd83 move 'installer/' to '1-click-installer' to make room for tildebyte installer 2022-11-08 13:26:18 +00:00
1450779146 update branch for installer to pull against 2022-11-08 12:56:36 +00:00
8cd5d95b8a move all models into subdirectories of ./models
- this required an update to the invoke-ai fork of gfpgan
- simultaneously reverted consolidation of environment and
  requirements files, as their presence in a directory
  triggered setup.py to try to install a sub-package.
2022-11-08 05:31:02 +00:00
abd6407394 leave a copy of environment-cuda.yml at top level
- named it environment.yml
- need to avoid a big change for users and breaking older support
  instructions.
2022-11-08 03:52:46 +00:00
734dacfbe9 consolidate environment files
- starting to remove unneeded entries and pins
- no longer require -e in front of github dependencies
- update setup.py with release number
- update manual installation instructions
2022-11-08 03:50:07 +00:00
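Dropping `-e` means GitHub dependencies install as regular packages instead of editable checkouts; a hedged example of the requirements-file form (the repository URL and ref are illustrative, not this PR's exact pins):

```bash
# before: -e git+https://github.com/invoke-ai/clipseg.git@<branch>#egg=clipseg
# after:
pip install "git+https://github.com/invoke-ai/clipseg.git@<branch>#egg=clipseg"
```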
419 changed files with 26252 additions and 19782 deletions


@@ -1,12 +1,3 @@
*
!backend
!configs
!environments-and-requirements
!frontend
!installer
!ldm
!main.py
!scripts
!server
!static
!setup.py
!environment*.yml
!docker-build


@@ -1,12 +0,0 @@
# All files
[*]
charset = utf-8
end_of_line = lf
indent_size = 2
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
# Python
[*.py]
indent_size = 4

.github/CODEOWNERS

@@ -2,6 +2,3 @@ ldm/invoke/pngwriter.py @CapableWeb
ldm/invoke/server_legacy.py @CapableWeb
scripts/legacy_api.py @CapableWeb
tests/legacy_tests.sh @CapableWeb
installer/ @tildebyte
.github/workflows/ @mauwii
docker_build/ @mauwii


@@ -6,7 +6,6 @@ on:
branches:
- 'main'
- 'development'
- 'update-dockerfile'
jobs:
docker:
@@ -16,11 +15,13 @@ jobs:
arch:
- x86_64
- aarch64
pip-requirements:
- requirements-lin-amd.txt
- requirements-lin-cuda.txt
include:
- arch: x86_64
conda-env-file: environment-lin-cuda.yml
- arch: aarch64
conda-env-file: environment-lin-aarch64.yml
runs-on: ubuntu-latest
name: ${{ matrix.pip-requirements }} ${{ matrix.arch }}
name: ${{ matrix.arch }}
steps:
- name: prepare docker-tag
env:
@@ -39,5 +40,9 @@ jobs:
file: docker-build/Dockerfile
platforms: Linux/${{ matrix.arch }}
push: false
tags: ${{ env.dockertag }}:${{ matrix.pip-requirements }}-${{ matrix.arch }}
build-args: pip_requirements=${{ matrix.pip-requirements }}
tags: ${{ env.dockertag }}:${{ matrix.arch }}
build-args: |
conda_env_file=${{ matrix.conda-env-file }}
conda_version=py39_4.12.0-Linux-${{ matrix.arch }}
invokeai_git=${{ github.repository }}
invokeai_branch=${{ github.ref_name }}
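For reference, the workflow's build step corresponds to roughly this local invocation (a sketch; the argument values mirror the x86_64 matrix entry above, and the tag name is illustrative):

```bash
docker build \
  --file docker-build/Dockerfile \
  --build-arg conda_env_file=environment-lin-cuda.yml \
  --build-arg conda_version=py39_4.12.0-Linux-x86_64 \
  --build-arg invokeai_git=invoke-ai/InvokeAI \
  --build-arg invokeai_branch=release-candidate-2-1-3 \
  --tag invokeai:x86_64 .
```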


@@ -22,7 +22,7 @@ jobs:
- name: install requirements
run: |
python -m \
pip install -r docs/requirements-mkdocs.txt
pip install -r requirements-mkdocs.txt
- name: confirm buildability
run: |


@@ -13,32 +13,31 @@ on:
jobs:
matrix:
strategy:
fail-fast: false
matrix:
stable-diffusion-model:
- 'stable-diffusion-1.5'
environment-yaml:
- environment-lin-amd.yml
- environment-lin-cuda.yml
- environment-mac.yml
# - 'https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt'
- 'https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt'
os:
- ubuntu-latest
- macOS-12
include:
- environment-yaml: environment-lin-amd.yml
os: ubuntu-latest
- os: ubuntu-latest
environment-file: environment-lin-cuda.yml
default-shell: bash -l {0}
- environment-yaml: environment-lin-cuda.yml
os: ubuntu-latest
- os: macOS-12
environment-file: environment-mac.yml
default-shell: bash -l {0}
- environment-yaml: environment-mac.yml
os: macos-12
default-shell: bash -l {0}
- stable-diffusion-model: stable-diffusion-1.5
stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
name: ${{ matrix.environment-yaml }} on ${{ matrix.os }}
# - stable-diffusion-model: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
# stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
# stable-diffusion-model-switch: stable-diffusion-1.4
- stable-diffusion-model: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-switch: stable-diffusion-1.5
name: ${{ matrix.os }} with ${{ matrix.stable-diffusion-model-switch }}
runs-on: ${{ matrix.os }}
env:
CONDA_ENV_NAME: invokeai
INVOKEAI_ROOT: '${{ github.workspace }}/invokeai'
defaults:
run:
shell: ${{ matrix.default-shell }}
@@ -48,19 +47,17 @@ jobs:
uses: actions/checkout@v3
- name: create models.yaml from example
run: |
mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
run: cp configs/models.yaml.example configs/models.yaml
- name: create environment.yml
run: cp "environments-and-requirements/${{ matrix.environment-yaml }}" environment.yml
run: cp environments-and-requirements/${{ matrix.environment-file }} environment.yml
- name: Use cached conda packages
id: use-cached-conda-packages
uses: actions/cache@v3
with:
path: ~/conda_pkgs_dir
key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-yaml) }}
key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-file) }}
- name: Activate Conda Env
id: activate-conda-env
@@ -86,40 +83,34 @@ jobs:
id: cache-sd-model
uses: actions/cache@v3
env:
cache-name: cache-${{ matrix.stable-diffusion-model }}
cache-name: cache-${{ matrix.stable-diffusion-model-switch }}
with:
path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
path: ${{ matrix.stable-diffusion-model-dl-path }}
key: ${{ env.cache-name }}
- name: Download ${{ matrix.stable-diffusion-model }}
- name: Download ${{ matrix.stable-diffusion-model-switch }}
id: download-stable-diffusion-model
if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
run: |
mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
[[ -d models/ldm/stable-diffusion-v1 ]] \
|| mkdir -p models/ldm/stable-diffusion-v1
curl \
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" \
-L ${{ matrix.stable-diffusion-model-url }}
-o ${{ matrix.stable-diffusion-model-dl-path }} \
-L ${{ matrix.stable-diffusion-model }}
- name: run configure_invokeai.py
- name: run preload_models.py
id: run-preload-models
run: |
python scripts/configure_invokeai.py --no-interactive --yes
- name: cat ~/.invokeai
id: cat-invokeai
run: cat ~/.invokeai
python scripts/preload_models.py \
--no-interactive
- name: Run the tests
id: run-tests
run: |
time python scripts/invoke.py \
--no-patchmatch \
--no-nsfw_checker \
--model ${{ matrix.stable-diffusion-model }} \
--from_file ${{ env.TEST_PROMPTS }} \
--root="${{ env.INVOKEAI_ROOT }}" \
--outdir="${{ env.INVOKEAI_ROOT }}/outputs"
--model ${{ matrix.stable-diffusion-model-switch }} \
--from_file ${{ env.TEST_PROMPTS }}
- name: export conda env
id: export-conda-env
@@ -131,5 +122,5 @@ jobs:
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results_${{ matrix.requirements-file }}_${{ matrix.python-version }}
path: ${{ env.INVOKEAI_ROOT }}/outputs
name: results_${{ matrix.os }}_${{ matrix.stable-diffusion-model-switch }}
path: outputs/img-samples


@@ -1,128 +0,0 @@
name: Test invoke.py pip
on:
  push:
    branches:
      - 'main'
      - 'development'
  pull_request:
    branches:
      - 'main'
      - 'development'
jobs:
  matrix:
    strategy:
      matrix:
        stable-diffusion-model:
          - stable-diffusion-1.5
        requirements-file:
          - requirements-lin-cuda.txt
          - requirements-lin-amd.txt
          - requirements-mac-mps-cpu.txt
        python-version:
          # - '3.9'
          - '3.10'
        include:
          - requirements-file: requirements-lin-cuda.txt
            os: ubuntu-latest
            default-shell: bash -l {0}
          - requirements-file: requirements-lin-amd.txt
            os: ubuntu-latest
            default-shell: bash -l {0}
          - requirements-file: requirements-mac-mps-cpu.txt
            os: macOS-12
            default-shell: bash -l {0}
          - stable-diffusion-model: stable-diffusion-1.5
            stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
            stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
            stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
    name: ${{ matrix.requirements-file }} on ${{ matrix.python-version }}
    runs-on: ${{ matrix.os }}
    defaults:
      run:
        shell: ${{ matrix.default-shell }}
    env:
      INVOKEAI_ROOT: '${{ github.workspace }}/invokeai'
    steps:
      - name: Checkout sources
        id: checkout-sources
        uses: actions/checkout@v3
      - name: create models.yaml from example
        run: |
          mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
          cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
      - name: set test prompt to main branch validation
        if: ${{ github.ref == 'refs/heads/main' }}
        run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV
      - name: set test prompt to development branch validation
        if: ${{ github.ref == 'refs/heads/development' }}
        run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
      - name: set test prompt to Pull Request validation
        if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
        run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV
      - name: create requirements.txt
        run: cp 'environments-and-requirements/${{ matrix.requirements-file }}' '${{ matrix.requirements-file }}'
      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'
          cache-dependency-path: ${{ matrix.requirements-file }}
      # - name: install dependencies
      #   run: ${{ env.pythonLocation }}/bin/pip install --upgrade pip setuptools wheel
      - name: install requirements
        run: ${{ env.pythonLocation }}/bin/pip install -r '${{ matrix.requirements-file }}'
      - name: Use Cached Stable Diffusion Model
        id: cache-sd-model
        uses: actions/cache@v3
        env:
          cache-name: cache-${{ matrix.stable-diffusion-model }}
        with:
          path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
          key: ${{ env.cache-name }}
      - name: Download ${{ matrix.stable-diffusion-model }}
        id: download-stable-diffusion-model
        if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
        run: |
          mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
          curl \
            -H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
            -o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" \
            -L ${{ matrix.stable-diffusion-model-url }}
      - name: run configure_invokeai.py
        id: run-preload-models
        run: |
          ${{ env.pythonLocation }}/bin/python scripts/configure_invokeai.py --no-interactive --yes
      - name: cat ~/.invokeai
        id: cat-invokeai
        run: cat ~/.invokeai
      - name: Run the tests
        id: run-tests
        run: |
          time ${{ env.pythonLocation }}/bin/python scripts/invoke.py \
            --no-patchmatch \
            --no-nsfw_checker \
            --model ${{ matrix.stable-diffusion-model }} \
            --from_file ${{ env.TEST_PROMPTS }} \
            --root="${{ env.INVOKEAI_ROOT }}" \
            --outdir="${{ env.INVOKEAI_ROOT }}/outputs"
      - name: Archive results
        id: archive-results
        uses: actions/upload-artifact@v3
        with:
          name: results_${{ matrix.requirements-file }}_${{ matrix.python-version }}
          path: ${{ env.INVOKEAI_ROOT }}/outputs

.gitignore

@@ -194,6 +194,10 @@ checkpoints
# Let the frontend manage its own gitignore
!frontend/*
frontend/apt-get
frontend/dist
frontend/sudo
frontend/update
# Scratch folder
.scratch/
@@ -214,7 +218,7 @@ models/clipseg
models/gfpgan
# ignore initfile
.invokeai
invokeai.init
# ignore environment.yml and requirements.txt
# these are links to the real files in environments-and-requirements
@@ -224,13 +228,6 @@ requirements.txt
# source installer files
source_installer/*zip
source_installer/invokeAI
install.bat
install.sh
update.bat
update.sh
# this may be present if the user created a venv
invokeai
# no longer stored in source directory
models


@@ -1,128 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
at https://github.com/invoke-ai/InvokeAI/issues. All complaints will
be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.


@@ -1,85 +0,0 @@
<img src="docs/assets/invoke_ai_banner.png" align="center">
Invoke-AI is a community of software developers, researchers, and user
interface experts who have come together on a voluntary basis to build
software tools which support cutting edge AI text-to-image
applications. This community is open to anyone who wishes to
contribute to the effort and has the skill and time to do so.
# Our Values
The InvokeAI team is a diverse community which includes individuals
from various parts of the world and many walks of life. Despite our
differences, we share a number of core values which we ask prospective
contributors to understand and respect. We believe:
1. That Open Source Software is a positive force in the world. We
create software that can be used, reused, and redistributed, without
restrictions, under a straightforward Open Source license (MIT). We
believe that Open Source benefits society as a whole by increasing the
availability of high quality software to all.
2. That those who create software should receive proper attribution
for their creative work. While we support the exchange and reuse of
Open Source Software, we feel strongly that the original authors of a
piece of code should receive credit for their contribution, and we
endeavor to do so whenever possible.
3. That there is moral ambiguity surrounding AI-assisted art. We are
aware of the moral and ethical issues surrounding the release of the
Stable Diffusion model and similar products. We are aware that, due to
the composition of their training sets, current AI-generated image
models are biased against certain ethnic groups, cultural concepts of
beauty, ethnic stereotypes, and gender roles.
1. We recognize the potential for harm to these groups that these biases
represent and trust that future AI models will take steps towards
reducing or eliminating the biases noted above, respect and give due
credit to the artists whose work is sourced, and call on developers
and users to favor these models over the older ones as they become
available.
4. We are deeply committed to ensuring that this technology benefits
everyone, including artists. We see AI art not as a replacement for
the artist, but rather as a tool to empower them. With that
in mind, we are constantly debating how to build systems that put
artists' needs first: tools which can be readily integrated into an
artist's existing workflows and practices, enhancing their work and
helping them to push it further. Every decision we take as a team,
which includes several artists, aims to build towards that goal.
5. That artificial intelligence can be a force for good in the world,
but must be used responsibly. Artificial intelligence technologies
have the potential to improve society, in everything from cancer care,
to customer service, to creative writing.
1. While we do not believe that software should arbitrarily limit what
users can do with it, we recognize that when used irresponsibly, AI
has the potential to do much harm. Our Discord server is actively
moderated in order to minimize the potential of harm from
user-contributed images. In addition, we ask users of our software to
refrain from using it in any way that would cause mental, emotional or
physical harm to individuals and vulnerable populations including (but
not limited to) women; minors; ethnic minorities; religious groups;
members of LGBTQIA communities; and people with disabilities or
impairments.
2. Note that some of the image generation AI models which the Invoke-AI
toolkit supports carry licensing agreements which impose restrictions
on how the model is used. We ask that our users read and agree to
these terms if they wish to make use of these models. These agreements
are distinct from the MIT license which applies to the InvokeAI
software and source code.
6. That mutual respect is key to a healthy software development
community. Members of the InvokeAI community are expected to treat
each other with respect, beneficence, and empathy. Each of us has a
different background and a unique set of skills. We strive to help
each other grow and gain new skills, and we apportion expectations in
a way that balances the members' time, skillset, and interest
area. Disputes are resolved by open and honest communication.
## Signature
This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, and **keturn**. Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.


@@ -65,11 +65,14 @@ requests. Be sure to use the provided templates. They will help aid diagnose issues
### Installation
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
This fork is supported across multiple platforms. You can find individual installation instructions
below.
- #### [Linux](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_LINUX/)
- #### [Windows](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_WINDOWS/)
- #### [Macintosh](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_MAC/)
### Hardware Requirements
@@ -172,22 +175,15 @@ problems and other issues.
# Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. To join, just raise your hand on the InvokeAI
Discord server or discussion board.
If you are unfamiliar with how
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress, but for now the most
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
A full set of contribution guidelines, along with templates, are in progress, but for now the most
important thing is to **make your pull request against the "development" branch**, and not against
"main". This will help keep public breakage to a minimum and will allow you to propose more radical
changes.
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
to become part of our community.
Welcome to InvokeAI!
### Contributors
This fork is a combined effort of various people from across the world.

File diff suppressed because it is too large.


@@ -1,117 +0,0 @@
from PIL import Image, ImageChops
from PIL.Image import Image as ImageType
from typing import Union, Literal

# https://stackoverflow.com/questions/43864101/python-pil-check-if-image-is-transparent
def check_for_any_transparency(img: Union[ImageType, str]) -> bool:
    if type(img) is str:
        img = Image.open(img)  # fixed: was Image.open(str), which passed the str type itself

    if img.info.get("transparency", None) is not None:
        return True
    if img.mode == "P":
        transparent = img.info.get("transparency", -1)
        for _, index in img.getcolors():
            if index == transparent:
                return True
    elif img.mode == "RGBA":
        extrema = img.getextrema()
        if extrema[3][0] < 255:
            return True
    return False

def get_canvas_generation_mode(
    init_img: Union[ImageType, str], init_mask: Union[ImageType, str]
) -> Literal["txt2img", "outpainting", "inpainting", "img2img"]:
    if type(init_img) is str:
        init_img = Image.open(init_img)
    if type(init_mask) is str:
        init_mask = Image.open(init_mask)

    init_img = init_img.convert("RGBA")

    # Get alpha from init_img
    init_img_alpha = init_img.split()[-1]
    init_img_alpha_mask = init_img_alpha.convert("L")
    init_img_has_transparency = check_for_any_transparency(init_img)

    if init_img_has_transparency:
        init_img_is_fully_transparent = (
            True if init_img_alpha_mask.getbbox() is None else False
        )

    # Mask images are white in areas where no change should be made, black where
    # changes should be made.

    # Fit the mask to init_img's size and convert it to greyscale
    init_mask = init_mask.resize(init_img.size).convert("L")

    # PIL.Image.getbbox() returns the bounding box of non-zero areas of the image,
    # so we first invert the mask image so that masked areas are white and other
    # areas black == zero. getbbox() then tells us if there are any masked areas.
    init_mask_bbox = ImageChops.invert(init_mask).getbbox()
    init_mask_exists = False if init_mask_bbox is None else True

    if init_img_has_transparency:
        if init_img_is_fully_transparent:
            return "txt2img"
        else:
            return "outpainting"
    else:
        if init_mask_exists:
            return "inpainting"
        else:
            return "img2img"

def main():
    # Testing
    init_img_opaque = "test_images/init-img_opaque.png"
    init_img_partial_transparency = "test_images/init-img_partial_transparency.png"
    init_img_full_transparency = "test_images/init-img_full_transparency.png"
    init_mask_no_mask = "test_images/init-mask_no_mask.png"
    init_mask_has_mask = "test_images/init-mask_has_mask.png"

    print(
        "OPAQUE IMAGE, NO MASK, expect img2img, got ",
        get_canvas_generation_mode(init_img_opaque, init_mask_no_mask),
    )
    print(
        "IMAGE WITH TRANSPARENCY, NO MASK, expect outpainting, got ",
        get_canvas_generation_mode(init_img_partial_transparency, init_mask_no_mask),
    )
    print(
        "FULLY TRANSPARENT IMAGE NO MASK, expect txt2img, got ",
        get_canvas_generation_mode(init_img_full_transparency, init_mask_no_mask),
    )
    print(
        "OPAQUE IMAGE, WITH MASK, expect inpainting, got ",
        get_canvas_generation_mode(init_img_opaque, init_mask_has_mask),
    )
    print(
        "IMAGE WITH TRANSPARENCY, WITH MASK, expect outpainting, got ",
        get_canvas_generation_mode(init_img_partial_transparency, init_mask_has_mask),
    )
    print(
        "FULLY TRANSPARENT IMAGE WITH MASK, expect txt2img, got ",
        get_canvas_generation_mode(init_img_full_transparency, init_mask_has_mask),
    )

if __name__ == "__main__":
    main()


@@ -5,8 +5,6 @@ SAMPLER_CHOICES = [
"ddim",
"k_dpm_2_a",
"k_dpm_2",
"k_dpmpp_2_a",
"k_dpmpp_2",
"k_euler_a",
"k_euler",
"k_heun",

Binary files not shown: 5 images removed (2.7 KiB, 292 KiB, 164 KiB, 9.5 KiB, 3.4 KiB).


@@ -1,80 +0,0 @@
stable-diffusion-1.5:
  description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
  repo_id: runwayml/stable-diffusion-v1-5
  config: v1-inference.yaml
  file: v1-5-pruned-emaonly.ckpt
  recommended: true
  width: 512
  height: 512
inpainting-1.5:
  description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
  repo_id: runwayml/stable-diffusion-inpainting
  config: v1-inpainting-inference.yaml
  file: sd-v1-5-inpainting.ckpt
  recommended: True
  width: 512
  height: 512
ft-mse-improved-autoencoder-840000:
  description: StabilityAI improved autoencoder fine-tuned for human faces (recommended; 335 MB)
  repo_id: stabilityai/sd-vae-ft-mse-original
  config: VAE/default
  file: vae-ft-mse-840000-ema-pruned.ckpt
  recommended: True
  width: 512
  height: 512
stable-diffusion-1.4:
  description: The original Stable Diffusion version 1.4 weight file (4.27 GB)
  repo_id: CompVis/stable-diffusion-v-1-4-original
  config: v1-inference.yaml
  file: sd-v1-4.ckpt
  recommended: False
  width: 512
  height: 512
waifu-diffusion-1.3:
  description: Stable Diffusion 1.4 fine tuned on anime-styled images (4.27)
  repo_id: hakurei/waifu-diffusion-v1-3
  config: v1-inference.yaml
  file: model-epoch09-float32.ckpt
  recommended: False
  width: 512
  height: 512
trinart-2.0:
  description: An SD model finetuned with ~40,000 assorted high resolution manga/anime-style pictures (2.13 GB)
  repo_id: naclbit/trinart_stable_diffusion_v2
  config: v1-inference.yaml
  file: trinart2_step95000.ckpt
  recommended: False
  width: 512
  height: 512
trinart_characters-1.0:
  description: An SD model finetuned with 19.2M anime/manga style images (2.13 GB)
  repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
  config: v1-inference.yaml
  file: trinart_characters_it4_v1.ckpt
  recommended: False
  width: 512
  height: 512
trinart_vae:
  description: Custom autoencoder for trinart_characters
  repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
  config: VAE/trinart
  file: autoencoder_fix_kl-f8-trinart_characters.ckpt
  recommended: False
  width: 512
  height: 512
papercut-1.0:
  description: SD 1.5 fine-tuned for papercut art (use "PaperCut" in your prompts) (2.13 GB)
  repo_id: Fictiverse/Stable_Diffusion_PaperCut_Model
  config: v1-inference.yaml
  file: PaperCut_v1.ckpt
  recommended: False
  width: 512
  height: 512
voxel_art-1.0:
  description: Stable Diffusion trained on voxel art (use "VoxelArt" in your prompts) (4.27 GB)
  repo_id: Fictiverse/Stable_Diffusion_VoxelArt_Model
  config: v1-inference.yaml
  file: VoxelArt_v1.ckpt
  recommended: False
  width: 512
  height: 512


@@ -7,8 +7,8 @@
# was trained on.
stable-diffusion-1.5:
description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
config: configs/stable-diffusion/v1-inference.yaml
weights: ./models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
config: ./configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt


@@ -1,803 +0,0 @@
sd-concepts-library/001glitch-core
sd-concepts-library/2814-roth
sd-concepts-library/3d-female-cyborgs
sd-concepts-library/4tnght
sd-concepts-library/80s-anime-ai
sd-concepts-library/80s-anime-ai-being
sd-concepts-library/852style-girl
sd-concepts-library/8bit
sd-concepts-library/8sconception
sd-concepts-library/Aflac-duck
sd-concepts-library/Akitsuki
sd-concepts-library/Atako
sd-concepts-library/Exodus-Styling
sd-concepts-library/RINGAO
sd-concepts-library/a-female-hero-from-the-legend-of-mir
sd-concepts-library/a-hat-kid
sd-concepts-library/a-tale-of-two-empires
sd-concepts-library/aadhav-face
sd-concepts-library/aavegotchi
sd-concepts-library/abby-face
sd-concepts-library/abstract-concepts
sd-concepts-library/accurate-angel
sd-concepts-library/agm-style-nao
sd-concepts-library/aj-fosik
sd-concepts-library/alberto-mielgo
sd-concepts-library/alex-portugal
sd-concepts-library/alex-thumbnail-object-2000-steps
sd-concepts-library/aleyna-tilki
sd-concepts-library/alf
sd-concepts-library/alicebeta
sd-concepts-library/alien-avatar
sd-concepts-library/alisa
sd-concepts-library/all-rings-albuns
sd-concepts-library/altvent
sd-concepts-library/altyn-helmet
sd-concepts-library/amine
sd-concepts-library/amogus
sd-concepts-library/anders-zorn
sd-concepts-library/angus-mcbride-style
sd-concepts-library/animalve3-1500seq
sd-concepts-library/anime-background-style
sd-concepts-library/anime-background-style-v2
sd-concepts-library/anime-boy
sd-concepts-library/anime-girl
sd-concepts-library/anyXtronXredshift
sd-concepts-library/anya-forger
sd-concepts-library/apex-wingman
sd-concepts-library/apulian-rooster-v0-1
sd-concepts-library/arcane-face
sd-concepts-library/arcane-style-jv
sd-concepts-library/arcimboldo-style
sd-concepts-library/armando-reveron-style
sd-concepts-library/armor-concept
sd-concepts-library/arq-render
sd-concepts-library/art-brut
sd-concepts-library/arthur1
sd-concepts-library/artist-yukiko-kanagai
sd-concepts-library/arwijn
sd-concepts-library/ashiok
sd-concepts-library/at-wolf-boy-object
sd-concepts-library/atm-ant
sd-concepts-library/atm-ant-2
sd-concepts-library/axe-tattoo
sd-concepts-library/ayush-spider-spr
sd-concepts-library/azura-from-vibrant-venture
sd-concepts-library/ba-shiroko
sd-concepts-library/babau
sd-concepts-library/babs-bunny
sd-concepts-library/babushork
sd-concepts-library/backrooms
sd-concepts-library/bad_Hub_Hugh
sd-concepts-library/bada-club
sd-concepts-library/baldi
sd-concepts-library/baluchitherian
sd-concepts-library/bamse
sd-concepts-library/bamse-og-kylling
sd-concepts-library/bee
sd-concepts-library/beholder
sd-concepts-library/beldam
sd-concepts-library/belen
sd-concepts-library/bella-goth
sd-concepts-library/belle-delphine
sd-concepts-library/bert-muppet
sd-concepts-library/better-collage3
sd-concepts-library/between2-mt-fade
sd-concepts-library/birb-style
sd-concepts-library/black-and-white-design
sd-concepts-library/black-waifu
sd-concepts-library/bloo
sd-concepts-library/blue-haired-boy
sd-concepts-library/blue-zombie
sd-concepts-library/blue-zombiee
sd-concepts-library/bluebey
sd-concepts-library/bluebey-2
sd-concepts-library/bobs-burgers
sd-concepts-library/boissonnard
sd-concepts-library/bonzi-monkey
sd-concepts-library/borderlands
sd-concepts-library/bored-ape-textual-inversion
sd-concepts-library/boris-anderson
sd-concepts-library/bozo-22
sd-concepts-library/breakcore
sd-concepts-library/brittney-williams-art
sd-concepts-library/bruma
sd-concepts-library/brunnya
sd-concepts-library/buddha-statue
sd-concepts-library/bullvbear
sd-concepts-library/button-eyes
sd-concepts-library/canadian-goose
sd-concepts-library/canary-cap
sd-concepts-library/cancer_style
sd-concepts-library/captain-haddock
sd-concepts-library/captainkirb
sd-concepts-library/car-toy-rk
sd-concepts-library/carasibana
sd-concepts-library/carlitos-el-mago
sd-concepts-library/carrascharacter
sd-concepts-library/cartoona-animals
sd-concepts-library/cat-toy
sd-concepts-library/centaur
sd-concepts-library/cgdonny1
sd-concepts-library/cham
sd-concepts-library/chandra-nalaar
sd-concepts-library/char-con
sd-concepts-library/character-pingu
sd-concepts-library/cheburashka
sd-concepts-library/chen-1
sd-concepts-library/child-zombie
sd-concepts-library/chillpill
sd-concepts-library/chonkfrog
sd-concepts-library/chop
sd-concepts-library/christo-person
sd-concepts-library/chuck-walton
sd-concepts-library/chucky
sd-concepts-library/chungus-poodl-pet
sd-concepts-library/cindlop
sd-concepts-library/collage-cutouts
sd-concepts-library/collage14
sd-concepts-library/collage3
sd-concepts-library/collage3-hubcity
sd-concepts-library/cologne
sd-concepts-library/color-page
sd-concepts-library/colossus
sd-concepts-library/command-and-conquer-remastered-cameos
sd-concepts-library/concept-art
sd-concepts-library/conner-fawcett-style
sd-concepts-library/conway-pirate
sd-concepts-library/coop-himmelblau
sd-concepts-library/coraline
sd-concepts-library/cornell-box
sd-concepts-library/cortana
sd-concepts-library/covid-19-rapid-test
sd-concepts-library/cow-uwu
sd-concepts-library/cowboy
sd-concepts-library/crazy-1
sd-concepts-library/crazy-2
sd-concepts-library/crb-portraits
sd-concepts-library/crb-surrealz
sd-concepts-library/crbart
sd-concepts-library/crested-gecko
sd-concepts-library/crinos-form-garou
sd-concepts-library/cry-baby-style
sd-concepts-library/crybaby-style-2-0
sd-concepts-library/csgo-awp-object
sd-concepts-library/csgo-awp-texture-map
sd-concepts-library/cubex
sd-concepts-library/cumbia-peruana
sd-concepts-library/cute-bear
sd-concepts-library/cute-cat
sd-concepts-library/cute-game-style
sd-concepts-library/cyberpunk-lucy
sd-concepts-library/dabotap
sd-concepts-library/dan-mumford
sd-concepts-library/dan-seagrave-art-style
sd-concepts-library/dark-penguin-pinguinanimations
sd-concepts-library/darkpenguinanimatronic
sd-concepts-library/darkplane
sd-concepts-library/david-firth-artstyle
sd-concepts-library/david-martinez-cyberpunk
sd-concepts-library/david-martinez-edgerunners
sd-concepts-library/david-moreno-architecture
sd-concepts-library/daycare-attendant-sun-fnaf
sd-concepts-library/ddattender
sd-concepts-library/degods
sd-concepts-library/degodsheavy
sd-concepts-library/depthmap
sd-concepts-library/depthmap-style
sd-concepts-library/design
sd-concepts-library/detectivedinosaur1
sd-concepts-library/diaosu-toy
sd-concepts-library/dicoo
sd-concepts-library/dicoo2
sd-concepts-library/dishonored-portrait-styles
sd-concepts-library/disquieting-muses
sd-concepts-library/ditko
sd-concepts-library/dlooak
sd-concepts-library/doc
sd-concepts-library/doener-red-line-art
sd-concepts-library/dog
sd-concepts-library/dog-django
sd-concepts-library/doge-pound
sd-concepts-library/dong-ho
sd-concepts-library/dong-ho2
sd-concepts-library/doose-s-realistic-art-style
sd-concepts-library/dq10-anrushia
sd-concepts-library/dr-livesey
sd-concepts-library/dr-strange
sd-concepts-library/dragonborn
sd-concepts-library/dreamcore
sd-concepts-library/dreamy-painting
sd-concepts-library/drive-scorpion-jacket
sd-concepts-library/dsmuses
sd-concepts-library/dtv-pkmn
sd-concepts-library/dullboy-caricature
sd-concepts-library/duranduran
sd-concepts-library/durer-style
sd-concepts-library/dyoudim-style
sd-concepts-library/early-mishima-kurone
sd-concepts-library/eastward
sd-concepts-library/eddie
sd-concepts-library/edgerunners-style
sd-concepts-library/edgerunners-style-v2
sd-concepts-library/el-salvador-style-style
sd-concepts-library/elegant-flower
sd-concepts-library/elspeth-tirel
sd-concepts-library/eru-chitanda-casual
sd-concepts-library/erwin-olaf-style
sd-concepts-library/ettblackteapot
sd-concepts-library/explosions-cat
sd-concepts-library/eye-of-agamotto
sd-concepts-library/f-22
sd-concepts-library/facadeplace
sd-concepts-library/fairy-tale-painting-style
sd-concepts-library/fairytale
sd-concepts-library/fang-yuan-001
sd-concepts-library/faraon-love-shady
sd-concepts-library/fasina
sd-concepts-library/felps
sd-concepts-library/female-kpop-singer
sd-concepts-library/fergal-cat
sd-concepts-library/filename-2
sd-concepts-library/fileteado-porteno
sd-concepts-library/final-fantasy-logo
sd-concepts-library/fireworks-over-water
sd-concepts-library/fish
sd-concepts-library/flag-ussr
sd-concepts-library/flatic
sd-concepts-library/floral
sd-concepts-library/fluid-acrylic-jellyfish-creatures-style-of-carl-ingram-art
sd-concepts-library/fnf-boyfriend
sd-concepts-library/fold-structure
sd-concepts-library/fox-purple
sd-concepts-library/fractal
sd-concepts-library/fractal-flame
sd-concepts-library/fractal-temple-style
sd-concepts-library/frank-frazetta
sd-concepts-library/franz-unterberger
sd-concepts-library/freddy-fazbear
sd-concepts-library/freefonix-style
sd-concepts-library/furrpopasthetic
sd-concepts-library/fursona
sd-concepts-library/fzk
sd-concepts-library/galaxy-explorer
sd-concepts-library/ganyu-genshin-impact
sd-concepts-library/garcon-the-cat
sd-concepts-library/garfield-pizza-plush
sd-concepts-library/garfield-pizza-plush-v2
sd-concepts-library/gba-fe-class-cards
sd-concepts-library/gba-pokemon-sprites
sd-concepts-library/geggin
sd-concepts-library/ggplot2
sd-concepts-library/ghost-style
sd-concepts-library/ghostproject-men
sd-concepts-library/gibasachan-v0
sd-concepts-library/gim
sd-concepts-library/gio
sd-concepts-library/giygas
sd-concepts-library/glass-pipe
sd-concepts-library/glass-prism-cube
sd-concepts-library/glow-forest
sd-concepts-library/goku
sd-concepts-library/gram-tops
sd-concepts-library/green-blue-shanshui
sd-concepts-library/green-tent
sd-concepts-library/grifter
sd-concepts-library/grisstyle
sd-concepts-library/grit-toy
sd-concepts-library/gt-color-paint-2
sd-concepts-library/gta5-artwork
sd-concepts-library/guttestreker
sd-concepts-library/gymnastics-leotard-v2
sd-concepts-library/half-life-2-dog
sd-concepts-library/handstand
sd-concepts-library/hanfu-anime-style
sd-concepts-library/happy-chaos
sd-concepts-library/happy-person12345
sd-concepts-library/happy-person12345-assets
sd-concepts-library/harley-quinn
sd-concepts-library/harmless-ai-1
sd-concepts-library/harmless-ai-house-style-1
sd-concepts-library/hd-emoji
sd-concepts-library/heather
sd-concepts-library/henjo-techno-show
sd-concepts-library/herge-style
sd-concepts-library/hiten-style-nao
sd-concepts-library/hitokomoru-style-nao
sd-concepts-library/hiyuki-chan
sd-concepts-library/hk-bamboo
sd-concepts-library/hk-betweenislands
sd-concepts-library/hk-bicycle
sd-concepts-library/hk-blackandwhite
sd-concepts-library/hk-breakfast
sd-concepts-library/hk-buses
sd-concepts-library/hk-clouds
sd-concepts-library/hk-goldbuddha
sd-concepts-library/hk-goldenlantern
sd-concepts-library/hk-hkisland
sd-concepts-library/hk-leaves
sd-concepts-library/hk-market
sd-concepts-library/hk-oldcamera
sd-concepts-library/hk-opencamera
sd-concepts-library/hk-peach
sd-concepts-library/hk-phonevax
sd-concepts-library/hk-streetpeople
sd-concepts-library/hk-vintage
sd-concepts-library/hoi4
sd-concepts-library/hoi4-leaders
sd-concepts-library/homestuck-sprite
sd-concepts-library/homestuck-troll
sd-concepts-library/hours-sentry-fade
sd-concepts-library/hours-style
sd-concepts-library/hrgiger-drmacabre
sd-concepts-library/huang-guang-jian
sd-concepts-library/huatli
sd-concepts-library/huayecai820-greyscale
sd-concepts-library/hub-city
sd-concepts-library/hubris-oshri
sd-concepts-library/huckleberry
sd-concepts-library/hydrasuit
sd-concepts-library/i-love-chaos
sd-concepts-library/ibere-thenorio
sd-concepts-library/ic0n
sd-concepts-library/ie-gravestone
sd-concepts-library/ikea-fabler
sd-concepts-library/illustration-style
sd-concepts-library/ilo-kunst
sd-concepts-library/ilya-shkipin
sd-concepts-library/im-poppy
sd-concepts-library/ina-art
sd-concepts-library/indian-watercolor-portraits
sd-concepts-library/indiana
sd-concepts-library/ingmar-bergman
sd-concepts-library/insidewhale
sd-concepts-library/interchanges
sd-concepts-library/inuyama-muneto-style-nao
sd-concepts-library/irasutoya
sd-concepts-library/iridescent-illustration-style
sd-concepts-library/iridescent-photo-style
sd-concepts-library/isabell-schulte-pv-pvii-3000steps
sd-concepts-library/isabell-schulte-pviii-1-image-style
sd-concepts-library/isabell-schulte-pviii-1024px-1500-steps-style
sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style
sd-concepts-library/isabell-schulte-pviii-4-tiles-1-lr-3000-steps-style
sd-concepts-library/isabell-schulte-pviii-4-tiles-3-lr-5000-steps-style
sd-concepts-library/isabell-schulte-pviii-4tiles-500steps
sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps
sd-concepts-library/isabell-schulte-pviii-style
sd-concepts-library/isometric-tile-test
sd-concepts-library/jacqueline-the-unicorn
sd-concepts-library/james-web-space-telescope
sd-concepts-library/jamie-hewlett-style
sd-concepts-library/jamiels
sd-concepts-library/jang-sung-rak-style
sd-concepts-library/jetsetdreamcastcovers
sd-concepts-library/jin-kisaragi
sd-concepts-library/jinjoon-lee-they
sd-concepts-library/jm-bergling-monogram
sd-concepts-library/joe-mad
sd-concepts-library/joe-whiteford-art-style
sd-concepts-library/joemad
sd-concepts-library/john-blanche
sd-concepts-library/johnny-silverhand
sd-concepts-library/jojo-bizzare-adventure-manga-lineart
sd-concepts-library/jos-de-kat
sd-concepts-library/junji-ito-artstyle
sd-concepts-library/kaleido
sd-concepts-library/kaneoya-sachiko
sd-concepts-library/kanovt
sd-concepts-library/kanv1
sd-concepts-library/karan-gloomy
sd-concepts-library/karl-s-lzx-1
sd-concepts-library/kasumin
sd-concepts-library/kawaii-colors
sd-concepts-library/kawaii-girl-plus-object
sd-concepts-library/kawaii-girl-plus-style
sd-concepts-library/kawaii-girl-plus-style-v1-1
sd-concepts-library/kay
sd-concepts-library/kaya-ghost-assasin
sd-concepts-library/ki
sd-concepts-library/kinda-sus
sd-concepts-library/kings-quest-agd
sd-concepts-library/kiora
sd-concepts-library/kira-sensei
sd-concepts-library/kirby
sd-concepts-library/klance
sd-concepts-library/kodakvision500t
sd-concepts-library/kogatan-shiny
sd-concepts-library/kogecha
sd-concepts-library/kojima-ayami
sd-concepts-library/koko-dog
sd-concepts-library/kuvshinov
sd-concepts-library/kysa-v-style
sd-concepts-library/laala-character
sd-concepts-library/larrette
sd-concepts-library/lavko
sd-concepts-library/lazytown-stephanie
sd-concepts-library/ldr
sd-concepts-library/ldrs
sd-concepts-library/led-toy
sd-concepts-library/lego-astronaut
sd-concepts-library/leica
sd-concepts-library/leif-jones
sd-concepts-library/lex
sd-concepts-library/liliana
sd-concepts-library/liliana-vess
sd-concepts-library/liminal-spaces-2-0
sd-concepts-library/liminalspaces
sd-concepts-library/line-art
sd-concepts-library/line-style
sd-concepts-library/linnopoke
sd-concepts-library/liquid-light
sd-concepts-library/liqwid-aquafarmer
sd-concepts-library/lizardman
sd-concepts-library/loab-character
sd-concepts-library/loab-style
sd-concepts-library/lofa
sd-concepts-library/logo-with-face-on-shield
sd-concepts-library/lolo
sd-concepts-library/looney-anime
sd-concepts-library/lost-rapper
sd-concepts-library/lphr-style
sd-concepts-library/lucario
sd-concepts-library/lucky-luke
sd-concepts-library/lugal-ki-en
sd-concepts-library/luinv2
sd-concepts-library/lula-13
sd-concepts-library/lumio
sd-concepts-library/lxj-o4
sd-concepts-library/m-geo
sd-concepts-library/m-geoo
sd-concepts-library/madhubani-art
sd-concepts-library/mafalda-character
sd-concepts-library/magic-pengel
sd-concepts-library/malika-favre-art-style
sd-concepts-library/manga-style
sd-concepts-library/marbling-art
sd-concepts-library/margo
sd-concepts-library/marty
sd-concepts-library/marty6
sd-concepts-library/mass
sd-concepts-library/masyanya
sd-concepts-library/masyunya
sd-concepts-library/mate
sd-concepts-library/matthew-stone
sd-concepts-library/mattvidpro
sd-concepts-library/maurice-quentin-de-la-tour-style
sd-concepts-library/maus
sd-concepts-library/max-foley
sd-concepts-library/mayor-richard-irvin
sd-concepts-library/mechasoulall
sd-concepts-library/medazzaland
sd-concepts-library/memnarch-mtg
sd-concepts-library/metagabe
sd-concepts-library/meyoco
sd-concepts-library/meze-audio-elite-headphones
sd-concepts-library/midjourney-style
sd-concepts-library/mikako-method
sd-concepts-library/mikako-methodi2i
sd-concepts-library/miko-3-robot
sd-concepts-library/milady
sd-concepts-library/mildemelwe-style
sd-concepts-library/million-live-akane-15k
sd-concepts-library/million-live-akane-3k
sd-concepts-library/million-live-akane-shifuku-3k
sd-concepts-library/million-live-spade-q-object-3k
sd-concepts-library/million-live-spade-q-style-3k
sd-concepts-library/minecraft-concept-art
sd-concepts-library/mishima-kurone
sd-concepts-library/mizkif
sd-concepts-library/moeb-style
sd-concepts-library/moebius
sd-concepts-library/mokoko
sd-concepts-library/mokoko-seed
sd-concepts-library/monster-girl
sd-concepts-library/monster-toy
sd-concepts-library/monte-novo
sd-concepts-library/moo-moo
sd-concepts-library/morino-hon-style
sd-concepts-library/moxxi
sd-concepts-library/msg
sd-concepts-library/mtg-card
sd-concepts-library/mtl-longsky
sd-concepts-library/mu-sadr
sd-concepts-library/munch-leaks-style
sd-concepts-library/museum-by-coop-himmelblau
sd-concepts-library/muxoyara
sd-concepts-library/my-hero-academia-style
sd-concepts-library/my-mug
sd-concepts-library/mycat
sd-concepts-library/mystical-nature
sd-concepts-library/naf
sd-concepts-library/nahiri
sd-concepts-library/namine-ritsu
sd-concepts-library/naoki-saito
sd-concepts-library/nard-style
sd-concepts-library/naruto
sd-concepts-library/natasha-johnston
sd-concepts-library/nathan-wyatt
sd-concepts-library/naval-portrait
sd-concepts-library/nazuna
sd-concepts-library/nebula
sd-concepts-library/ned-flanders
sd-concepts-library/neon-pastel
sd-concepts-library/new-priests
sd-concepts-library/nic-papercuts
sd-concepts-library/nikodim
sd-concepts-library/nissa-revane
sd-concepts-library/nixeu
sd-concepts-library/noggles
sd-concepts-library/nomad
sd-concepts-library/nouns-glasses
sd-concepts-library/obama-based-on-xi
sd-concepts-library/obama-self-2
sd-concepts-library/og-mox-style
sd-concepts-library/ohisashiburi-style
sd-concepts-library/oleg-kuvaev
sd-concepts-library/olli-olli
sd-concepts-library/on-kawara
sd-concepts-library/one-line-drawing
sd-concepts-library/onepunchman
sd-concepts-library/onzpo
sd-concepts-library/orangejacket
sd-concepts-library/ori
sd-concepts-library/ori-toor
sd-concepts-library/orientalist-art
sd-concepts-library/osaka-jyo
sd-concepts-library/osaka-jyo2
sd-concepts-library/osrsmini2
sd-concepts-library/osrstiny
sd-concepts-library/other-mother
sd-concepts-library/ouroboros
sd-concepts-library/outfit-items
sd-concepts-library/overprettified
sd-concepts-library/owl-house
sd-concepts-library/painted-by-silver-of-999
sd-concepts-library/painted-by-silver-of-999-2
sd-concepts-library/painted-student
sd-concepts-library/painting
sd-concepts-library/pantone-milk
sd-concepts-library/paolo-bonolis
sd-concepts-library/party-girl
sd-concepts-library/pascalsibertin
sd-concepts-library/pastelartstyle
sd-concepts-library/paul-noir
sd-concepts-library/pen-ink-portraits-bennorthen
sd-concepts-library/phan
sd-concepts-library/phan-s-collage
sd-concepts-library/phc
sd-concepts-library/phoenix-01
sd-concepts-library/pineda-david
sd-concepts-library/pink-beast-pastelae-style
sd-concepts-library/pintu
sd-concepts-library/pion-by-august-semionov
sd-concepts-library/piotr-jablonski
sd-concepts-library/pixel-mania
sd-concepts-library/pixel-toy
sd-concepts-library/pjablonski-style
sd-concepts-library/plant-style
sd-concepts-library/plen-ki-mun
sd-concepts-library/pokemon-conquest-sprites
sd-concepts-library/pool-test
sd-concepts-library/poolrooms
sd-concepts-library/poring-ragnarok-online
sd-concepts-library/poutine-dish
sd-concepts-library/princess-knight-art
sd-concepts-library/progress-chip
sd-concepts-library/puerquis-toy
sd-concepts-library/purplefishli
sd-concepts-library/pyramidheadcosplay
sd-concepts-library/qpt-atrium
sd-concepts-library/quiesel
sd-concepts-library/r-crumb-style
sd-concepts-library/rahkshi-bionicle
sd-concepts-library/raichu
sd-concepts-library/rail-scene
sd-concepts-library/rail-scene-style
sd-concepts-library/ralph-mcquarrie
sd-concepts-library/ransom
sd-concepts-library/rayne-weynolds
sd-concepts-library/rcrumb-portraits-style
sd-concepts-library/rd-chaos
sd-concepts-library/rd-paintings
sd-concepts-library/red-glasses
sd-concepts-library/reeducation-camp
sd-concepts-library/reksio-dog
sd-concepts-library/rektguy
sd-concepts-library/remert
sd-concepts-library/renalla
sd-concepts-library/repeat
sd-concepts-library/retro-girl
sd-concepts-library/retro-mecha-rangers
sd-concepts-library/retropixelart-pinguin
sd-concepts-library/rex-deno
sd-concepts-library/rhizomuse-machine-bionic-sculpture
sd-concepts-library/ricar
sd-concepts-library/rickyart
sd-concepts-library/rico-face
sd-concepts-library/riker-doll
sd-concepts-library/rikiart
sd-concepts-library/rikiboy-art
sd-concepts-library/rilakkuma
sd-concepts-library/rishusei-style
sd-concepts-library/rj-palmer
sd-concepts-library/rl-pkmn-test
sd-concepts-library/road-to-ruin
sd-concepts-library/robertnava
sd-concepts-library/roblox-avatar
sd-concepts-library/roy-lichtenstein
sd-concepts-library/ruan-jia
sd-concepts-library/russian
sd-concepts-library/s1m-naoto-ohshima
sd-concepts-library/saheeli-rai
sd-concepts-library/sakimi-style
sd-concepts-library/salmonid
sd-concepts-library/sam-yang
sd-concepts-library/sanguo-guanyu
sd-concepts-library/sas-style
sd-concepts-library/scarlet-witch
sd-concepts-library/schloss-mosigkau
sd-concepts-library/scrap-style
sd-concepts-library/scratch-project
sd-concepts-library/sculptural-style
sd-concepts-library/sd-concepts-library-uma-meme
sd-concepts-library/seamless-ground
sd-concepts-library/selezneva-alisa
sd-concepts-library/sem-mac2n
sd-concepts-library/senneca
sd-concepts-library/seraphimmoonshadow-art
sd-concepts-library/sewerslvt
sd-concepts-library/she-hulk-law-art
sd-concepts-library/she-mask
sd-concepts-library/sherhook-painting
sd-concepts-library/sherhook-painting-v2
sd-concepts-library/shev-linocut
sd-concepts-library/shigure-ui-style
sd-concepts-library/shiny-polyman
sd-concepts-library/shrunken-head
sd-concepts-library/shu-doll
sd-concepts-library/shvoren-style
sd-concepts-library/sims-2-portrait
sd-concepts-library/singsing
sd-concepts-library/singsing-doll
sd-concepts-library/sintez-ico
sd-concepts-library/skyfalls
sd-concepts-library/slm
sd-concepts-library/smarties
sd-concepts-library/smiling-friend-style
sd-concepts-library/smooth-pencils
sd-concepts-library/smurf-style
sd-concepts-library/smw-map
sd-concepts-library/society-finch
sd-concepts-library/sorami-style
sd-concepts-library/spider-gwen
sd-concepts-library/spritual-monsters
sd-concepts-library/stable-diffusion-conceptualizer
sd-concepts-library/star-tours-posters
sd-concepts-library/stardew-valley-pixel-art
sd-concepts-library/starhavenmachinegods
sd-concepts-library/sterling-archer
sd-concepts-library/stretch-re1-robot
sd-concepts-library/stuffed-penguin-toy
sd-concepts-library/style-of-marc-allante
sd-concepts-library/summie-style
sd-concepts-library/sunfish
sd-concepts-library/super-nintendo-cartridge
sd-concepts-library/supitcha-mask
sd-concepts-library/sushi-pixel
sd-concepts-library/swamp-choe-2
sd-concepts-library/t-skrang
sd-concepts-library/takuji-kawano
sd-concepts-library/tamiyo
sd-concepts-library/tangles
sd-concepts-library/tb303
sd-concepts-library/tcirle
sd-concepts-library/teelip-ir-landscape
sd-concepts-library/teferi
sd-concepts-library/tela-lenca
sd-concepts-library/tela-lenca2
sd-concepts-library/terraria-style
sd-concepts-library/tesla-bot
sd-concepts-library/test
sd-concepts-library/test-epson
sd-concepts-library/test2
sd-concepts-library/testing
sd-concepts-library/thalasin
sd-concepts-library/thegeneral
sd-concepts-library/thorneworks
sd-concepts-library/threestooges
sd-concepts-library/thunderdome-cover
sd-concepts-library/thunderdome-covers
sd-concepts-library/ti-junglepunk-v0
sd-concepts-library/tili-concept
sd-concepts-library/titan-robot
sd-concepts-library/tnj
sd-concepts-library/toho-pixel
sd-concepts-library/tomcat
sd-concepts-library/tonal1
sd-concepts-library/tony-diterlizzi-s-planescape-art
sd-concepts-library/towerplace
sd-concepts-library/toy
sd-concepts-library/toy-bonnie-plush
sd-concepts-library/toyota-sera
sd-concepts-library/transmutation-circles
sd-concepts-library/trash-polka-artstyle
sd-concepts-library/travis-bedel
sd-concepts-library/trigger-studio
sd-concepts-library/trust-support
sd-concepts-library/trypophobia
sd-concepts-library/ttte
sd-concepts-library/tubby
sd-concepts-library/tubby-cats
sd-concepts-library/tudisco
sd-concepts-library/turtlepics
sd-concepts-library/type
sd-concepts-library/ugly-sonic
sd-concepts-library/uliana-kudinova
sd-concepts-library/uma
sd-concepts-library/uma-clean-object
sd-concepts-library/uma-meme
sd-concepts-library/uma-meme-style
sd-concepts-library/uma-style-classic
sd-concepts-library/unfinished-building
sd-concepts-library/urivoldemort
sd-concepts-library/uzumaki
sd-concepts-library/valorantstyle
sd-concepts-library/vb-mox
sd-concepts-library/vcr-classique
sd-concepts-library/venice
sd-concepts-library/vespertine
sd-concepts-library/victor-narm
sd-concepts-library/vietstoneking
sd-concepts-library/vivien-reid
sd-concepts-library/vkuoo1
sd-concepts-library/vraska
sd-concepts-library/w3u
sd-concepts-library/walter-wick-photography
sd-concepts-library/warhammer-40k-drawing-style
sd-concepts-library/waterfallshadow
sd-concepts-library/wayne-reynolds-character
sd-concepts-library/wedding
sd-concepts-library/wedding-HandPainted
sd-concepts-library/werebloops
sd-concepts-library/wheatland
sd-concepts-library/wheatland-arknight
sd-concepts-library/wheelchair
sd-concepts-library/wildkat
sd-concepts-library/willy-hd
sd-concepts-library/wire-angels
sd-concepts-library/wish-artist-stile
sd-concepts-library/wlop-style
sd-concepts-library/wojak
sd-concepts-library/wojaks-now
sd-concepts-library/wojaks-now-now-now
sd-concepts-library/xatu
sd-concepts-library/xatu2
sd-concepts-library/xbh
sd-concepts-library/xi
sd-concepts-library/xidiversity
sd-concepts-library/xioboma
sd-concepts-library/xuna
sd-concepts-library/xyz
sd-concepts-library/yb-anime
sd-concepts-library/yerba-mate
sd-concepts-library/yesdelete
sd-concepts-library/yf21
sd-concepts-library/yilanov2
sd-concepts-library/yinit
sd-concepts-library/yoji-shinkawa-style
sd-concepts-library/yolandi-visser
sd-concepts-library/yoshi
sd-concepts-library/youpi2
sd-concepts-library/youtooz-candy
sd-concepts-library/yuji-himukai-style
sd-concepts-library/zaney
sd-concepts-library/zaneypixelz
sd-concepts-library/zdenek-art
sd-concepts-library/zero
sd-concepts-library/zero-bottle
sd-concepts-library/zero-suit-samus
sd-concepts-library/zillertal-can
sd-concepts-library/zizigooloo
sd-concepts-library/zk
sd-concepts-library/zoroark

View File

@ -30,9 +30,9 @@ model:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ['sculpture']
initializer_words: ['face', 'man', 'photo', 'africanmale']
per_image_tokens: false
num_vectors_per_token: 8
num_vectors_per_token: 1
progressive_words: False
unet_config:

View File

@ -30,9 +30,9 @@ model:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ['sculpture']
initializer_words: ['face', 'man', 'photo', 'africanmale']
per_image_tokens: false
num_vectors_per_token: 8
num_vectors_per_token: 1
progressive_words: False
unet_config:

View File

@ -22,7 +22,7 @@ model:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ['sculpture']
initializer_words: ['face', 'man', 'photo', 'africanmale']
per_image_tokens: false
num_vectors_per_token: 6
progressive_words: False

View File

@ -1,13 +1,34 @@
FROM ubuntu:22.10
FROM ubuntu AS get_miniconda
SHELL ["/bin/bash", "-c"]
# install wget
RUN apt-get update \
&& apt-get install -y \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# download and install miniconda
ARG conda_version=py39_4.12.0-Linux-x86_64
ARG conda_prefix=/opt/conda
RUN wget --progress=dot:giga -O /miniconda.sh \
https://repo.anaconda.com/miniconda/Miniconda3-${conda_version}.sh \
&& bash /miniconda.sh -b -p ${conda_prefix} \
&& rm -f /miniconda.sh
FROM ubuntu AS invokeai
# use bash
SHELL [ "/bin/bash", "-c" ]
# clean bashrc
RUN echo "" > ~/.bashrc
# Install necessary packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
build-essential \
gcc \
git \
libgl1-mesa-glx \
@ -18,17 +39,46 @@ RUN apt-get update \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set workdir and copy sources
WORKDIR /invokeai
ARG PIP_REQUIREMENTS=requirements-lin-cuda.txt
COPY . ./environments-and-requirements/${PIP_REQUIREMENTS} ./
# clone repository, create models.yaml and create symlinks
ARG invokeai_git=invoke-ai/InvokeAI
ARG invokeai_branch=main
ARG project_name=invokeai
ARG conda_env_file=environment-lin-cuda.yml
RUN git clone -b ${invokeai_branch} https://github.com/${invokeai_git}.git "/${project_name}" \
&& cp \
"/${project_name}/configs/models.yaml.example" \
"/${project_name}/configs/models.yaml" \
&& ln -sf \
"/${project_name}/environments-and-requirements/${conda_env_file}" \
"/${project_name}/environment.yml" \
&& ln -sf \
/data/models/v1-5-pruned-emaonly.ckpt \
"/${project_name}/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt" \
&& ln -sf \
/data/outputs/ \
"/${project_name}/outputs"
# install requirements and link outputs folder
RUN pip install \
--no-cache-dir \
-r ${PIP_REQUIREMENTS}
# set workdir
WORKDIR "/${project_name}"
# set Environment, Entrypoint and default CMD
ENV INVOKEAI_ROOT /data
ENTRYPOINT [ "python3", "scripts/invoke.py", "--outdir=/data/outputs" ]
CMD [ "--web", "--host=0.0.0.0" ]
# install conda env and preload models
ARG conda_prefix=/opt/conda
COPY --from=get_miniconda "${conda_prefix}" "${conda_prefix}"
RUN source "${conda_prefix}/etc/profile.d/conda.sh" \
&& conda init bash \
&& source ~/.bashrc \
&& conda env create \
--name "${project_name}" \
&& rm -Rf ~/.cache \
&& conda clean -afy \
&& echo "conda activate ${project_name}" >> ~/.bashrc
RUN source ~/.bashrc \
&& python scripts/preload_models.py \
--no-interactive
# Copy entrypoint and set env
ENV CONDA_PREFIX="${conda_prefix}"
ENV PROJECT_NAME="${project_name}"
COPY docker-build/entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]

View File

@ -1,49 +1,84 @@
#!/usr/bin/env bash
set -e
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoint!!!
# configure values by using env when executing build.sh
# e.g. env ARCH=aarch64 GITHUB_INVOKE_AI=https://github.com/yourname/yourfork.git ./build.sh
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoints!!!
# configure values by using env when executing build.sh, e.g. `env ARCH=aarch64 ./build.sh`
source ./docker-build/env.sh || echo "please run from repository root" || exit 1
source ./docker-build/env.sh \
|| { echo "please execute docker-build/build.sh from repository root"; exit 1; }
pip_requirements=${PIP_REQUIREMENTS:-requirements-lin-cuda.txt}
dockerfile=${INVOKE_DOCKERFILE:-docker-build/Dockerfile}
invokeai_conda_version=${INVOKEAI_CONDA_VERSION:-py39_4.12.0-${platform/\//-}}
invokeai_conda_prefix=${INVOKEAI_CONDA_PREFIX:-\/opt\/conda}
invokeai_conda_env_file=${INVOKEAI_CONDA_ENV_FILE:-environment-lin-cuda.yml}
invokeai_git=${INVOKEAI_GIT:-invoke-ai/InvokeAI}
invokeai_branch=${INVOKEAI_BRANCH:-main}
huggingface_token=${HUGGINGFACE_TOKEN?}
# print the settings
echo "You are using these values:"
echo -e "Dockerfile:\t\t ${dockerfile}"
echo -e "requirements:\t\t ${pip_requirements}"
echo -e "project_name:\t\t ${project_name}"
echo -e "volumename:\t\t ${volumename}"
echo -e "arch:\t\t\t ${arch}"
echo -e "platform:\t\t ${platform}"
echo -e "invokeai_conda_version:\t ${invokeai_conda_version}"
echo -e "invokeai_conda_prefix:\t ${invokeai_conda_prefix}"
echo -e "invokeai_conda_env_file: ${invokeai_conda_env_file}"
echo -e "invokeai_git:\t\t ${invokeai_git}"
echo -e "invokeai_tag:\t\t ${invokeai_tag}\n"
_runAlpine() {
docker run \
--rm \
--interactive \
--tty \
--mount source="$volumename",target=/data \
--workdir /data \
alpine "$@"
}
_copyCheckpoints() {
echo "creating subfolders for models and outputs"
_runAlpine mkdir models
_runAlpine mkdir outputs
echo "downloading v1-5-pruned-emaonly.ckpt"
_runAlpine wget \
--header="Authorization: Bearer ${huggingface_token}" \
-O models/v1-5-pruned-emaonly.ckpt \
https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
echo "done"
}
_checkVolumeContent() {
_runAlpine ls -lhA /data/models
}
_getModelMd5s() {
_runAlpine sh -c "md5sum /data/models/*.ckpt"
}
if [[ -n "$(docker volume ls -f name="${volumename}" -q)" ]]; then
echo "Volume already exists"
echo
if [[ -z "$(_checkVolumeContent)" ]]; then
echo "looks empty, copying checkpoint"
_copyCheckpoints
fi
echo "Models in ${volumename}:"
_checkVolumeContent
else
echo -n "createing docker volume "
docker volume create "${volumename}"
_copyCheckpoints
fi
# Build Container
docker build \
--platform="${platform}" \
--tag="${invokeai_tag}" \
--build-arg="PIP_REQUIREMENTS=${pip_requirements}" \
--file="${dockerfile}" \
--tag "${invokeai_tag}" \
--build-arg project_name="${project_name}" \
--build-arg conda_version="${invokeai_conda_version}" \
--build-arg conda_prefix="${invokeai_conda_prefix}" \
--build-arg conda_env_file="${invokeai_conda_env_file}" \
--build-arg invokeai_git="${invokeai_git}" \
--build-arg invokeai_branch="${invokeai_branch}" \
--file ./docker-build/Dockerfile \
.
docker run \
--rm \
--platform="$platform" \
--name="$project_name" \
--hostname="$project_name" \
--mount="source=$volumename,target=/data" \
--mount="type=bind,source=$HOME/.huggingface,target=/root/.huggingface" \
--env="HUGGINGFACE_TOKEN=${HUGGINGFACE_TOKEN}" \
--entrypoint="python3" \
"${invokeai_tag}" \
scripts/configure_invokeai.py --yes
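Since `huggingface_token=${HUGGINGFACE_TOKEN?}` makes the script abort when the variable is unset, a typical invocation supplies just the token and lets everything else fall back to the defaults printed above (the token value here is a placeholder):
```bash
# hf_xxx... is a placeholder; create a real token at huggingface.co
env HUGGINGFACE_TOKEN=hf_xxxxxxxxxxxxxxxx ./docker-build/build.sh
```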

docker-build/entrypoint.sh (new executable file, 8 lines)
View File

@ -0,0 +1,8 @@
#!/bin/bash
set -e
source "${CONDA_PREFIX}/etc/profile.d/conda.sh"
conda activate "${PROJECT_NAME}"
python scripts/invoke.py \
${@:---web --host=0.0.0.0}

View File

@ -4,7 +4,7 @@ project_name=${PROJECT_NAME:-invokeai}
volumename=${VOLUMENAME:-${project_name}_data}
arch=${ARCH:-x86_64}
platform=${PLATFORM:-Linux/${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}:${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}-${arch}}
export project_name
export volumename

View File

@ -7,9 +7,9 @@ docker run \
--interactive \
--tty \
--rm \
--platform="$platform" \
--name="$project_name" \
--hostname="$project_name" \
--mount="source=$volumename,target=/data" \
--publish=9090:9090 \
--platform "$platform" \
--name "$project_name" \
--hostname "$project_name" \
--mount source="$volumename",target=/data \
--publish 9090:9090 \
"$invokeai_tag" ${1:+$@}

Binary files not shown (ten image assets deleted; sizes 359, 528, 601, 59, 122, 128, 99, 112, 107 and 169 KiB).

View File

@ -99,7 +99,8 @@ overridden on a per-prompt basis (see
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use `-h` to get list of available samplers. |
| `--seamless` | | `False` | Create interesting effects by tiling elements of the image. |
| `--embedding_path <path>` | | `None` | Path to pre-trained embedding manager checkpoints, for custom models |
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file. |
| `--gfpgan_dir` | | `src/gfpgan` | Path to where GFPGAN is installed. |
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file, relative to `--gfpgan_dir`. |
| `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
| `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |
@ -269,12 +270,6 @@ value, we are insisting on a more stringent classification.
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6
```
### Custom Styles and Subjects
You can load and use hundreds of community-contributed Textual
Inversion models just by typing the appropriate trigger phrase. Please
see [Concepts Library](CONCEPTS.md) for more details.
# Other Commands
The CLI offers a number of commands that begin with "!".

View File

@ -1,132 +0,0 @@
---
title: The Hugging Face Concepts Library and Importing Textual Inversion files
---
# :material-file-document: Concepts Library
## Using Textual Inversion Files
Textual inversion (TI) files are small models that customize the output of
Stable Diffusion image generation. They can augment SD with
specialized subjects and artistic styles. They are also known as
"embeds" in the machine learning world.
Each TI file introduces one or more vocabulary terms to the SD
model. These are known in InvokeAI as "triggers." Triggers are often,
but not always, denoted using angle brackets as in
"&lt;trigger-phrase&gt;". The two most common type of TI files that you'll
encounter are `.pt` and `.bin` files, which are produced by different
TI training packages. InvokeAI supports both formats, but its [built-in
TI training system](TEXTUAL_INVERSION.md) produces `.pt`.
The [Hugging Face company](https://huggingface.co/sd-concepts-library)
has amassed a large library of &gt;800 community-contributed TI files
covering a broad range of subjects and styles. InvokeAI has built-in
support for this library which downloads and merges TI files
automatically upon request. You can also install your own or others'
TI files by placing them in a designated directory.
### An Example
Here are a few examples to illustrate how it works. All these images
were generated using the command-line client and the Stable Diffusion
1.5 model:
Japanese gardener
<br>
<img src="../assets/concepts/image1.png">
Japanese gardener &lt;ghibli-face&gt;
<br>
<img src="../assets/concepts/image2.png">
Japanese gardener &lt;hoi4-leaders&gt;
<br>
<img src="../assets/concepts/image3.png">
Japanese gardener &lt;cartoona-animals&gt;
<br>
<img src="../assets/concepts/image4.png">
You can also combine styles and concepts:
A portrait of &lt;alf&gt; in &lt;cartoona-animal&gt; style
<br>
<img src="../assets/concepts/image5.png">
## Using a Hugging Face Concept
Hugging Face TI concepts are downloaded and installed automatically as
you require them. This requires your machine to be connected to the
Internet. To find out what each concept is for, you can browse the
[Hugging Face concepts
library](https://huggingface.co/sd-concepts-library) and look at
examples of what each concept produces.
When you have an idea of a concept you wish to try, go to the
command-line client (CLI) and type a "&lt;" character and the beginning
of the Hugging Face concept name you wish to load. Press the Tab key,
and the CLI will show you all matching concepts. You can also type "&lt;"
and Tab to get a listing of all ~800 concepts, but be prepared to
scroll up to see them all! If there is more than one match you can
continue to type and Tab until the concept is completed.
For example if you type "&lt;x" and Tab, you'll be prompted with the completions:
```
<xatu2> <xatu> <xbh> <xi> <xidiversity> <xioboma> <xuna> <xyz>
```
Now type "id" and press Tab. It will be autocompleted to
"&lt;xidiversity&gt;" because this is a unique match.
Finish your prompt and generate as usual. You may include multiple
concept terms in the prompt.
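For instance, a prompt combining two triggers could read as follows (the concept names are chosen from the list above purely as examples):
```
invoke> a medieval castle at sunset <midjourney-style> <princess-knight> -s30 -W512 -H512
```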
If you have never used this concept before, you will see a message
that the TI model is being downloaded and installed. After this, the
concept will be saved locally (in the `models/sd-concepts-library`
directory) for future use.
Several steps happen during downloading and
installation, including a scan of the file for malicious code. Should
any errors occur, you will be warned and the concept will fail to
load. Generation will then continue treating the trigger term as a
normal string of characters (e.g. as literal "&lt;ghibli-face&gt;").
Currently auto-installation of concepts is a feature only available on
the command-line client. Support for the WebUI is a work in progress.
## Installing your Own TI Files
You may install any number of `.pt` and `.bin` files simply by copying
them into the `embeddings` directory of the InvokeAI runtime directory
(usually `invokeai` in your home directory). You may create
subdirectories in order to organize the files in any way you wish. Be
careful not to overwrite one file with another. For example, TI files
generated by the Hugging Face toolkit share the name
`learned_embedding.bin`. You can use subdirectories to keep them
distinct.
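A layout along these lines, with illustrative names, keeps two identically-named Hugging Face files apart:
```
embeddings/
├── ghibli-face/learned_embedding.bin
├── hoi4-leaders/learned_embedding.bin
└── my-handmade-style.pt
```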
At startup time, InvokeAI will scan the `embeddings` directory and
load any TI files it finds there. At startup you will see a message
similar to this one:
```
>> Current embedding manager terms: *, <HOI4-Leader>, <princess-knight>
```
Note the "*" trigger term. This is a placeholder term that many early
TI tutorials taught people to use rather than a more descriptive
term. Unfortunately, if you have multiple TI files that all use this
term, only the first one loaded will be triggered by use of the term.
To avoid this problem, you can use the `merge_embeddings.py` script to
merge two or more TI files together. If it encounters a collision of
terms, the script will prompt you to select new terms that do not
collide. See [Textual Inversion](TEXTUAL_INVERSION.md) for details.
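A merge invocation might look roughly like the sketch below; the flags shown are assumptions for illustration, so check `python scripts/merge_embeddings.py --help` for the actual interface:
```bash
# Hypothetical flags -- verify against --help before relying on them
python scripts/merge_embeddings.py \
  --manager_ckpts embeddings/style-a.pt embeddings/style-b.pt \
  --output_path embeddings/merged.pt
```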
## Further Reading
Please see [the repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.

View File

@ -85,7 +85,7 @@ increasing size, every tile after the first in a row or column
effectively only covers an extra `1 - overlap_ratio` on each axis. If
the input/`--init_img` is same size as a tile, the ideal (for time)
scaling factors with the default overlap (0.25) are 1.75, 2.5, 3.25,
4.0, etc.
4.0 etc.
`-embiggen_tiles <spaced list of tiles>`
@ -100,15 +100,6 @@ Tiles are numbered starting with one, and left-to-right,
top-to-bottom. So, if you are generating a 3x3 tiled image, the
middle row would be `4 5 6`.
`-embiggen_strength <strength>`
Another advanced option if you want to experiment with the strength parameter
that embiggen uses when it calls Img2Img. Values range from 0.0 to 1.0
and lower values preserve more of the character of the initial image.
Values that are too high will result in a completely different end image,
while values that are too low will result in an image not dissimilar to one
you would get with ESRGAN upscaling alone. The default value is 0.4.
### Examples
!!! example ""

View File

@ -19,13 +19,13 @@ tree on a hill with a river, nature photograph, national geographic -I./test-pic
This will take the original image shown here:
<figure markdown>
![original-image](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png){ width=320 }
![](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png)
</figure>
and generate a new image based on it as shown here:
<figure markdown>
![generated-image](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png){ width=320 }
![](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png)
</figure>
The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
@ -45,16 +45,15 @@ Note that the prompt makes a big difference. For example, this slight variation
on the prompt produces a very different image:
<figure markdown>
![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png){ width=320 }
![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png)
<caption markdown>photograph of a tree on a hill with a river</caption>
</figure>
!!! tip
When designing prompts, think about how the images scraped from the internet were
captioned. Very few photographs will be labeled "photograph" or "photorealistic."
They will, however, be captioned with the publication, photographer, camera model,
or film settings.
When designing prompts, think about how the images scraped from the internet were captioned. Very few photographs will
be labeled "photograph" or "photorealistic." They will, however, be captioned with the publication, photographer, camera
model, or film settings.
If the initial image contains transparent regions, then Stable Diffusion will
only draw within the transparent regions, a process called
@ -62,17 +61,17 @@ only draw within the transparent regions, a process called
However, for this to work correctly, the color information underneath the
transparent regions needs to be preserved, not erased.
!!! warning "**IMPORTANT ISSUE** "
!!! warning
`img2img` does not work properly on initial images smaller
than 512x512. Please scale your image to at least 512x512 before using it.
Larger images are not a problem, but may run out of VRAM on your GPU card. To
fix this, use the --fit option, which downscales the initial image to fit within
the box specified by width x height:
**IMPORTANT ISSUE** `img2img` does not work properly on initial images smaller
than 512x512. Please scale your image to at least 512x512 before using it.
Larger images are not a problem, but may run out of VRAM on your GPU card. To
fix this, use the --fit option, which downscales the initial image to fit within
the box specified by width x height:
```
tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
```
```
tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
```
## How does it actually work, though?
@ -88,7 +87,7 @@ from a prompt. If the step count is 10, then the "latent space" (Stable
Diffusion's internal representation of the image) for the prompt "fire" with
seed `1592514025` develops something like this:
```bash
```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025
```
@ -134,9 +133,9 @@ Notice how much more fuzzy the starting image is for strength `0.7` compared to
| | strength = 0.7 | strength = 0.4 |
| --------------------------- | ------------------------------------------------------------- | ------------------------------------------------------------- |
| initial image that SD sees | ![step-0](../assets/img2img/000032.step-0.png) | ![step-0](../assets/img2img/000030.step-0.png) |
| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
| steps argument to `invoke>` | `-S10` | `-S10` |
| steps actually taken | `7` | `4` |
| steps actually taken | 7 | 4 |
| latent space at each step | ![gravity32](../assets/img2img/000032.steps.gravity.png) | ![gravity30](../assets/img2img/000030.steps.gravity.png) |
| output | ![000032.1592514025](../assets/img2img/000032.1592514025.png) | ![000030.1592514025](../assets/img2img/000030.1592514025.png) |
@ -151,7 +150,7 @@ If you want to try this out yourself, all of these are using a seed of
`1592514025` with a width/height of `384`, step count `10`, the default sampler
(`k_lms`), and the single-word prompt `"fire"`:
```bash
```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
@ -171,7 +170,7 @@ give each generation 20 steps.
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD
does `20` steps from my image):
```bash
```commandline
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```

View File

@ -1,89 +0,0 @@
---
title: The NSFW Checker
---
# :material-image-off: NSFW Checker
## The NSFW ("Safety") Checker
The Stable Diffusion image generation models will produce sexual
imagery if deliberately prompted, and will occasionally produce such
images when this is not intended. Such images are colloquially known
as "Not Safe for Work" (NSFW). This behavior is due to the nature of
the training set that Stable Diffusion was trained on, which culled
millions of "aesthetic" images from the Internet.
You may not wish to be exposed to these images, and in some
jurisdictions it may be illegal to publicly distribute such imagery,
including mounting a publicly-available server that provides
unfiltered images to the public. Furthermore, the [Stable Diffusion
weights
License](https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE-ModelWeights.txt)
forbids the model from being used to "exploit any of the
vulnerabilities of a specific group of persons."
For these reasons Stable Diffusion offers a "safety checker," a
machine learning model trained to recognize potentially disturbing
imagery. When a potentially NSFW image is detected, the checker will
blur the image and paste a warning icon on top. The checker can be
turned on and off on the command line using `--nsfw_checker` and
`--no-nsfw_checker`.
At installation time, InvokeAI will ask whether the checker should be
activated by default (neither argument given on the command line). The
response is stored in the InvokeAI initialization file (usually
`.invokeai` in your home directory). You can change the default at any
time by opening this file in a text editor and commenting or
uncommenting the line `--nsfw_checker`.
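As a sketch of what this looks like: the initialization file holds command-line switches, one per line, and a leading `#` (assumed here) comments a line out:
```
# NSFW checker enabled by default:
--nsfw_checker
```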
## Caveats
There are a number of caveats that you need to be aware of.
### Accuracy
The checker is [not perfect](https://arxiv.org/abs/2210.04610). It will
occasionally flag innocuous images (false positives), and will
frequently miss violent and gory imagery (false negatives). It rarely
fails to flag sexual imagery, but this has been known to happen. For
these reasons, the InvokeAI team prefers to refer to the software as an
"NSFW Checker" rather than "safety checker."
### Memory Usage and Performance
The NSFW checker consumes an additional 1.2G of GPU VRAM on top of the
3.4G of VRAM used by Stable Diffusion v1.5 (this is with
half-precision arithmetic). This means that the checker will not run
successfully on GPU cards with less than 6GB VRAM, and will reduce the
size of the images that you can produce.
The checker also introduces a slight performance penalty. Images will
take ~1 second longer to generate when the checker is
activated. Generally this is not noticeable.
### Intermediate Images in the Web UI
The checker only operates on the final image produced by the Stable
Diffusion algorithm. If you are using the Web UI and have enabled the
display of intermediate images, you will briefly be exposed to a
low-resolution (mosaicized) version of the final image before it is
flagged by the checker and replaced by a fully blurred version. You
are encouraged to turn **off** intermediate image rendering when you
are using the checker. Future versions of InvokeAI will apply
additional blurring to intermediate images when the checker is active.
### Watermarking
InvokeAI does not apply any sort of watermark to images it
generates. However, it does write metadata into the PNG data area,
including the prompt used to generate the image and relevant parameter
settings. These fields can be examined using the `sd-metadata.py`
script that comes with the InvokeAI package.
Note that several other Stable Diffusion distributions offer
wavelet-based "invisible" watermarking. We have experimented with the
library used to generate these watermarks and have reached the
conclusion that while the watermarking library may be adding
watermarks to PNG images, the currently available version is unable to
retrieve them successfully. If and when a functioning version of the
library becomes available, we will offer this feature as well.

View File

@ -133,6 +133,29 @@ outputs = g.txt2img("a unicorn in manhattan")
Outputs is a list of lists in the format [[filename1,seed1],[filename2,seed2],...].
Please see the documentation in ldm/generate.py for more information.
Please see ldm/generate.py for more information. A set of example scripts is coming RSN.
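As a quick sketch of consuming that return value (assuming the `Generate` class in `ldm/generate.py` that provides the `g.txt2img` call shown above):
```python
from ldm.generate import Generate

g = Generate()
outputs = g.txt2img("a unicorn in manhattan")
# each entry pairs a saved image path with the seed that produced it
for filename, seed in outputs:
    print(f"{filename} (seed {seed})")
```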
---
## **Preload Models**
In situations where you have limited internet connectivity or are blocked behind a firewall, you can
use the preload script to preload the required files for Stable Diffusion to run.
The preload script `scripts/preload_models.py` needs to be run at least once while connected to the
internet. In the following runs, it will load up the cached versions of the required files from the
`.cache` directory of the system.
```bash
(invokeai) ~/stable-diffusion$ python3 ./scripts/preload_models.py
preloading bert tokenizer...
Downloading: 100%|██████████████████████████████████| 28.0/28.0 [00:00<00:00, 49.3kB/s]
Downloading: 100%|██████████████████████████████████| 226k/226k [00:00<00:00, 2.79MB/s]
Downloading: 100%|██████████████████████████████████| 455k/455k [00:00<00:00, 4.36MB/s]
Downloading: 100%|██████████████████████████████████| 570/570 [00:00<00:00, 477kB/s]
...success
preloading kornia requirements...
Downloading: "https://github.com/DagnyT/hardnet/raw/master/pretrained/train_liberty_with_aug/checkpoint_liberty_with_aug.pth" to /u/lstein/.cache/torch/hub/checkpoints/checkpoint_liberty_with_aug.pth
100%|███████████████████████████████████████████████| 5.10M/5.10M [00:00<00:00, 101MB/s]
...success
```

View File

@ -105,7 +105,7 @@ invoke> !fix images/vacation.png --outcrop top 128 --new_prompt "family vacation
```
You may also provide a different seed for outcropping to use by passing
`-S<seed>`. A negative seed will generate a new random seed.
`-S<seed>`. A seed of "0" will generate a new random seed.
A number of caveats:

View File

@ -6,39 +6,49 @@ title: Postprocessing
## Intro
This extension provides the ability to restore faces and upscale images.
This extension provides the ability to restore faces and upscale
images.
Face restoration and upscaling can be applied at the time you generate the
images, or at any later time against a previously-generated PNG file, using the
[!fix](#fixing-previously-generated-images) command.
[Outpainting and outcropping](OUTPAINTING.md) can only be applied after the
fact.
Face restoration and upscaling can be applied at the time you generate
the images, or at any later time against a previously-generated PNG
file, using the [!fix](#fixing-previously-generated-images)
command. [Outpainting and outcropping](OUTPAINTING.md) can only be
applied after the fact.
## Face Fixing
The default face restoration module is GFPGAN. The default upscale is
Real-ESRGAN. For an alternative face restoration module, see
[CodeFormer Support](#codeformer-support) below.
Real-ESRGAN. For an alternative face restoration module, see [CodeFormer
Support](#codeformer-support) below.
As of version 1.14, environment.yaml will install the Real-ESRGAN package into
the standard install location for python packages, and will put GFPGAN into a
subdirectory of "src" in the InvokeAI directory. Upscaling with Real-ESRGAN
should "just work" without further intervention. Simply pass the `--upscale`
(`-U`) option on the `invoke>` command line, or indicate the desired scale on
the popup in the Web GUI.
As of version 1.14, environment.yaml will install the Real-ESRGAN
package into the standard install location for python packages, and
will put GFPGAN into a subdirectory of "src" in the InvokeAI
directory. Upscaling with Real-ESRGAN should "just work" without
further intervention. Simply pass the `--upscale` (`-U`) option on the
`invoke>` command line, or indicate the desired scale on the popup in
the Web GUI.
**GFPGAN** requires a series of downloadable model files to work. These are
loaded when you run `scripts/preload_models.py`. If GFPGAN is failing with an
error, please run the following from the InvokeAI directory:
**GFPGAN** requires a series of downloadable model files to
work. These are loaded when you run `scripts/preload_models.py`. If
GFPGAN is failing with an error, please run the following from the
InvokeAI directory:
```bash
python scripts/preload_models.py
```
If you do not run this script in advance, the GFPGAN module will attempt to
download the model files the first time you try to perform facial
If you do not run this script in advance, the GFPGAN module will attempt
to download the model files the first time you try to perform facial
reconstruction.
Alternatively, if you have GFPGAN installed elsewhere, or if you are
using an earlier version of this package which asked you to install
GFPGAN in a sibling directory, you may use the `--gfpgan_dir` argument
with `invoke.py` to set a custom path to your GFPGAN directory. _There
are other GFPGAN related boot arguments if you wish to customize
further._
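For example (the path is illustrative):
```bash
# point invoke.py at an existing GFPGAN checkout instead of the default src/gfpgan
python scripts/invoke.py --gfpgan_dir ../GFPGAN
```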
## Usage
You will now have access to two new prompt arguments.
@ -109,15 +119,15 @@ actions.
This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).
In order to set up CodeFormer to work, you need to download the models like with
GFPGAN. You can do this either by running `preload_models.py` or by manually
downloading the
[model file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
In order to set up CodeFormer to work, you need to download the models
like with GFPGAN. You can do this either by running
`preload_models.py` or by manually downloading the [model
file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to `ldm/invoke/restoration/codeformer/weights` folder.
You can use the `-ft` prompt argument to swap between CodeFormer and the default
GFPGAN. The above-mentioned `-G` prompt argument will allow you to control the
strength of the restoration effect.
You can use the `-ft` prompt argument to swap between CodeFormer and the
default GFPGAN. The above-mentioned `-G` prompt argument will allow
you to control the strength of the restoration effect.
### Usage
@ -147,9 +157,9 @@ situations when there is very little facial data to work with.
## Fixing Previously-Generated Images
It is easy to apply face restoration and/or upscaling to any
previously-generated file. Just use the syntax
`!fix path/to/file.png <options>`. For example, to apply GFPGAN at strength 0.8
and upscale 2X for a file named `./outputs/img-samples/000044.2945021133.png`,
previously-generated file. Just use the syntax `!fix path/to/file.png
<options>`. For example, to apply GFPGAN at strength 0.8 and upscale
2X for a file named `./outputs/img-samples/000044.2945021133.png`,
just run:
```bash

View File

@ -1,117 +0,0 @@
The Unified Canvas is a tool designed to streamline and simplify the process of composing an image using Stable Diffusion. It offers artists all of the available Stable Diffusion generation modes (Text To Image, Image To Image, Inpainting, and Outpainting) as a single unified workflow. The flexibility of the tool allows you to tweak and edit image generations, extend images beyond their initial size, and to create new content in a freeform way both inside and outside of existing images.
This document explains the basics of using the Unified Canvas, introducing you to its features and tools one by one. It also describes some of the more advanced tools available to power users of the Canvas.
# Basics
The Unified Canvas consists of two layers: the **Base Layer** and the **Mask Layer**. You can swap from one layer to the other by selecting the layer you want in the drop-down menu on the top left corner of the Unified Canvas, or by pressing the (Q) hotkey.
### Base Layer
The **Base Layer** is the image content currently managed by the Canvas, and can be exported at any time to the gallery by using the **Save to Gallery** option. When the Base Layer is selected, the Brush (B) and Eraser (E) tools will directly manipulate the base layer. Any images uploaded to the Canvas, or sent to the Unified Canvas from the gallery, will clear out all existing content and set the Base layer to the new image.
### Staging Area
When you generate images, they will display in the Canvas's **Staging Area**, alongside the Staging Area toolbar buttons. While the Staging Area is active, you cannot interact with the Canvas itself.
<figure markdown>
![staging area](../assets/canvas/staging_area.png)
</figure>
Accepting generations will commit the new generation to the **Base Layer**. You can review all generated images using the Prev/Next arrows, save any individual generations to your gallery (without committing to the Base layer) or discard generations. While you can Undo a discard in an individual Canvas session, any generations that are not saved will be lost when the Canvas resets.
### Mask Layer
The **Mask Layer** consists of any masked sections that have been created to inform Inpainting generations. You can paint a new mask, or edit an existing mask, using the Brush tool and the Eraser with the Mask layer set as your Active layer. Any masked areas will only affect generation inside of the current bounding box.
### Bounding Box
When generating a new image, Invoke will process and apply new images within the area denoted by the **Bounding Box**. The Width & Height settings of the Bounding Box, as well as its location within the Unified Canvas and pixels or empty space that it encloses, determine how new invocations are generated - see [Inpainting & Outpainting](#inpainting-and-outpainting) below. The Bounding Box can be moved and resized using the Move (V) tool. It can also be resized using the Bounding Box options in the Options Panel. By using these controls you can generate larger or smaller images, control which sections of the image are being processed, as well as control Bounding Box tools like the Bounding Box fill/erase.
### <a name="inpainting-and-outpainting"></a> Inpainting & Outpainting
"Inpainting" means asking the AI to refine part of an image while leaving the rest alone. For example, updating a portrait of your grandmother to have her wear a biker's jacket.
<figure markdown>
![granny with a mask applied](../assets/canvas/mask_granny.png)
</figure>
<figure markdown>
![just like magic, granny with a biker's jacket](../assets/canvas/biker_jacket_granny.png)
</figure>
"Outpainting" means asking the AI to expand the original image beyond its original borders, making a bigger image that's still based on the original. For example, extending the above image of your Grandmother in a biker's jacket to include her wearing jeans (and while we're at it, a motorcycle!)
<figure markdown>
![more magic - granny with a tattooed arm, denim pants, and an obscured motorcycle](../assets/canvas/biker_jacket_granny.png)
</figure>
When you are using the Unified Canvas, Invoke decides automatically whether to do Inpainting, Outpainting, ImageToImage, or TextToImage by looking inside the area enclosed by the Bounding Box. It chooses the appropriate type of generation based on whether the Bounding Box contains empty (transparent) areas on the Base layer, or whether it contains colored areas from previous generations (or from painted brushstrokes) on the Base layer, and/or whether the Mask layer contains any brushstrokes. See [Generation Methods](#generation-methods) below for more information.
# Getting Started
To get started with the Unified Canvas, you will want to generate a new base layer using Txt2Img or importing an initial image. We'll refer to either of these methods as the "initial image" in the below guide.
From there, you can consider the following techniques to augment your image:
* **New Images**: Move the bounding box to an empty area of the Canvas, type in your prompt, and Invoke, to generate a new image using the Text to Image function.
* **Image Correction**: Use the color picker and brush tool to paint corrections on the image, switch to the Mask layer, and brush a mask over your painted area to use **Inpainting**. You can also use the **ImageToImage** generation method to invoke new interpretations of the image.
* **Image Expansion**: Move the bounding box to include a portion of your initial image, and a portion of transparent/empty pixels, then Invoke using a prompt that describes what you'd like to see in that area. This will Outpaint the image. You'll typically find more coherent results if you keep about 50-60% of the original image in the bounding box. Make sure that the Image To Image Strength slider is set to a high value - you may need to set it higher than you are used to.
* **New Content on Existing Images**: If you want to add new details or objects into your image, use the brush tool to paint a sketch of what you'd like to see on the image, switch to the Mask layer, and brush a mask over your painted area to use **Inpainting**. If the masked area is small, consider using a smaller bounding box to take advantage of Invoke's automatic Scaling features, which can help to produce better details.
* **And more**: There are a number of creative ways to use the Canvas, and the above are just starting points. We're excited to see what you come up with!
# <a name="generation-methods"></a> Generation Methods
The Canvas can use all generation methods available (Txt2Img, Img2Img, Inpainting, and Outpainting), and these will be automatically selected and used based on the current selection area within the Bounding Box.
## Text to Image
If the Bounding Box is placed over an area of Canvas with an **empty Base Layer**, invoking a new image will use **TextToImage**. This generates an entirely new image based on your prompt.
## Image to Image
If the Bounding Box is placed over an area of Canvas with an **existing Base Layer area with no transparent pixels or masks**, invoking a new image will use **ImageToImage**. This uses the image within the bounding box and your prompt to interpret a new image. The image will be closer to your original image at lower Image to Image strengths.
## Inpainting
If the Bounding Box is placed over an area of Canvas with an **existing Base Layer and any pixels selected using the Mask layer**, invoking a new image will use **Inpainting**. Inpainting uses the existing colors/forms in the masked area in order to generate a new image for the masked area only. The unmasked portion of the image will remain the same. Image to Image strength applies to the inpainted area.
If you desire something completely different from the original image in your new generation (i.e., if you want Invoke to ignore existing colors/forms), consider toggling the Inpaint Replace setting on, and use high values for both Inpaint Replace and Image To Image Strength.
> Note: By default, the **Scale Before Processing** option &mdash; which inpaints more coherent details by generating at a larger resolution and then scaling &mdash; is only activated when the Bounding Box is relatively small. To get the best inpainting results you should therefore resize your Bounding Box to the smallest area that contains your mask and enough surrounding detail to help Stable Diffusion understand the context of what you want it to draw. You should also update your prompt so that it describes *just* the area within the Bounding Box.
## Outpainting
If the Bounding Box is placed over an area of Canvas partially filled by an existing Base Layer area and partially by transparent pixels or masks, invoking a new image will use **Outpainting**, as well as **Inpainting** any masked areas.
____
# Advanced Features
Features with non-obvious behavior are detailed below, in order to provide clarity on the intent and common use cases we expect for utilizing them.
## Toolbar
### Mask Options
* **Enable Mask** - This flag can be used to Enable or Disable the currently painted mask. If you have painted a mask but don't want it to affect the next invocation, and you *also* don't want to delete it, then you can set this option to Disable. When you want the mask back, set this back to Enable.
* **Preserve Masked Area** - When enabled, Preserve Masked Area inverts the effect of the Mask on the Inpainting process. Pixels in masked areas will be kept unchanged, and unmasked areas will be regenerated.
### Creative Tools
* **Brush - Base/Mask Modes** - The Brush tool switches automatically between different modes of operation for the Base and Mask layers respectively.
* On the Base layer, the brush will directly paint on the Canvas using the color selected on the Brush Options menu.
* On the Mask layer, the brush will create a new mask. If you're finding the mask difficult to see over the existing content of the Unified Canvas, you can change the color it is drawn with using the color selector on the Mask Options dropdown.
* **Erase Bounding Box** - On the Base layer, erases all pixels within the Bounding Box.
* **Fill Bounding Box** - On the Base layer, fills all pixels within the Bounding Box with the currently selected color.
### Canvas Tools
* **Move Tool** - Allows for manipulation of the Canvas view (by dragging on the Canvas, outside the bounding box), the Bounding Box (by dragging the edges of the box), or the Width/Height of the Bounding Box (by dragging one of the 9 directional handles).
* **Reset View** - Click to re-orient the view to the center of the Bounding Box.
* **Merge Visible** - If your browser is having performance problems drawing the image in the Unified Canvas, click this to consolidate all of the information currently being rendered by your browser into a merged copy of the image. This lowers the resource requirements and should improve performance.
## Seam Correction
When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated by Stable Diffusion into your existing image. To do this, the area around the `seam` at the boundary between your image and the new generation is automatically blended to produce a seamless output. In a fully automatic process, a mask is generated to cover the seam, and then the area of the seam is Inpainted.
Although the default options should work well most of the time, sometimes it can help to alter the parameters that control the seam Inpainting. A wider seam and a blur setting of about 1/3 of the seam have been noted as producing consistently strong results (e.g. 96 wide and 16 blur - adds up to 32 blur with both sides). Seam strength of 0.7 is best for reducing hard seams.
* **Seam Size** - The size of the seam masked area. Set higher to make a larger mask around the seam.
* **Seam Blur** - The size of the blur that is applied on *each* side of the masked area.
* **Seam Strength** - The Image To Image Strength parameter used for the Inpainting generation that is applied to the seam area.
* **Seam Steps** - The number of generation steps that should be used to Inpaint the seam.
## Infill & Scaling
* **Scale Before Processing & W/H**: When generating images with a bounding box smaller than the optimized W/H of the model (e.g., 512x512 for SD1.5), this feature first generates at a larger size with the same aspect ratio, and then scales that image down to fill the selected area. This is particularly useful when inpainting very small details. Scaling is optional but is enabled by default.
* **Inpaint Replace**: When Inpainting, the default method is to utilize the existing RGB values of the Base layer to inform the generation process. If Inpaint Replace is enabled, noise is generated and blended with the existing pixels; roughly, the result is `replace * noise + (1 - replace) * original`, so an Inpaint Replace value of 1 completely replaces the original RGB values. This can help generate more variation from the pixels on the Base layer.
  * > When using Inpaint Replace you should use a higher Image To Image Strength value, especially at higher Inpaint Replace values.
* **Infill Method**: Invoke currently supports two methods for producing RGB values for use in the Outpainting process: Patchmatch and Tile. We believe that Patchmatch is the superior method; however, we provide support for Tile in case Patchmatch cannot be installed or is unavailable on your computer.
* **Tile Size**: The Tile method for Outpainting sources small portions of the original image and randomly places these into the areas being Outpainted. This value sets the size of those tiles; a rough sketch of the idea follows below.
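As a rough illustration of the Tile method (not the project's actual code), the following numpy sketch pastes randomly sampled tiles from the known part of an image into the unknown areas:

```python
import numpy as np

def tile_infill(image: np.ndarray, known: np.ndarray,
                tile_size: int = 32, seed: int = 0) -> np.ndarray:
    """Illustrative only: fill unknown areas by pasting tiles sampled at
    random from the known part of the image, as the Tile method does."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    out = image.copy()
    ys, xs = np.nonzero(known)              # coordinates of known pixels
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            if known[y:y + tile_size, x:x + tile_size].all():
                continue                    # tile already has real content
            th, tw = out[y:y + tile_size, x:x + tile_size].shape[:2]
            i = rng.integers(len(ys))       # pick a random known location
            sy, sx = min(ys[i], h - th), min(xs[i], w - tw)
            out[y:y + tile_size, x:x + tile_size] = image[sy:sy + th, sx:sx + tw]
    return out
```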
# Hot Keys
The Unified Canvas is a tool that excels when you use hotkeys. You can view the full list of keyboard shortcuts, updated with all new features, by clicking the Keyboard Shortcuts icon at the top right of the InvokeAI WebUI.


@ -303,8 +303,6 @@ The WebGUI is under rapid development. Check back regularly for updates!
| `--cors [CORS ...]` | Additional allowed origins, comma-separated |
| `--host HOST` | Web server: Host or IP to listen on. Set to 0.0.0.0 to accept traffic from other devices on your network. |
| `--port PORT` | Web server: Port to listen on |
| `--certfile CERTFILE` | Web server: Path to certificate file to use for SSL. Use together with --keyfile |
| `--keyfile KEYFILE` | Web server: Path to private key file to use for SSL. Use together with --certfile |
| `--gui` | Start InvokeAI GUI - This is the "desktop mode" version of the web app. It uses Flask to create a desktop app experience of the webserver. |
### Web Specific Features


@ -80,11 +80,12 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
## :octicons-package-dependencies-24: Installation
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
This fork is supported across multiple platforms. You can find individual
installation instructions below.
- :fontawesome-brands-linux: [Linux](installation/INSTALL_LINUX.md)
- :fontawesome-brands-windows: [Windows](installation/INSTALL_WINDOWS.md)
- :fontawesome-brands-apple: [Macintosh](installation/INSTALL_MAC.md)
## :fontawesome-solid-computer: Hardware Requirements
@ -119,36 +120,9 @@ You wil need one of the following:
```bash
(invokeai) ~/InvokeAI$ python scripts/invoke.py --full_precision
```
## :octicons-gift-24: InvokeAI Features
- [The InvokeAI Web Interface](features/WEB.md)
- [WebGUI hotkey reference guide](features/WEBUIHOTKEYS.md)
<!-- this link does not exist - [WebGUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md) -->
- [The Command Line Interface](features/CLI.md)
- [Image2Image](features/IMG2IMG.md)
- [Inpainting](features/INPAINTING.md)
- [Outpainting](features/OUTPAINTING.md)
- [Adding custom styles and subjects](features/CONCEPTS.md)
- [Upscaling and Face Reconstruction](features/POSTPROCESS.md)
<!-- separator -->
- [Generating Variations](features/VARIATIONS.md)
<!-- separator -->
- [Prompt Engineering](features/PROMPTS.md)
<!-- separator -->
- Miscellaneous
- [Embiggen upscaling](features/EMBIGGEN.md)
- [Other](features/OTHER.md)
## :octicons-log-16: Latest Changes
### v2.1.3 <small>(13 November 2022)</small>
- A choice of installer scripts that automate installation and configuration. See [Installation](https://github.com/invoke-ai/InvokeAI/blob/2.1.3-rc6/docs/installation/INSTALL.md).
- A streamlined manual installation process that works for both Conda and PIP-only installs. See [Manual Installation](https://github.com/invoke-ai/InvokeAI/blob/2.1.3-rc6/docs/installation/INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps, sampler, etc) in a `.invokeai` file. See [Client](https://github.com/invoke-ai/InvokeAI/blob/2.1.3-rc6/docs/features/CLI.md)
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.
### v2.1.0 <small>(2 November 2022)</small>
- [Inpainting](https://invoke-ai.github.io/InvokeAI/features/INPAINTING/)


@ -1,7 +1,9 @@
---
title: Overview
title: Installation Overview
---
## Installation
We offer several ways to install InvokeAI, each one suited to your
experience and preferences.
@ -16,14 +18,6 @@ experience and preferences.
work", don't have an interest in tinkering with it, and do not
care about upgrading to unreleased experimental features.
**Important Caveats**
- This script does not support AMD GPUs. For Linux AMD support,
please use the manual or source code installer methods.
- This script has difficulty on some Macintosh machines
that have previously been used for Python development due to
conflicting development tools versions. Mac developers may wish
to try the source code installer or one of the manual methods instead.
2. [Source code installer](INSTALL_SOURCE.md)
This is a script that will install InvokeAI and all its essential


@ -4,17 +4,12 @@ title: Docker
# :fontawesome-brands-docker: Docker
!!! warning "For end users"
## Before you begin
We highly recommend installing InvokeAI locally using [these instructions](index.md)
!!! tip "For developers"
For container-related development tasks or for enabling easy
deployment to other environments (on-premises or cloud), follow these
instructions.
For general use, install locally to leverage your machine's GPU.
- For end users: Install InvokeAI locally using the instructions for your OS.
- For developers: For container-related development tasks or for enabling easy
deployment to other environments (on-premises or cloud), follow these
instructions. For general use, install locally to leverage your machine's GPU.
## Why containers?
@ -42,19 +37,16 @@ another environment with NVIDIA GPUs on-premises or in the cloud.
#### Install [Docker](https://github.com/santisbon/guides#docker)
On the [Docker Desktop app](https://docs.docker.com/get-docker/), go to
Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
On the Docker Desktop app, go to Preferences, Resources, Advanced. Increase the
CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.
#### Get a Huggingface-Token
Besides the Docker Agent you will need an Account on
[huggingface.co](https://huggingface.co/join).
After you successfully register your account, go to
[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens), create
a token and copy it, since you will need it for the next step.
Go to [Hugging Face](https://huggingface.co/settings/tokens), create a token and
temporarily place it somewhere like an open text editor window (but don't save
it; just keep it open, as we will need it in the next step).
### Setup
@ -72,19 +64,14 @@ created in the last step.
Some suggestions of variables you may want to change besides the Token:
<figure markdown>
| Environment-Variable | Default value | Description |
| -------------------- | ----------------------------- | -------------------------------------------------------------------------------------------- |
| `HUGGINGFACE_TOKEN` | No default, but **required**! | This is the only **required** variable, without it you can't download the huggingface models |
| `PROJECT_NAME` | `invokeai` | affects the project folder and the tag and volume names |
| `VOLUMENAME` | `${PROJECT_NAME}_data` | Name of the Docker Volume where model files will be stored |
| `ARCH` | `x86_64` | can be changed to e.g. `aarch64` if you are using an ARM-based CPU |
| `INVOKEAI_TAG` | `${PROJECT_NAME}:${ARCH}` | the Container Repository / Tag which will be used |
| `PIP_REQUIREMENTS` | `requirements-lin-cuda.txt` | the requirements file to use (from `environments-and-requirements`) |
| `INVOKE_DOCKERFILE` | `docker-build/Dockerfile` | the Dockerfile which should be built, handy for development |
</figure>
| Environment-Variable | Default value | Description |
| ------------------------- | ----------------------------- | ---------------------------------------------------------------------------- |
| `HUGGINGFACE_TOKEN` | No default, but **required**! | This is the only **required** variable, without it you can't get the checkpoint |
| `ARCH` | x86_64 | change this if you are using an ARM-based CPU |
| `INVOKEAI_TAG` | invokeai-x86_64 | the Container Repository / Tag which will be used |
| `INVOKEAI_CONDA_ENV_FILE` | environment-lin-cuda.yml | since environment.yml wouldn't work with aarch |
| `INVOKEAI_GIT` | invoke-ai/InvokeAI | the repository to use |
| `INVOKEAI_BRANCH` | main | the branch to checkout |
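As a rough Python sketch of how defaults like the ones in the table compose (illustrative only; the build scripts themselves read these from the env file created during setup, and this is not the project's actual code):

```python
import os

# Illustrative only: derive the composite defaults shown in the table.
project_name = os.environ.get("PROJECT_NAME", "invokeai")
arch = os.environ.get("ARCH", "x86_64")
volume_name = os.environ.get("VOLUMENAME", f"{project_name}_data")
invokeai_tag = os.environ.get("INVOKEAI_TAG", f"{project_name}:{arch}")

# HUGGINGFACE_TOKEN is the only variable without a default.
token = os.environ.get("HUGGINGFACE_TOKEN")
if not token:
    raise SystemExit("HUGGINGFACE_TOKEN is required to download the models")
```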
#### Build the Image
@ -107,25 +94,25 @@ After the build process is done, you can run the container via the provided
./docker-build/run.sh
```
When used without arguments, the container will start the webserver and provide
When used without arguments, the container will start the website and provide
you the link to open it. But if you want to use some other parameters you can
also do so.
!!! example "run script example"
!!! example
```bash
./docker-build/run.sh "banana sushi" -Ak_lms -S42 -s10
docker-build/run.sh --from_file tests/validate_pr_prompt.txt
```
This would generate the legendary "banana sushi" with Seed 42, k_lms Sampler and 10 steps.
The output folder is located on the volume which is also used to store the model.
Find out more about available CLI-Parameters at [features/CLI.md](../../features/CLI/#arguments)
Find out more about available CLI-Parameters at [features/CLI.md](../features/CLI.md)
---
!!! warning "Deprecated"
From here on you will find the rest of the previous Docker docs, which will still
provide some useful information.
## Usage (time to have fun)


@ -2,41 +2,30 @@
title: InvokeAI Installer
---
The InvokeAI installer is a shell script that will install InvokeAI onto a stock
computer running recent versions of Linux, MacOSX or Windows. It will leave you
with a version that runs a stable version of InvokeAI. When a new version of
InvokeAI is released, you will download and reinstall the new version.
If you wish to tinker with unreleased versions of InvokeAI that introduce
potentially unstable new features, you should consider using the
[source installer](INSTALL_SOURCE.md) or one of the
[manual install](INSTALL_MANUAL.md) methods.
**Important Caveats**
- This script does not support AMD GPUs. For Linux AMD support,
please use the manual or source code installer methods.
Before you begin, make sure that you meet the [hardware
requirements](index.md#Hardware_Requirements) and have the appropriate
GPU drivers installed. In particular, if you are a Linux user with an
AMD GPU installed, you may need to install the [ROCm
driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
- This script has difficulty on some Macintosh machines
that have previously been used for Python development due to
conflicting development tools versions. Mac developers may wish
to try the source code installer or one of the manual methods instead.
!!! todo

    Before you begin, make sure that you meet the
    [hardware requirements](/#hardware-requirements) and have the
    appropriate GPU drivers installed. In particular, if you are a Linux user with
    an AMD GPU installed, you may need to install the
    [ROCm-driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.
## Steps to Install
1. Download the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest) of
   InvokeAI's installer for your platform
2. Place the downloaded package someplace where you have plenty of HDD space,
and have full permissions (i.e. `~/` on Lin/Mac; your home folder on Windows)
@ -45,8 +34,7 @@ recommended model weights files.
4. Open the extracted 'InvokeAI' folder
5. Double-click 'install.bat' (Windows), or 'install.sh' (Lin/Mac) (or run from a terminal)
6. Follow the prompts
@ -55,10 +43,10 @@ recommended model weights files.
## Troubleshooting
If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.


@ -14,7 +14,8 @@ download the notebook from the link above and load it up in VSCode
(with the appropriate extensions installed)/Jupyter/JupyterLab and
start running the cells one-by-one.
!!! Note "you will need NVIDIA drivers, Python 3.10, and Git installed beforehand"
Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehand.
## Walkthrough
@ -24,4 +25,4 @@ start running the cells one-by-one.
### Updating to the development version
## Troubleshooting


@ -2,54 +2,51 @@
title: Manual Installation
---
<figure markdown>
# :fontawesome-brands-linux: Linux | :fontawesome-brands-apple: macOS | :fontawesome-brands-windows: Windows
</figure>
!!! warning "This is for advanced Users"
who are already experienced with using conda or pip
# :fontawesome-brands-linux: Linux
# :fontawesome-brands-apple: macOS
# :fontawesome-brands-windows: Windows
## Introduction
You have two choices for manual installation, the [first one](#Conda_method)
based on the Anaconda3 package manager (`conda`), and
[a second one](#PIP_method) which uses basic Python virtual environment (`venv`)
commands and the PIP package manager. Both methods require you to enter commands
on the terminal, also known as the "console".
You have two choices for manual installation, the [first
one](#Conda_method) based on the Anaconda3 package manager (`conda`),
and [a second one](#PIP_method) which uses basic Python virtual
environment (`venv`) commands and the PIP package manager. Both
methods require you to enter commands on the command-line shell, also
known as the "console".
On Windows systems you are encouraged to install and use the
[Powershell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice features such as
command-line completion.
### Conda method
1. Check that your system meets the
   [hardware requirements](index.md#Hardware_Requirements) and has the
   appropriate GPU drivers installed. In particular, if you are a Linux user
   with an AMD GPU installed, you may need to install the
   [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
InvokeAI does not yet support Windows machines with AMD GPUs due to the lack
of ROCm driver support on this platform.
To confirm that the appropriate drivers are installed, run `nvidia-smi` on
NVIDIA/CUDA systems, and `rocm-smi` on AMD systems. These should return
information about the installed video card.
Macintosh users with MPS acceleration, or anybody with a CPU-only system,
can skip this step.
2. You will need to install Anaconda3 and Git if they are not already
available. Use your operating system's preferred package manager, or
download the installers manually. You can find them here:
2. You will need to install Anaconda3 and Git if they are not already
available. Use your operating system's preferred installer, or
download installers from the following URLs:
- [Anaconda3](https://www.anaconda.com/)
- [git](https://git-scm.com/downloads)
- Anaconda3 (https://www.anaconda.com/)
- git (https://git-scm.com/downloads)
3. Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from
GitHub:
3. Copy the InvokeAI source code from GitHub using `git`:
```bash
git clone https://github.com/invoke-ai/InvokeAI.git
@ -58,158 +55,122 @@ command-line completion.
This will create an InvokeAI folder where you will follow the rest of the
steps.
4. Enter the newly-created InvokeAI folder:
3. Enter the newly-created InvokeAI folder. From this step forward make sure
that you are working in the InvokeAI directory!
```bash
cd InvokeAI
```
4. Select the appropriate environment file:
From this step forward make sure that you are working in the InvokeAI
directory!
5. Select the appropriate environment file:
We have created a series of environment files suited for different operating
systems and GPU hardware. They are located in the
`environments-and-requirements` directory:
<figure markdown>
```bash
environment-lin-amd.yml # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml # Linux with an NVIDIA CUDA GPU
environment-mac.yml # Macintoshes with MPS acceleration
environment-win-cuda.yml # Windows with an NVIDIA CUDA GPU
```
| filename | OS |
| :----------------------: | :----------------------------: |
| environment-lin-amd.yml | Linux with an AMD (ROCm) GPU |
| environment-lin-cuda.yml | Linux with an NVIDIA CUDA GPU |
| environment-mac.yml | Macintosh |
| environment-win-cuda.yml | Windows with an NVIDIA CUDA GPU |
Select the appropriate environment file, and make a link to it
from `environment.yml` in the top-level InvokeAI directory. The
command to do this from the top-level directory is:
</figure>
Choose the appropriate environment file for your system and link or copy it
to `environment.yml` in InvokeAI's top-level directory. To do so, run the
following command from the repository root:
!!! Example ""
=== "Macintosh and Linux"
!!! todo "Replace `xxx` and `yyy` with the appropriate OS and GPU codes as seen in the table above"
```bash
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
```
When this is done, confirm that a file `environment.yml` has been linked in
the InvokeAI root directory and that it points to the correct file in the
`environments-and-requirements`.
```bash
ls -la
```
=== "Windows"
!!! todo " Since it requires admin privileges to create links, we will use the copy command to create your `environment.yml`"
```cmd
copy environments-and-requirements\environment-win-cuda.yml environment.yml
```
Afterwards verify that the file `environment.yml` has been created, either via the
explorer or by using the command `dir` from the terminal
```cmd
dir
```
!!! warning "Do not try to run conda on directly on the subdirectory environments file. This won't work. Instead, copy or link it to the top-level directory as shown."
!!! todo "Macintosh and Linux"
6. Create the conda environment:
```bash
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
```
```bash
conda env update
```
Replace `xxx` and `yyy` with the appropriate OS and GPU codes.
This will create a new environment named `invokeai` and install all InvokeAI
dependencies into it. If something goes wrong you should take a look at
[troubleshooting](#troubleshooting).
!!! todo "Windows requires admin privileges to make links, so we use the copy (cp) command"
```bash
cp environments-and-requirements\environment-win-cuda.yml environment.yml
```
7. Activate the `invokeai` environment:
When this is done, confirm that a file `environment.yml` has been created in
the InvokeAI root directory and that it points to the correct file in the
`environments-and-requirements`.
In order to use the newly created environment you will first need to
activate it:
4. Run conda:
```bash
conda activate invokeai
```
```bash
conda env update
```
Your command-line prompt should change to indicate that `invokeai` is active
by prepending `(invokeai)`.
This will create a new environment named `invokeai` and install all
InvokeAI dependencies into it.
8. Pre-Load the model weights files:
If something goes wrong at this point, see
[troubleshooting](#Troubleshooting).
!!! tip
5. Activate the `invokeai` environment:
If you have already downloaded the weights file(s) for another Stable
Diffusion distribution, you may skip this step (by selecting "skip" when
prompted) and configure InvokeAI to use the previously-downloaded files. The
process for this is described [here](INSTALLING_MODELS.md).
```bash
conda activate invokeai
```
```bash
python scripts/preload_models.py
```
Your command-line prompt should change to indicate that `invokeai` is active.
The script `preload_models.py` will interactively guide you through the
process of downloading and installing the weights files needed for InvokeAI.
Note that the main Stable Diffusion weights file is protected by a license
agreement that you have to agree to. The script will list the steps you need
to take to create an account on the site that hosts the weights files,
accept the agreement, and provide an access token that allows InvokeAI to
legally download and install the weights files.
6. Load the model weights files:
If you get an error message about a module not being installed, check that
the `invokeai` environment is active and if not, repeat step 5.
```bash
python scripts/preload_models.py
```
9. Run the command-line or the web interface:
(Windows users should use the backslash instead of the slash)
!!! example ""
The script `preload_models.py` will interactively guide you through
downloading and installing the weights files needed for
InvokeAI. Note that the main Stable Diffusion weights file is
protected by a license agreement that you have to agree to. The
script will list the steps you need to take to create an account on
the site that hosts the weights files, accept the agreement, and
provide an access token that allows InvokeAI to legally download
and install the weights files.
!!! warning "Make sure that the conda environment is activated, which should create `(invokeai)` in front of your prompt!"
If you have already downloaded the weights file(s) for another
Stable Diffusion distribution, you may skip this step (by selecting
"skip" when prompted) and configure InvokeAI to use the
previously-downloaded files. The process for this is described in
[INSTALLING_MODELS.md].
=== "CLI"
If you get an error message about a module not being installed,
check that the `invokeai` environment is active and if not, repeat
step 5.
```bash
python scripts/invoke.py
```
7. Run the command-line interface or the web interface:
=== "local Webserver"
```bash
python scripts/invoke.py # command line
python scripts/invoke.py --web # web interface
```
```bash
python scripts/invoke.py --web
```
(Windows users replace backslash with forward slash)
If you choose to run the web interface, point your browser at
http://localhost:9090 in order to load the GUI.
=== "Public Webserver"
8. Render away!
```bash
python scripts/invoke.py --web --host 0.0.0.0
```
Browse the features listed in the [Stable Diffusion Toolkit
Docs](https://invoke-ai.git) to learn about all the things you can
do with InvokeAI.
If you choose to run the web interface, point your browser at
http://localhost:9090 in order to load the GUI.
Note that some GPUs are slow to warm up. In particular, when using
an AMD card with the ROCm driver, you may have to wait for over a
minute the first time you try to generate an image. Fortunately, after
the warm up period rendering will be fast.
10. Render away!
Browse the [features](../features/CLI.md) section to learn about all the things you
can do with InvokeAI.
11. Subsequently, to relaunch the script, be sure to run "conda activate
invokeai", enter the `InvokeAI` directory, and then launch the invoke
script. If you forget to activate the 'invokeai' environment, the script
will fail with multiple `ModuleNotFound` errors.
9. Subsequently, to relaunch the script, be sure to run "conda
activate invokeai", enter the `InvokeAI` directory, and then launch
the invoke script. If you forget to activate the 'invokeai'
environment, the script will fail with multiple `ModuleNotFound`
errors.
## Updating to newer versions of the script
@ -223,207 +184,185 @@ conda env update
python scripts/preload_models.py --no-interactive #optional
```
This will bring your local copy into sync with the remote one. The last step may
be needed to take advantage of new features or released models. The
`--no-interactive` flag will prevent the script from prompting you to download
the big Stable Diffusion weights files.
## pip Install
To install InvokeAI with only the PIP package manager, please follow these
steps:
1. Make sure you are using Python 3.9 or higher. The rest of the install
procedure depends on this:
```bash
python -V
```
2. Install the `virtualenv` tool if you don't have it already:
```bash
pip install virtualenv
```
3. From within the InvokeAI top-level directory, create and activate a
virtual environment named `invokeai`:
```bash
virtualenv invokeai
source invokeai/bin/activate
```
4. Pick the correct `requirements*.txt` file for your hardware and
operating system.
We have created a series of environment files suited for different operating
systems and GPU hardware. They are located in the
`environments-and-requirements` directory:
<figure markdown>
| filename | OS |
| :---------------------------------: | :-------------------------------------------------------------: |
| requirements-lin-amd.txt | Linux with an AMD (ROCm) GPU |
| requirements-lin-arm64.txt | Linux running on arm64 systems |
| requirements-lin-cuda.txt | Linux with an NVIDIA (CUDA) GPU |
| requirements-mac-mps-cpu.txt | Macintoshes with MPS acceleration |
| requirements-lin-win-colab-cuda.txt | Windows with an NVIDIA (CUDA) GPU<br>(supports Google Colab too) |
</figure>
Select the appropriate requirements file, and make a link to it from
`requirements.txt` in the top-level InvokeAI directory. The command to do
this from the top-level directory is:
!!! example ""
=== "Macintosh and Linux"
!!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes."
```bash
ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
```
=== "Windows"
!!! info "on Windows, admin privileges are required to make links, so we use the copy command instead"
```cmd
copy environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
```
!!! warning
Please do not link or copy `environments-and-requirements/requirements-base.txt`.
This is a base requirements file that does not have the platform-specific
libraries. Also, be sure to link or copy the platform-specific file to
a top-level file named `requirements.txt` as shown here. Running pip on
a requirements file in a subdirectory will not work as expected.
When this is done, confirm that a file named `requirements.txt` has been
created in the InvokeAI root directory and that it points to the correct
file in `environments-and-requirements`.
5. Run PIP
Be sure that the `invokeai` environment is active before doing this:
```bash
pip install --prefer-binary -r requirements.txt
```

---

```bash
requirements-lin-amd.txt            # Linux with an AMD (ROCm) GPU
requirements-lin-arm64.txt          # Linux running on arm64 systems
requirements-lin-cuda.txt           # Linux with an NVIDIA (CUDA) GPU
requirements-mac-mps-cpu.txt        # Macintoshes with MPS acceleration
requirements-lin-win-colab-cuda.txt # Windows with an NVIDIA (CUDA) GPU
                                    # (supports Google Colab too)
```
Select the appropriate requirements file, and make a link to it
from `requirements.txt` in the top-level InvokeAI directory. The
command to do this from the top-level directory is:
!!! todo "Macintosh and Linux"
```bash
ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
```
Replace `xxx` and `yyy` with the appropriate OS and GPU codes.
!!! todo "Windows requires admin privileges to make links, so we use the copy (cp) command instead"
```bash
cp environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
```
Note that the order of arguments is reversed between the Linux/Mac and Windows
commands!
Please do not link directly to the file
`environments-and-requirements/requirements.txt`. This is a base requirements
file that does not have the platform-specific libraries.
When this is done, confirm that a file `requirements.txt` has been
created in the InvokeAI root directory and that it points to the
correct file in the `environments-and-requirements`.
## Troubleshooting
Here are some common issues and their suggested solutions.
### Conda
### Conda install
#### Conda fails before completing `conda update`
1. Conda fails before completing `conda update`:
The usual source of these errors is a package incompatibility. While we have
tried to minimize these, over time packages get updated and sometimes introduce
incompatibilities.
We suggest that you search
[Issues](https://github.com/invoke-ai/InvokeAI/issues) or the "bugs-and-support"
channel of the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).
You may also try to install the broken packages manually using PIP. To do this,
activate the `invokeai` environment, and run `pip install` with the name and
version of the package that is causing the incompatibility. For example:
```bash
pip install test-tube==0.7.5
```
You can keep doing this until all requirements are satisfied and the `invoke.py`
script runs without errors. Please report to
[Issues](https://github.com/invoke-ai/InvokeAI/issues) what you were able to do
to work around the problem so that others can benefit from your investigation.
### Create Conda Environment fails on MacOS
2. `preload_models.py` or `invoke.py` crashes at an early stage
If creating the conda environment fails with an lmdb error, this is most likely caused by Clang.
Run `brew config` to see which Clang is installed on your Mac. If Clang isn't installed, that's what is causing the error.
Start by installing the additional Xcode command line tools, followed by `brew install llvm`.
This is usually due to an incomplete or corrupted Conda install.
Make sure you have linked to the correct environment file and run
`conda update` again.
```bash
xcode-select --install
brew install llvm
```
If the problem persists, a more extreme measure is to clear Conda's
caches and remove the `invokeai` environment:
If `brew config` shows that Clang is installed, update to the latest llvm and try creating the environment again.
```bash
conda deactivate
conda env remove -n invokeai
conda clean -a
conda update
```
#### `preload_models.py` or `invoke.py` crashes at an early stage
This removes all cached library files, including ones that may have
been corrupted somehow. (This is not supposed to happen, but does
anyway).
3. `invoke.py` crashes at a later stage.
This is usually due to an incomplete or corrupted Conda install. Make sure you
have linked to the correct environment file and run `conda update` again.
If the CLI or web interface had been working OK, but something
unexpected happens later on during the session, you've encountered
a code bug that is probably unrelated to an install issue. Please
search [Issues](https://github.com/invoke-ai/InvokeAI/issues), file
a bug report, or ask for help on [Discord](https://discord.gg/ZmtBAhwWhy).
If the problem persists, a more extreme measure is to clear Conda's caches and
remove the `invokeai` environment:
4. My renders are running very slowly!
```bash
conda deactivate
conda env remove -n invokeai
conda clean -a
conda update
```
You may have installed the wrong torch (machine learning) package,
and the system is running on CPU rather than the GPU. To check,
look at the log messages that appear when `invoke.py` is first
starting up. One of the earlier lines should say `Using device type
cuda`. On AMD systems, it will also say "cuda", and on Macintoshes,
it should say "mps". If instead the message says it is running on
"cpu", then you may need to install the correct torch library.
This removes all cached library files, including ones that may have been
corrupted somehow. (This is not supposed to happen, but does anyway).
You may be able to fix this by installing a different torch
library. Here are the magic incantations for Conda and PIP.
#### `invoke.py` crashes at a later stage
!!! todo "For CUDA systems"
If the CLI or web interface had been working OK, but something unexpected happens
later on during the session, you've encountered a code bug that is probably
unrelated to an install issue. Please search
[Issues](https://github.com/invoke-ai/InvokeAI/issues), file a bug report, or
ask for help on [Discord](https://discord.gg/ZmtBAhwWhy).
(conda)
```bash
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
```
#### My renders are running very slowly
(pip)
```bash
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```
You may have installed the wrong torch (machine learning) package, and the
system is running on CPU rather than the GPU. To check, look at the log messages
that appear when `invoke.py` is first starting up. One of the earlier lines
should say `Using device type cuda`. On AMD systems, it will also say "cuda",
and on Macintoshes, it should say "mps". If instead the message says it is
running on "cpu", then you may need to install the correct torch library.
!!! todo "For AMD systems"
You may be able to fix this by installing a different torch library. Here are
the magic incantations for Conda and PIP.
(conda)
```bash
conda activate invokeai
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
```
!!! todo "For CUDA systems"
(pip)
```bash
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
```
- conda
```bash
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
```
- pip
```bash
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```
!!! todo "For AMD systems"
- conda
```bash
conda activate invokeai
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
```
- pip
```bash
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
```
More information and troubleshooting tips can be found at https://pytorch.org.


@ -1,137 +1,148 @@
---
title: Source Installer
title: The InvokeAI Source Installer
---
# The InvokeAI Source Installer
## Introduction
The source installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, MacOSX or Windows. It will leave you with a version that runs a
stable version of InvokeAI with the option to upgrade to experimental versions
later. It is not as foolproof as the [InvokeAI installer](INSTALL_INVOKE.md).
Before you begin, make sure that you meet the
[hardware requirements](index.md#Hardware_Requirements) and have the
appropriate GPU drivers installed. In particular, if you are a Linux user with
an AMD GPU installed, you may need to install the
[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.
## Walk through
Though there are multiple steps, there really is only one click involved to kick
off the process.
1. The source installer is distributed in ZIP files. Go to the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
   look for a series of files named:

   - invokeAI-src-installer-mac.zip
   - invokeAI-src-installer-windows.zip
   - invokeAI-src-installer-linux.zip

   Download the one that is appropriate for your operating system.
2. Unpack the zip file into a directory that has at least 18G of free space. Do
   _not_ unpack into a directory that has an earlier version of InvokeAI.

   This will create a new directory named "InvokeAI". This example shows how
   this would look using the `unzip` command-line tool, but you may use any
   graphical or command-line Zip extractor:
```cmd
C:\Documents\Linco> unzip invokeAI-windows.zip
Archive: C:\Linco\Downloads\invokeAI-linux.zip
creating: invokeAI\
inflating: invokeAI\install.bat
inflating: invokeAI\readme.txt
```
3. If you are using a desktop GUI, double-click the installer file. It will be
   named `install.bat` on Windows systems and `install.sh` on Linux and
   Macintosh systems.
4. Alternatively, from the command line, run the shell script or .bat file:

```cmd
C:\Documents\Linco> cd invokeAI
C:\Documents\Linco\invokeAI> install.bat
```
5. Sit back and let the install script work. It will install various binary
   requirements including Conda, Git and Python, then download the current
   InvokeAI code and install it along with its dependencies.
6. After installation completes, the installer will launch a script called
   `preload_models.py`, which will guide you through the first-time process of
   selecting one or more Stable Diffusion model weights files, downloading and
   configuring them.
Note that the main Stable Diffusion weights file is protected by a license
agreement that you must agree to in order to use it. The script will list the
steps you need to take to create an account on the official site that hosts
the weights files, accept the agreement, and provide an access token that
allows InvokeAI to legally download and install the weights files.
If you have already downloaded the weights file(s) for another Stable
Diffusion distribution, you may skip this step (by selecting "skip" when
prompted) and configure InvokeAI to use the previously-downloaded files. The
process for this is described in [Installing Models](INSTALLING_MODELS.md).
7. The script will now exit and you'll be ready to generate some images. The
   invokeAI directory will contain numerous files. Look for a shell script
   named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows). Launch the script
   by double-clicking it or typing its name at the command-line:

```cmd
C:\Documents\Linco> cd invokeAI
C:\Documents\Linco\invokeAI> invoke.bat
```
The `invoke.bat` (`invoke.sh`) script will give you the choice of starting (1)
the command-line interface, or (2) the web GUI. If you start the latter, you can
load the user interface by pointing your browser at http://localhost:9090.
The `invoke` script also offers you a third option labeled "open
the developer console". If you choose this option, you will be
dropped into a command-line interface in which you can run python
commands directly, access developer tools, and launch InvokeAI
with customized options. To do the latter, you would launch the
script `scripts/invoke.py` as shown in this example:
The `invoke` script also offers you a third option labeled "open the developer
console". If you choose this option, you will be dropped into a command-line
interface in which you can run python commands directly, access developer tools,
and launch InvokeAI with customized options. To do the latter, you would launch
the script `scripts/invoke.py` as shown in this example:
```bash
python scripts\invoke.py --web --max_load_models=3 \
--model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos
```
```cmd
python scripts/invoke.py --web --max_load_models=3 \
--model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos
```
These options are described in detail in the
[Command-Line Interface](../features/CLI.md) documentation.
## Updating to newer versions
This section describes how to update InvokeAI to new versions of the software.
### Updating the stable version
This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) scripts. This will fetch the latest
release and re-run the `preload_models` script to download any updated models
files that may be needed. You can also use this to add additional models that
you did not select at installation time.
### Updating to the development version
There may be times that there is a feature in the `development` branch of
InvokeAI that you'd like to take advantage of. Or perhaps there is a branch that
corrects an annoying bug. To do this, you will use the developer's console.
From within the invokeAI directory, run the command `invoke.sh` (Linux/Mac) or
`invoke.bat` (Windows) and select option (3) to open the developers console.
Then run the following command to get the `development` branch:
```bash
git checkout development
@ -139,18 +150,19 @@ git pull
conda env update
```
You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `preload_models.py`. This happens relatively infrequently. To do this,
simply open up the developer's console again and type
`python scripts/preload_models.py`.
## Troubleshooting
If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.


@ -13,19 +13,6 @@ We thank them for all of their time and hard work.
- [Lincoln D. Stein](mailto:lincoln.stein@gmail.com)
## **Current core team**
* lstein (Lincoln Stein) - Co-maintainer
* blessedcoolant - Co-maintainer
* hipsterusername (Kent Keirsey) - Product Manager
* psychedelicious - Web Team Leader
* Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard
* damian0815 - Attention Systems and Gameplay Engineer
* mauwii (Matthias Wild) - Continuous integration and product maintenance engineer
* Netsvetaev (Artur Netsvetaev) - UI/UX Developer
* tildebyte - general gadfly and resident (self-appointed) know-it-all
* keturn - Lead for Diffusers port
## **Contributions by**
- [Sean McLellan](https://github.com/Oceanswave)


@ -2,7 +2,6 @@ name: invokeai
channels:
- pytorch
- conda-forge
- defaults
dependencies:
- albumentations=0.4.3
- cudatoolkit
@ -32,8 +31,8 @@ dependencies:
- pip:
- dependency_injector==4.40.0
- getpass_asterisk
- gfpgan
- omegaconf==2.1.1
- picklescan
- pyreadline3
- realesrgan
- taming-transformers-rom1504
@ -41,6 +40,6 @@ dependencies:
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- git+https://github.com/invoke-ai/GFPGAN@basicsr-1.4.2#egg=gfpgan
- -e git+https://github.com/invoke-ai/PyPatchMatch@0.1.4#egg=pypatchmatch
- -e .
variables:
PYTORCH_ENABLE_MPS_FALLBACK: 1


@ -4,7 +4,7 @@ channels:
- conda-forge
- defaults
dependencies:
- python=3.9.*
- python>=3.9
- pip=22.2.2
- numpy=1.23.3
- pip:
@ -18,12 +18,12 @@ dependencies:
- flask_cors==3.0.10
- flask_socketio==5.3.0
- getpass_asterisk
- gfpgan
- imageio-ffmpeg==0.4.2
- imageio==2.9.0
- kornia==0.6.0
- omegaconf==2.2.3
- opencv-python==4.5.5.64
- picklescan
- pillow==9.2.0
- pudb==2019.2
- pyreadline3
@ -33,7 +33,6 @@ dependencies:
- streamlit==1.12.0
- taming-transformers-rom1504
- test-tube>=0.7.5
- tqdm
- torch
- torch-fidelity==0.3.0
- torchaudio
@ -41,8 +40,6 @@ dependencies:
- torchvision
- transformers==4.21.3
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- git+https://github.com/invoke-ai/GFPGAN@basicsr-1.4.2#egg=gfpgan
- -e git+https://github.com/invoke-ai/PyPatchMatch@0.1.4#egg=pypatchmatch
- -e .


@ -4,7 +4,7 @@ channels:
- conda-forge
- defaults
dependencies:
- python=3.9.*
- python>=3.9
- pip=22.2.2
- numpy=1.23.3
- torchvision=0.13.1
@ -21,12 +21,12 @@ dependencies:
- flask_cors==3.0.10
- flask_socketio==5.3.0
- getpass_asterisk
- gfpgan
- imageio-ffmpeg==0.4.2
- imageio==2.9.0
- kornia==0.6.0
- omegaconf==2.2.3
- opencv-python==4.5.5.64
- picklescan
- pillow==9.2.0
- pudb==2019.2
- pyreadline3
@ -40,8 +40,6 @@ dependencies:
- torchmetrics==0.7.0
- transformers==4.21.3
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- git+https://github.com/invoke-ai/GFPGAN@basicsr-1.4.2#egg=gfpgan
- -e git+https://github.com/invoke-ai/PyPatchMatch@0.1.4#egg=pypatchmatch
- -e .


@ -52,14 +52,13 @@ dependencies:
- transformers=4.23
- pip:
- getpass_asterisk
- picklescan
- taming-transformers-rom1504
- test-tube==0.7.5
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
- git+https://github.com/invoke-ai/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/Real-ESRGAN.git#egg=realesrgan
- git+https://github.com/invoke-ai/GFPGAN.git#egg=gfpgan
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- git+https://github.com/invoke-ai/GFPGAN@basicsr-1.4.2#egg=gfpgan
- -e git+https://github.com/invoke-ai/PyPatchMatch@0.1.4#egg=pypatchmatch
- -e .
variables:
PYTORCH_ENABLE_MPS_FALLBACK: 1


@ -4,7 +4,7 @@ channels:
- conda-forge
- defaults
dependencies:
- python=3.10.*
- python>=3.9
- pip=22.2.2
- numpy=1.23.3
- torchvision=0.13.1
@ -22,12 +22,12 @@ dependencies:
- flask_cors==3.0.10
- flask_socketio==5.3.0
- getpass_asterisk
- gfpgan
- imageio-ffmpeg==0.4.2
- imageio==2.9.0
- kornia==0.6.0
- omegaconf==2.2.3
- opencv-python==4.5.5.64
- picklescan
- pillow==9.2.0
- pudb==2019.2
- pyreadline3
@ -43,6 +43,4 @@ dependencies:
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- git+https://github.com/invoke-ai/GFPGAN@basicsr-1.4.1#egg=gfpgan
- -e git+https://github.com/invoke-ai/PyPatchMatch@0.1.4#egg=pypatchmatch
- -e .


@ -4,12 +4,12 @@ dependency_injector==4.40.0
diffusers
einops
eventlet
facexlib
flask==2.1.3
flask_cors==3.0.10
flask_socketio==5.3.0
flaskwebgui==0.3.7
getpass_asterisk
gfpgan
huggingface-hub
imageio
imageio-ffmpeg
@ -23,19 +23,14 @@ pudb
pyreadline3
pytorch-lightning==1.7.7
realesrgan
requests==2.25.1
scikit-image>=0.19
send2trash
streamlit
taming-transformers-rom1504
test-tube>=0.7.5
test-tube
torch-fidelity
torchmetrics
transformers==4.21.*
picklescan
git+https://github.com/invoke-ai/GFPGAN@basicsr-1.4.1#egg=gfpgan ; platform_system == 'Windows'
git+https://github.com/invoke-ai/GFPGAN@basicsr-1.4.2#egg=gfpgan ; platform_system != 'Windows'
git+https://github.com/openai/CLIP.git@main#egg=clip
git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
git+https://github.com/invoke-ai/PyPatchMatch@0.1.4#egg=pypatchmatch


@@ -1,5 +1,6 @@
-r environments-and-requirements/requirements-base.txt
# Get hardware-appropriate torch/torchvision
# Get hardware-appropriate torch/torchvision
--extra-index-url https://download.pytorch.org/whl/rocm5.1.1 --trusted-host https://download.pytorch.org
torch
torchvision


@@ -1,5 +1,5 @@
-r environments-and-requirements/requirements-base.txt
grpcio<1.51.0
protobuf==3.19.6
torch<1.13.0
torchvision<0.14.0


@@ -1,5 +1,6 @@
-r environments-and-requirements/requirements-base.txt
# Get hardware-appropriate torch/torchvision
# Get hardware-appropriate torch/torchvision
--extra-index-url https://download.pytorch.org/whl/cu116 --trusted-host https://download.pytorch.org
basicsr==1.4.1
torch==1.12.1


@@ -5,7 +5,7 @@
- `python scripts/dream.py --web` serves both frontend and backend at
http://localhost:9090
## Evironment
## Environment
Install [node](https://nodejs.org/en/download/) (includes npm) and optionally
[yarn](https://yarnpkg.com/getting-started/install).
@@ -15,7 +15,7 @@ packages.
## Dev
1. From `frontend/`, run `npm dev` / `yarn dev` to start the dev server.
1. From `frontend/`, run `npm run dev` / `yarn dev` to start the dev server.
2. Run `python scripts/dream.py --web`.
3. Navigate to the dev server address e.g. `http://localhost:5173/`.

Vendored build artifacts, all new files whose diffs are suppressed because one or more lines are too long:

frontend/dist/assets/index.1fc0290b.js (501 lines)
frontend/dist/assets/index.4488003f.js (829 lines)
frontend/dist/assets/index.8eb7dfe4.js (501 lines)
frontend/dist/assets/index.ae92a637.js (690 lines)
frontend/dist/assets/index.cc049b93.js (517 lines)
frontend/dist/assets/index.e2832fd4.js (517 lines)

(Three further asset diffs, whose names are not shown, are likewise suppressed.)

@@ -6,8 +6,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="./assets/favicon.0d253ced.ico" />
<script type="module" crossorigin src="./assets/index.faf4c870.js"></script>
<link rel="stylesheet" href="./assets/index.c609c0c8.css">
<script type="module" crossorigin src="./assets/index.1fc0290b.js"></script>
<link rel="stylesheet" href="./assets/index.40a72c80.css">
</head>
<body>


@@ -1,23 +0,0 @@
{
"eslintConfig": {
"extends": [
"eslint:recommended",
"plugin:@typescript-eslint/recommended",
"plugin:react-hooks/recommended"
],
"parser": "@typescript-eslint/parser",
"plugins": ["@typescript-eslint", "eslint-plugin-react-hooks"],
"root": true,
"settings": {
"import/resolver": {
"node": {
"paths": ["src"],
"extensions": [".js", ".jsx", ".ts", ".tsx"]
}
}
},
"rules": {
"react/jsx-filename-extension": [1, { "extensions": [".tsx", ".ts"] }]
}
}
}

frontend/package-lock.json (generated, new file, 10651 lines): diff suppressed because it is too large.


@@ -7,13 +7,11 @@
"dev": "vite dev",
"build": "tsc && vite build",
"build-dev": "tsc && vite build -m development",
"preview": "vite preview",
"postinstall": "patch-package"
"preview": "vite preview"
},
"dependencies": {
"@chakra-ui/icons": "^2.0.10",
"@chakra-ui/react": "^2.3.1",
"@emotion/cache": "^11.10.5",
"@emotion/react": "^11.10.4",
"@emotion/styled": "^11.10.4",
"@radix-ui/react-context-menu": "^2.0.1",
@@ -31,18 +29,14 @@
"react-colorful": "^5.6.1",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.2",
"react-hotkeys-hook": "4.0.2",
"react-hotkeys-hook": "^3.4.7",
"react-icons": "^4.4.0",
"react-konva": "^18.2.3",
"react-konva-utils": "^0.3.0",
"react-redux": "^8.0.2",
"react-transition-group": "^4.4.5",
"react-zoom-pan-pinch": "^2.1.3",
"redux-deep-persist": "^1.0.6",
"redux-persist": "^6.0.0",
"socket.io": "^4.5.2",
"socket.io-client": "^4.5.2",
"use-image": "^1.1.0",
"uuid": "^9.0.0",
"yarn": "^1.22.19"
},
@@ -57,13 +51,10 @@
"eslint": "^8.23.0",
"eslint-plugin-prettier": "^4.2.1",
"eslint-plugin-react-hooks": "^4.6.0",
"patch-package": "^6.5.0",
"postinstall-postinstall": "^2.1.0",
"sass": "^1.55.0",
"tsc-watch": "^5.0.3",
"typescript": "^4.6.4",
"vite": "^3.0.7",
"vite-plugin-eslint": "^1.8.1",
"vite-tsconfig-paths": "^3.5.2"
"vite-plugin-eslint": "^1.8.1"
}
}


@@ -1,24 +0,0 @@
diff --git a/node_modules/redux-deep-persist/lib/types.d.ts b/node_modules/redux-deep-persist/lib/types.d.ts
index b67b8c2..7fc0fa1 100644
--- a/node_modules/redux-deep-persist/lib/types.d.ts
+++ b/node_modules/redux-deep-persist/lib/types.d.ts
@@ -35,6 +35,7 @@ export interface PersistConfig<S, RS = any, HSS = any, ESS = any> {
whitelist?: Array<string>;
transforms?: Array<Transform<HSS, ESS, S, RS>>;
throttle?: number;
+ debounce?: number;
migrate?: PersistMigrate;
stateReconciler?: false | StateReconciler<S>;
getStoredState?: (config: PersistConfig<S, RS, HSS, ESS>) => Promise<PersistedState>;
diff --git a/node_modules/redux-deep-persist/src/types.ts b/node_modules/redux-deep-persist/src/types.ts
index 398ac19..cbc5663 100644
--- a/node_modules/redux-deep-persist/src/types.ts
+++ b/node_modules/redux-deep-persist/src/types.ts
@@ -91,6 +91,7 @@ export interface PersistConfig<S, RS = any, HSS = any, ESS = any> {
whitelist?: Array<string>;
transforms?: Array<Transform<HSS, ESS, S, RS>>;
throttle?: number;
+ debounce?: number;
migrate?: PersistMigrate;
stateReconciler?: false | StateReconciler<S>;
/**


@@ -1,116 +0,0 @@
diff --git a/node_modules/redux-persist/es/createPersistoid.js b/node_modules/redux-persist/es/createPersistoid.js
index 8b43b9a..184faab 100644
--- a/node_modules/redux-persist/es/createPersistoid.js
+++ b/node_modules/redux-persist/es/createPersistoid.js
@@ -6,6 +6,7 @@ export default function createPersistoid(config) {
var whitelist = config.whitelist || null;
var transforms = config.transforms || [];
var throttle = config.throttle || 0;
+ var debounce = config.debounce || 0;
var storageKey = "".concat(config.keyPrefix !== undefined ? config.keyPrefix : KEY_PREFIX).concat(config.key);
var storage = config.storage;
var serialize;
@@ -28,30 +29,37 @@ export default function createPersistoid(config) {
var timeIterator = null;
var writePromise = null;
- var update = function update(state) {
- // add any changed keys to the queue
- Object.keys(state).forEach(function (key) {
- if (!passWhitelistBlacklist(key)) return; // is keyspace ignored? noop
+ // Timer for debounced `update()`
+ let timer = 0;
- if (lastState[key] === state[key]) return; // value unchanged? noop
+ function update(state) {
+ // Debounce the update
+ clearTimeout(timer);
+ timer = setTimeout(() => {
+ // add any changed keys to the queue
+ Object.keys(state).forEach(function (key) {
+ if (!passWhitelistBlacklist(key)) return; // is keyspace ignored? noop
- if (keysToProcess.indexOf(key) !== -1) return; // is key already queued? noop
+ if (lastState[key] === state[key]) return; // value unchanged? noop
- keysToProcess.push(key); // add key to queue
- }); //if any key is missing in the new state which was present in the lastState,
- //add it for processing too
+ if (keysToProcess.indexOf(key) !== -1) return; // is key already queued? noop
- Object.keys(lastState).forEach(function (key) {
- if (state[key] === undefined && passWhitelistBlacklist(key) && keysToProcess.indexOf(key) === -1 && lastState[key] !== undefined) {
- keysToProcess.push(key);
- }
- }); // start the time iterator if not running (read: throttle)
+ keysToProcess.push(key); // add key to queue
+ }); //if any key is missing in the new state which was present in the lastState,
+ //add it for processing too
- if (timeIterator === null) {
- timeIterator = setInterval(processNextKey, throttle);
- }
+ Object.keys(lastState).forEach(function (key) {
+ if (state[key] === undefined && passWhitelistBlacklist(key) && keysToProcess.indexOf(key) === -1 && lastState[key] !== undefined) {
+ keysToProcess.push(key);
+ }
+ }); // start the time iterator if not running (read: throttle)
+
+ if (timeIterator === null) {
+ timeIterator = setInterval(processNextKey, throttle);
+ }
- lastState = state;
+ lastState = state;
+ }, debounce)
};
function processNextKey() {
diff --git a/node_modules/redux-persist/es/types.js.flow b/node_modules/redux-persist/es/types.js.flow
index c50d3cd..39d8be2 100644
--- a/node_modules/redux-persist/es/types.js.flow
+++ b/node_modules/redux-persist/es/types.js.flow
@@ -19,6 +19,7 @@ export type PersistConfig = {
whitelist?: Array<string>,
transforms?: Array<Transform>,
throttle?: number,
+ debounce?: number,
migrate?: (PersistedState, number) => Promise<PersistedState>,
stateReconciler?: false | Function,
getStoredState?: PersistConfig => Promise<PersistedState>, // used for migrations
diff --git a/node_modules/redux-persist/lib/types.js.flow b/node_modules/redux-persist/lib/types.js.flow
index c50d3cd..39d8be2 100644
--- a/node_modules/redux-persist/lib/types.js.flow
+++ b/node_modules/redux-persist/lib/types.js.flow
@@ -19,6 +19,7 @@ export type PersistConfig = {
whitelist?: Array<string>,
transforms?: Array<Transform>,
throttle?: number,
+ debounce?: number,
migrate?: (PersistedState, number) => Promise<PersistedState>,
stateReconciler?: false | Function,
getStoredState?: PersistConfig => Promise<PersistedState>, // used for migrations
diff --git a/node_modules/redux-persist/src/types.js b/node_modules/redux-persist/src/types.js
index c50d3cd..39d8be2 100644
--- a/node_modules/redux-persist/src/types.js
+++ b/node_modules/redux-persist/src/types.js
@@ -19,6 +19,7 @@ export type PersistConfig = {
whitelist?: Array<string>,
transforms?: Array<Transform>,
throttle?: number,
+ debounce?: number,
migrate?: (PersistedState, number) => Promise<PersistedState>,
stateReconciler?: false | Function,
getStoredState?: PersistConfig => Promise<PersistedState>, // used for migrations
diff --git a/node_modules/redux-persist/types/types.d.ts b/node_modules/redux-persist/types/types.d.ts
index b3733bc..2a1696c 100644
--- a/node_modules/redux-persist/types/types.d.ts
+++ b/node_modules/redux-persist/types/types.d.ts
@@ -35,6 +35,7 @@ declare module "redux-persist/es/types" {
whitelist?: Array<string>;
transforms?: Array<Transform<HSS, ESS, S, RS>>;
throttle?: number;
+ debounce?: number;
migrate?: PersistMigrate;
stateReconciler?: false | StateReconciler<S>;
/**

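The two patches above backport a `debounce` option into redux-persist: the type declarations gain `debounce?: number`, and `createPersistoid`'s `update()` is wrapped in a `setTimeout` so that a burst of store changes produces a single queued write once things go quiet. A minimal standalone sketch of that debounce, with illustrative names (`makeDebouncedUpdate` and `processState` are not library API):

// Sketch of the debounce the patch builds into createPersistoid.
// `processState` stands in for redux-persist's key-queueing logic.
function makeDebouncedUpdate<S>(
  processState: (state: S) => void,
  debounce = 0
): (state: S) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (state: S) => {
    clearTimeout(timer); // each new state restarts the quiet-period timer
    timer = setTimeout(() => processState(state), debounce);
  };
}

// Usage: at most one persist pass per 300 ms burst of updates.
const update = makeDebouncedUpdate((s: object) => console.log('persist', s), 300);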

@@ -1,9 +1,5 @@
@use '../styles/Mixins/' as *;
svg {
fill: var(--svg-color);
}
.App {
display: grid;
width: 100vw;


@@ -1,19 +1,89 @@
import ProgressBar from 'features/system/components/ProgressBar';
import SiteHeader from 'features/system/components/SiteHeader';
import Console from 'features/system/components/Console';
import { useEffect } from 'react';
import ProgressBar from '../features/system/ProgressBar';
import SiteHeader from '../features/system/SiteHeader';
import Console from '../features/system/Console';
import { useAppDispatch } from './store';
import { requestSystemConfig } from './socketio/actions';
import { keepGUIAlive } from './utils';
import InvokeTabs from 'features/tabs/components/InvokeTabs';
import ImageUploader from 'common/components/ImageUploader';
import InvokeTabs from '../features/tabs/InvokeTabs';
import ImageUploader from '../common/components/ImageUploader';
import { RootState, useAppSelector } from '../app/store';
import useToastWatcher from 'features/system/hooks/useToastWatcher';
import FloatingOptionsPanelButtons from 'features/tabs/components/FloatingOptionsPanelButtons';
import FloatingGalleryButton from 'features/tabs/components/FloatingGalleryButton';
import FloatingGalleryButton from '../features/tabs/FloatingGalleryButton';
import FloatingOptionsPanelButtons from '../features/tabs/FloatingOptionsPanelButtons';
import { createSelector } from '@reduxjs/toolkit';
import { GalleryState } from '../features/gallery/gallerySlice';
import { OptionsState } from '../features/options/optionsSlice';
import { activeTabNameSelector } from '../features/options/optionsSelectors';
import { SystemState } from '../features/system/systemSlice';
import _ from 'lodash';
import { Model } from './invokeai';
keepGUIAlive();
const appSelector = createSelector(
[
(state: RootState) => state.gallery,
(state: RootState) => state.options,
(state: RootState) => state.system,
activeTabNameSelector,
],
(
gallery: GalleryState,
options: OptionsState,
system: SystemState,
activeTabName
) => {
const { shouldShowGallery, shouldHoldGalleryOpen, shouldPinGallery } =
gallery;
const {
shouldShowOptionsPanel,
shouldHoldOptionsPanelOpen,
shouldPinOptionsPanel,
} = options;
const modelStatusText = _.reduce(
system.model_list,
(acc: string, cur: Model, key: string) => {
if (cur.status === 'active') acc = key;
return acc;
},
''
);
const shouldShowGalleryButton = !(
shouldShowGallery ||
(shouldHoldGalleryOpen && !shouldPinGallery)
);
const shouldShowOptionsPanelButton =
!(
shouldShowOptionsPanel ||
(shouldHoldOptionsPanelOpen && !shouldPinOptionsPanel)
) && ['txt2img', 'img2img', 'inpainting'].includes(activeTabName);
return {
modelStatusText,
shouldShowGalleryButton,
shouldShowOptionsPanelButton,
};
},
{
memoizeOptions: {
resultEqualityCheck: _.isEqual,
},
}
);
const App = () => {
useToastWatcher();
const dispatch = useAppDispatch();
const { shouldShowGalleryButton, shouldShowOptionsPanelButton } =
useAppSelector(appSelector);
useEffect(() => {
dispatch(requestSystemConfig());
}, [dispatch]);
return (
<div className="App">
@@ -26,9 +96,9 @@ const App = () => {
<div className="app-console">
<Console />
</div>
{shouldShowGalleryButton && <FloatingGalleryButton />}
{shouldShowOptionsPanelButton && <FloatingOptionsPanelButtons />}
</ImageUploader>
<FloatingOptionsPanelButtons />
<FloatingGalleryButton />
</div>
);
};

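The `appSelector` above gathers values from three slices and memoizes with a deep-equality `resultEqualityCheck`, so the `App` component re-renders only when the derived values actually change, not on every store update. A condensed sketch of the pattern (the state shape here is illustrative):

import { createSelector } from '@reduxjs/toolkit';
import _ from 'lodash';

type State = {
  gallery: { shouldShowGallery: boolean };
  system: { isProcessing: boolean };
};

const uiSelector = createSelector(
  [(s: State) => s.gallery, (s: State) => s.system],
  (gallery, system) => ({
    shouldShowGalleryButton: !gallery.shouldShowGallery,
    isBusy: system.isProcessing,
  }),
  // Deep-compare the derived object: a new-but-equal result does not
  // count as a change, so subscribers skip the re-render.
  { memoizeOptions: { resultEqualityCheck: _.isEqual } }
);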

@@ -1,6 +1,6 @@
// TODO: use Enums?
import { InProgressImageType } from 'features/system/store/systemSlice';
import { InProgressImageType } from '../features/system/systemSlice';
// Valid samplers
export const SAMPLERS: Array<string> = [
@@ -9,8 +9,6 @@ export const SAMPLERS: Array<string> = [
'k_lms',
'k_dpm_2',
'k_dpm_2_a',
'k_dpmpp_2',
'k_dpmpp_2_a',
'k_euler',
'k_euler_a',
'k_heun',


@@ -13,13 +13,10 @@ export enum Feature {
UPSCALE,
FACE_CORRECTION,
IMAGE_TO_IMAGE,
BOUNDING_BOX,
SEAM_CORRECTION,
INFILL_AND_SCALING,
}
/** For each tooltip in the UI, the below feature definitions & props will pull relevant information into the tooltip.
*
* To-do: href & GuideImages are placeholders, and are not currently utilized, but will be updated (along with the tooltip UI) as feature and UI develop and we get a better idea on where things "forever homes" will be .
* To-do: href & GuideImages are placeholders, and are not currently utilized, but will be updated (along with the tooltip UI) as feature and UI development and we get a better idea on where things "forever homes" will be .
*/
export const FEATURES: Record<Feature, FeatureHelpInfo> = {
[Feature.PROMPT]: {
@@ -58,22 +55,7 @@ export const FEATURES: Record<Feature, FeatureHelpInfo> = {
guideImage: 'asset/path.gif',
},
[Feature.IMAGE_TO_IMAGE]: {
text: 'Image to Image allows the upload of an initial image, which InvokeAI will use to guide the generation process, along with a prompt. A lower value for this setting will more closely resemble the original image. Values between 0-1 are accepted, and a range of .25-.75 is recommended ',
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},
[Feature.BOUNDING_BOX]: {
text: 'The bounding box is analogous to the Width and Height settings for Text to Image or Image to Image. Only the area in the box will be processed.',
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},
[Feature.SEAM_CORRECTION]: {
text: 'Control the handling of visible seams which may occur when a generated image is pasted back onto the canvas.',
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},
[Feature.INFILL_AND_SCALING]: {
text: 'Manage infill methods (used on masked or erased areas of the canvas) and scaling (useful for small bounding box sizes).',
text: 'ImageToImage allows the upload of an initial image, which InvokeAI will use to guide the generation process, along with a prompt. A lower value for this setting will more closely resemble the original image. Values between 0-1 are accepted, and a range of .25-.75 is recommended ',
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},


@@ -12,9 +12,7 @@
* 'gfpgan'.
*/
import { Category as GalleryCategory } from 'features/gallery/store/gallerySlice';
import { InvokeTabName } from 'features/tabs/components/InvokeTabs';
import { IRect } from 'konva/lib/types';
import { Category as GalleryCategory } from '../features/gallery/gallerySlice';
/**
* TODO:
@@ -46,8 +44,6 @@ export declare type CommonGeneratedImageMetadata = {
| 'ddim'
| 'k_dpm_2_a'
| 'k_dpm_2'
| 'k_dpmpp_2_a'
| 'k_dpmpp_2'
| 'k_euler_a'
| 'k_euler'
| 'k_heun'
@@ -107,7 +103,7 @@ export declare type PostProcessedImageMetadata =
| FacetoolMetadata;
// Metadata includes the system config and image metadata.
export declare type Metadata = SystemGenerationMetadata & {
export declare type Metadata = SystemConfig & {
image: GeneratedImageMetadata | PostProcessedImageMetadata;
};
@@ -115,14 +111,12 @@ export declare type Metadata = SystemGenerationMetadata & {
export declare type Image = {
uuid: string;
url: string;
thumbnail: string;
mtime: number;
metadata?: Metadata;
width: number;
height: number;
category: GalleryCategory;
isBase64?: boolean;
dreamPrompt?: 'string';
category: GalleryCategory;
isBase64: boolean;
};
// GalleryImages is an array of Image.
@@ -146,18 +140,13 @@ export declare type SystemStatus = {
hasError: boolean;
};
export declare type SystemGenerationMetadata = {
export declare type SystemConfig = {
model: string;
model_weights?: string;
model_id?: string;
model_id: string;
model_hash: string;
app_id: string;
app_version: string;
};
export declare type SystemConfig = SystemGenerationMetadata & {
model_list: ModelList;
infill_methods: string[];
};
export declare type ModelStatus = 'active' | 'cached' | 'not loaded';
@@ -182,19 +171,10 @@ export declare type SystemStatusResponse = SystemStatus;
export declare type SystemConfigResponse = SystemConfig;
export declare type ImageResultResponse = Omit<Image, 'uuid'> & {
boundingBox?: IRect;
generationMode: InvokeTabName;
};
export declare type ImageResultResponse = Omit<Image, 'uuid'>;
export declare type ImageUploadResponse = {
// image: Omit<Image, 'uuid' | 'metadata' | 'category'>;
url: string;
mtime: number;
width: number;
height: number;
thumbnail: string;
// bbox: [number, number, number, number];
export declare type ImageUploadResponse = Omit<Image, 'uuid' | 'metadata'> & {
destination: 'img2img' | 'inpainting';
};
export declare type ErrorResponse = {
@@ -218,12 +198,9 @@ export declare type ImageUrlResponse = {
url: string;
};
export declare type ImageUploadDestination = 'img2img' | 'inpainting';
export declare type UploadImagePayload = {
file: File;
destination?: ImageUploadDestination;
};
export declare type UploadOutpaintingMergeImagePayload = {
dataURL: string;
name: string;
};

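One pattern worth flagging in the declarations above: response types are built from `Image` with `Omit` and intersection, e.g. `ImageResultResponse = Omit<Image, 'uuid'>`, because the `uuid` is assigned client-side on receipt (the socketio listeners later in this diff do exactly that with `uuidv4()`). In miniature:

import { v4 as uuidv4 } from 'uuid';

// Fields reduced for illustration; see the full Image type above.
type Image = { uuid: string; url: string; mtime: number };
type ImageResultResponse = Omit<Image, 'uuid'>;

const fromServer: ImageResultResponse = { url: 'outputs/000001.png', mtime: 0 };
const image: Image = { uuid: uuidv4(), ...fromServer }; // client stamps the uuid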

@@ -1,35 +1,39 @@
import { createSelector } from '@reduxjs/toolkit';
import _ from 'lodash';
import { RootState } from 'app/store';
import { activeTabNameSelector } from 'features/options/store/optionsSelectors';
import { OptionsState } from 'features/options/store/optionsSlice';
import { SystemState } from 'features/system/store/systemSlice';
import { validateSeedWeights } from 'common/util/seedWeightPairs';
import { initialCanvasImageSelector } from 'features/canvas/store/canvasSelectors';
import { RootState } from '../store';
import { activeTabNameSelector } from '../../features/options/optionsSelectors';
import { OptionsState } from '../../features/options/optionsSlice';
import { SystemState } from '../../features/system/systemSlice';
import { InpaintingState } from '../../features/tabs/Inpainting/inpaintingSlice';
import { validateSeedWeights } from '../../common/util/seedWeightPairs';
export const readinessSelector = createSelector(
[
(state: RootState) => state.options,
(state: RootState) => state.system,
initialCanvasImageSelector,
(state: RootState) => state.inpainting,
activeTabNameSelector,
],
(
options: OptionsState,
system: SystemState,
initialCanvasImage,
inpainting: InpaintingState,
activeTabName
) => {
const {
prompt,
shouldGenerateVariations,
seedWeights,
// maskPath,
initialImage,
seed,
} = options;
const { isProcessing, isConnected } = system;
const { imageToInpaint } = inpainting;
let isReady = true;
const reasonsWhyNotReady: string[] = [];
@@ -44,6 +48,20 @@ export const readinessSelector = createSelector(
reasonsWhyNotReady.push('No initial image selected');
}
if (activeTabName === 'inpainting' && !imageToInpaint) {
isReady = false;
reasonsWhyNotReady.push('No inpainting image selected');
}
// // We don't use mask paths now.
// // Cannot generate with a mask without img2img
// if (maskPath && !initialImage) {
// isReady = false;
// reasonsWhyNotReady.push(
// 'On ImageToImage tab, but no mask is provided.'
// );
// }
// TODO: job queue
// Cannot generate if already processing an image
if (isProcessing) {

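The readiness selector above uses a validate-and-explain pattern: every failed precondition both clears `isReady` and records a human-readable reason the UI can surface. Its core, reduced to a plain function with illustrative fields:

// Collect reasons first; derive the flag from the collection.
function checkReadiness(opts: {
  prompt: string;
  isProcessing: boolean;
  isConnected: boolean;
}) {
  const reasonsWhyNotReady: string[] = [];
  if (!opts.prompt.trim()) reasonsWhyNotReady.push('Missing prompt');
  if (opts.isProcessing) reasonsWhyNotReady.push('System busy');
  if (!opts.isConnected) reasonsWhyNotReady.push('System disconnected');
  return { isReady: reasonsWhyNotReady.length === 0, reasonsWhyNotReady };
}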

@@ -1,7 +1,8 @@
import { createAction } from '@reduxjs/toolkit';
import { GalleryCategory } from 'features/gallery/store/gallerySlice';
import { InvokeTabName } from 'features/tabs/components/InvokeTabs';
import * as InvokeAI from 'app/invokeai';
import { GalleryCategory } from '../../features/gallery/gallerySlice';
import { InvokeTabName } from '../../features/tabs/InvokeTabs';
import * as InvokeAI from '../invokeai';
/**
* We can't use redux-toolkit's createSlice() to make these actions,
@@ -25,6 +26,8 @@ export const requestNewImages = createAction<GalleryCategory>(
export const cancelProcessing = createAction<undefined>(
'socketio/cancelProcessing'
);
export const uploadImage = createAction<InvokeAI.UploadImagePayload>('socketio/uploadImage');
export const uploadMaskImage = createAction<File>('socketio/uploadMaskImage');
export const requestSystemConfig = createAction<undefined>(
'socketio/requestSystemConfig'
@@ -33,11 +36,3 @@ export const requestSystemConfig = createAction<undefined>(
export const requestModelChange = createAction<string>(
'socketio/requestModelChange'
);
export const saveStagingAreaImageToGallery = createAction<string>(
'socketio/saveStagingAreaImageToGallery'
);
export const emptyTempFolder = createAction<undefined>(
'socketio/requestEmptyTempFolder'
);


@@ -4,22 +4,23 @@ import { Socket } from 'socket.io-client';
import {
frontendToBackendParameters,
FrontendToBackendParametersConfig,
} from 'common/util/parameterTranslation';
} from '../../common/util/parameterTranslation';
import {
GalleryCategory,
GalleryState,
removeImage,
} from 'features/gallery/store/gallerySlice';
import { OptionsState } from 'features/options/store/optionsSlice';
} from '../../features/gallery/gallerySlice';
import { OptionsState } from '../../features/options/optionsSlice';
import {
addLogEntry,
generationRequested,
errorOccurred,
modelChangeRequested,
setIsProcessing,
} from 'features/system/store/systemSlice';
import { InvokeTabName } from 'features/tabs/components/InvokeTabs';
import * as InvokeAI from 'app/invokeai';
import { RootState } from 'app/store';
} from '../../features/system/systemSlice';
import { inpaintingImageElementRef } from '../../features/tabs/Inpainting/InpaintingCanvas';
import { InvokeTabName } from '../../features/tabs/InvokeTabs';
import * as InvokeAI from '../invokeai';
import { RootState } from '../store';
/**
* Returns an object containing all functions which use `socketio.emit()`.
@@ -41,7 +42,7 @@ const makeSocketIOEmitters = (
const {
options: optionsState,
system: systemState,
canvas: canvasState,
inpainting: inpaintingState,
gallery: galleryState,
} = state;
@@ -49,13 +50,32 @@
{
generationMode,
optionsState,
canvasState,
inpaintingState,
systemState,
};
dispatch(generationRequested());
if (generationMode === 'inpainting') {
if (
!inpaintingImageElementRef.current ||
!inpaintingState.imageToInpaint?.url
) {
dispatch(
addLogEntry({
timestamp: dateFormat(new Date(), 'isoDateTime'),
message: 'Inpainting image not loaded, cannot generate image.',
level: 'error',
})
);
dispatch(errorOccurred());
return;
}
if (!['txt2img', 'img2img'].includes(generationMode)) {
frontendToBackendParametersConfig.imageToProcessUrl =
inpaintingState.imageToInpaint.url;
frontendToBackendParametersConfig.maskImageElement =
inpaintingImageElementRef.current;
} else if (!['txt2img', 'img2img'].includes(generationMode)) {
if (!galleryState.currentImage?.url) return;
frontendToBackendParametersConfig.imageToProcessUrl =
@@ -76,12 +96,7 @@ const makeSocketIOEmitters = (
// TODO: handle maintaining masks for reproducibility in future
if (generationParameters.init_mask) {
generationParameters.init_mask = generationParameters.init_mask
.substr(0, 64)
.concat('...');
}
if (generationParameters.init_img) {
generationParameters.init_img = generationParameters.init_img
.substr(0, 64)
.substr(0, 20)
.concat('...');
}
@@ -147,9 +162,9 @@
);
},
emitDeleteImage: (imageToDelete: InvokeAI.Image) => {
const { url, uuid, category, thumbnail } = imageToDelete;
const { url, uuid, category } = imageToDelete;
dispatch(removeImage(imageToDelete));
socketio.emit('deleteImage', url, thumbnail, uuid, category);
socketio.emit('deleteImage', url, uuid, category);
},
emitRequestImages: (category: GalleryCategory) => {
const gallery: GalleryState = getState().gallery;
@@ -164,6 +179,13 @@
emitCancelProcessing: () => {
socketio.emit('cancel');
},
emitUploadImage: (payload: InvokeAI.UploadImagePayload) => {
const { file, destination } = payload;
socketio.emit('uploadImage', file, file.name, destination);
},
emitUploadMaskImage: (file: File) => {
socketio.emit('uploadMaskImage', file, file.name);
},
emitRequestSystemConfig: () => {
socketio.emit('requestSystemConfig');
},
@@ -171,12 +193,6 @@
dispatch(modelChangeRequested());
socketio.emit('requestModelChange', modelName);
},
emitSaveStagingAreaImageToGallery: (url: string) => {
socketio.emit('requestSaveStagingAreaImageToGallery', url);
},
emitRequestEmptyTempFolder: () => {
socketio.emit('requestEmptyTempFolder');
},
};
};

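One detail in the emitters above: before generation parameters are echoed to the UI log, inlined base64 image fields (`init_img`, `init_mask`) are truncated, since a full data URL can run to megabytes; the payload actually sent over the socket is untouched. The sanitizing step amounts to this (helper name is illustrative):

// Truncate data-URL-sized fields for logging only.
function sanitizeForLog(params: { init_img?: string; init_mask?: string }) {
  const out = { ...params };
  if (out.init_mask) out.init_mask = out.init_mask.slice(0, 64).concat('...');
  if (out.init_img) out.init_img = out.init_img.slice(0, 64).concat('...');
  return out;
}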

@@ -2,7 +2,7 @@ import { AnyAction, MiddlewareAPI, Dispatch } from '@reduxjs/toolkit';
import { v4 as uuidv4 } from 'uuid';
import dateFormat from 'dateformat';
import * as InvokeAI from 'app/invokeai';
import * as InvokeAI from '../invokeai';
import {
addLogEntry,
@@ -15,8 +15,7 @@
errorOccurred,
setModelList,
setIsCancelable,
addToast,
} from 'features/system/store/systemSlice';
} from '../../features/system/systemSlice';
import {
addGalleryImages,
@@ -24,22 +23,21 @@
clearIntermediateImage,
GalleryState,
removeImage,
setCurrentImage,
setIntermediateImage,
} from 'features/gallery/store/gallerySlice';
} from '../../features/gallery/gallerySlice';
import {
clearInitialImage,
setInfillMethod,
setInitialImage,
setMaskPath,
} from 'features/options/store/optionsSlice';
} from '../../features/options/optionsSlice';
import { requestImages, requestNewImages } from './actions';
import {
requestImages,
requestNewImages,
requestSystemConfig,
} from './actions';
import { addImageToStagingArea } from 'features/canvas/store/canvasSlice';
import { tabMap } from 'features/tabs/components/InvokeTabs';
clearImageToInpaint,
setImageToInpaint,
} from '../../features/tabs/Inpainting/inpaintingSlice';
import { tabMap } from '../../features/tabs/InvokeTabs';
/**
* Returns an object containing listener callbacks for socketio events.
@@ -58,20 +56,19 @@
try {
dispatch(setIsConnected(true));
dispatch(setCurrentStatus('Connected'));
dispatch(requestSystemConfig());
const gallery: GalleryState = getState().gallery;
if (gallery.categories.result.latest_mtime) {
dispatch(requestNewImages('result'));
} else {
dispatch(requestImages('result'));
}
if (gallery.categories.user.latest_mtime) {
dispatch(requestNewImages('user'));
} else {
dispatch(requestImages('user'));
}
if (gallery.categories.result.latest_mtime) {
dispatch(requestNewImages('result'));
} else {
dispatch(requestImages('result'));
}
} catch (e) {
console.error(e);
}
@@ -100,42 +97,19 @@
*/
onGenerationResult: (data: InvokeAI.ImageResultResponse) => {
try {
const state = getState();
const { shouldLoopback, activeTab } = state.options;
const { boundingBox: _, generationMode, ...rest } = data;
const { shouldLoopback, activeTab } = getState().options;
const newImage = {
uuid: uuidv4(),
...rest,
...data,
category: 'result',
};
if (['txt2img', 'img2img'].includes(generationMode)) {
dispatch(
addImage({
category: 'result',
image: { ...newImage, category: 'result' },
})
);
}
if (generationMode === 'unifiedCanvas' && data.boundingBox) {
const { boundingBox } = data;
dispatch(
addImageToStagingArea({
image: { ...newImage, category: 'temp' },
boundingBox,
})
);
if (state.canvas.shouldAutoSave) {
dispatch(
addImage({
image: { ...newImage, category: 'result' },
category: 'result',
})
);
}
}
dispatch(
addImage({
category: 'result',
image: newImage,
})
);
if (shouldLoopback) {
const activeTabName = tabMap[activeTab];
@@ -144,11 +118,13 @@
dispatch(setInitialImage(newImage));
break;
}
case 'inpainting': {
dispatch(setImageToInpaint(newImage));
break;
}
}
}
dispatch(clearIntermediateImage());
dispatch(
addLogEntry({
timestamp: dateFormat(new Date(), 'isoDateTime'),
@@ -168,7 +144,6 @@
setIntermediateImage({
uuid: uuidv4(),
...data,
category: 'result',
})
);
if (!data.isBase64) {
@@ -324,11 +299,16 @@
// remove references to image in options
const { initialImage, maskPath } = getState().options;
const { imageToInpaint } = getState().inpainting;
if (initialImage?.url === url || initialImage === url) {
dispatch(clearInitialImage());
}
if (imageToInpaint?.url === url) {
dispatch(clearImageToInpaint());
}
if (maskPath === url) {
dispatch(setMaskPath(''));
}
@@ -340,11 +320,56 @@
})
);
},
onImageUploaded: (data: InvokeAI.ImageUploadResponse) => {
const { destination, ...rest } = data;
const image = {
uuid: uuidv4(),
...rest,
};
try {
dispatch(addImage({ image, category: 'user' }));
switch (destination) {
case 'img2img': {
dispatch(setInitialImage(image));
break;
}
case 'inpainting': {
dispatch(setImageToInpaint(image));
break;
}
default: {
dispatch(setCurrentImage(image));
break;
}
}
dispatch(
addLogEntry({
timestamp: dateFormat(new Date(), 'isoDateTime'),
message: `Image uploaded: ${data.url}`,
})
);
} catch (e) {
console.error(e);
}
},
/**
* Callback to run when we receive a 'maskImageUploaded' event.
*/
onMaskImageUploaded: (data: InvokeAI.ImageUrlResponse) => {
const { url } = data;
dispatch(setMaskPath(url));
dispatch(
addLogEntry({
timestamp: dateFormat(new Date(), 'isoDateTime'),
message: `Mask image uploaded: ${url}`,
})
);
},
onSystemConfig: (data: InvokeAI.SystemConfig) => {
dispatch(setSystemConfig(data));
if (!data.infill_methods.includes('patchmatch')) {
dispatch(setInfillMethod(data.infill_methods[0]));
}
},
onModelChanged: (data: InvokeAI.ModelChangeResponse) => {
const { model_name, model_list } = data;
@@ -374,16 +399,6 @@
})
);
},
onTempFolderEmptied: () => {
dispatch(
addToast({
title: 'Temp Folder Emptied',
status: 'success',
duration: 2500,
isClosable: true,
})
);
},
};
};

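The `onImageUploaded` listener above routes a freshly uploaded image by the `destination` the server echoes back, falling through to the gallery's current image when none is given. Schematically (the action creators here are placeholders for the slice actions imported in the diff):

type UploadedImage = { uuid: string; url: string };

function routeUploadedImage(
  destination: 'img2img' | 'inpainting' | undefined,
  image: UploadedImage,
  dispatch: (action: { type: string; payload: UploadedImage }) => void
) {
  switch (destination) {
    case 'img2img': // becomes the ImageToImage initial image
      dispatch({ type: 'options/setInitialImage', payload: image });
      break;
    case 'inpainting': // becomes the inpainting source
      dispatch({ type: 'inpainting/setImageToInpaint', payload: image });
      break;
    default: // plain gallery upload
      dispatch({ type: 'gallery/setCurrentImage', payload: image });
  }
}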

@@ -4,7 +4,7 @@ import { io } from 'socket.io-client';
import makeSocketIOListeners from './listeners';
import makeSocketIOEmitters from './emitters';
import * as InvokeAI from 'app/invokeai';
import * as InvokeAI from '../invokeai';
/**
* Creates a socketio middleware to handle communication with server.
@@ -43,10 +43,11 @@ export const socketioMiddleware = () => {
onGalleryImages,
onProcessingCanceled,
onImageDeleted,
onImageUploaded,
onMaskImageUploaded,
onSystemConfig,
onModelChanged,
onModelChangeFailed,
onTempFolderEmptied,
} = makeSocketIOListeners(store);
const {
@@ -57,10 +58,10 @@
emitRequestImages,
emitRequestNewImages,
emitCancelProcessing,
emitUploadImage,
emitUploadMaskImage,
emitRequestSystemConfig,
emitRequestModelChange,
emitSaveStagingAreaImageToGallery,
emitRequestEmptyTempFolder,
} = makeSocketIOEmitters(store, socketio);
/**
@@ -103,6 +104,17 @@
onImageDeleted(data);
});
socketio.on(
'imageUploaded',
(data: InvokeAI.ImageUploadResponse) => {
onImageUploaded(data);
}
);
socketio.on('maskImageUploaded', (data: InvokeAI.ImageUrlResponse) => {
onMaskImageUploaded(data);
});
socketio.on('systemConfig', (data: InvokeAI.SystemConfig) => {
onSystemConfig(data);
});
@@ -115,10 +127,6 @@
onModelChangeFailed(data);
});
socketio.on('tempFolderEmptied', () => {
onTempFolderEmptied();
});
areListenersSet = true;
}
@@ -161,6 +169,16 @@
break;
}
case 'socketio/uploadImage': {
emitUploadImage(action.payload);
break;
}
case 'socketio/uploadMaskImage': {
emitUploadMaskImage(action.payload);
break;
}
case 'socketio/requestSystemConfig': {
emitRequestSystemConfig();
break;
@@ -170,16 +188,6 @@
emitRequestModelChange(action.payload);
break;
}
case 'socketio/saveStagingAreaImageToGallery': {
emitSaveStagingAreaImageToGallery(action.payload);
break;
}
case 'socketio/requestEmptyTempFolder': {
emitRequestEmptyTempFolder();
break;
}
}
next(action);

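Structurally, the middleware above is one socket plus two mappings: incoming socket events are bound once to listener callbacks that dispatch into the store, and outgoing `socketio/*` actions are switched to `emit` calls before being passed along. A stripped-down skeleton of the same shape (one example event and one example action; the action type string is taken from the diff):

import { Middleware } from '@reduxjs/toolkit';
import { io } from 'socket.io-client';

export const makeSocketioMiddleware = (): Middleware => {
  const socketio = io(window.location.origin);
  let areListenersSet = false;

  return (store) => (next) => (action: any) => {
    if (!areListenersSet) {
      // Incoming: socket events -> store dispatches.
      socketio.on('systemConfig', (data: unknown) =>
        store.dispatch({ type: 'system/setSystemConfig', payload: data })
      );
      areListenersSet = true;
    }

    // Outgoing: 'socketio/*' actions -> socket emits.
    switch (action.type) {
      case 'socketio/requestSystemConfig':
        socketio.emit('requestSystemConfig');
        break;
    }

    next(action);
  };
};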

@@ -5,14 +5,16 @@ import type { TypedUseSelectorHook } from 'react-redux';
import { persistReducer } from 'redux-persist';
import storage from 'redux-persist/lib/storage'; // defaults to localStorage for web
import { getPersistConfig } from 'redux-deep-persist';
import optionsReducer from 'features/options/store/optionsSlice';
import galleryReducer from 'features/gallery/store/gallerySlice';
import systemReducer from 'features/system/store/systemSlice';
import canvasReducer from 'features/canvas/store/canvasSlice';
import optionsReducer, { OptionsState } from '../features/options/optionsSlice';
import galleryReducer, { GalleryState } from '../features/gallery/gallerySlice';
import inpaintingReducer, {
InpaintingState,
} from '../features/tabs/Inpainting/inpaintingSlice';
import systemReducer, { SystemState } from '../features/system/systemSlice';
import { socketioMiddleware } from './socketio/middleware';
import autoMergeLevel2 from 'redux-persist/es/stateReconciler/autoMergeLevel2';
import { PersistPartial } from 'redux-persist/es/persistReducer';
/**
* redux-persist provides an easy and reliable way to persist state across reloads.
@@ -26,79 +28,87 @@ import { socketioMiddleware } from './socketio/middleware';
* These can be blacklisted in redux-persist.
*
* The necesssary nested persistors with blacklists are configured below.
*
* TODO: Do we blacklist initialImagePath? If the image is deleted from disk we get an
* ugly 404. But if we blacklist it, then this is a valuable parameter that is lost
* on reload. Need to figure out a good way to handle this.
*/
const canvasBlacklist = [
'cursorPosition',
'isCanvasInitialized',
'doesCanvasNeedScaling',
].map((blacklistItem) => `canvas.${blacklistItem}`);
const systemBlacklist = [
'currentIteration',
'currentStatus',
'currentStep',
'isCancelable',
'isConnected',
'isESRGANAvailable',
'isGFPGANAvailable',
'isProcessing',
'socketId',
'totalIterations',
'totalSteps',
].map((blacklistItem) => `system.${blacklistItem}`);
const galleryBlacklist = [
'categories',
'currentCategory',
'currentImage',
'currentImageUuid',
'shouldAutoSwitchToNewImages',
'shouldHoldGalleryOpen',
'intermediateImage',
].map((blacklistItem) => `gallery.${blacklistItem}`);
const rootReducer = combineReducers({
options: optionsReducer,
gallery: galleryReducer,
system: systemReducer,
canvas: canvasReducer,
});
const rootPersistConfig = getPersistConfig({
const rootPersistConfig = {
key: 'root',
storage,
rootReducer,
blacklist: [...canvasBlacklist, ...systemBlacklist, ...galleryBlacklist],
debounce: 300,
stateReconciler: autoMergeLevel2,
blacklist: ['gallery', 'system', 'inpainting'],
};
const systemPersistConfig = {
key: 'system',
storage,
stateReconciler: autoMergeLevel2,
blacklist: [
'isCancelable',
'isConnected',
'isProcessing',
'currentStep',
'socketId',
'isESRGANAvailable',
'isGFPGANAvailable',
'currentStep',
'totalSteps',
'currentIteration',
'totalIterations',
'currentStatus',
],
};
const galleryPersistConfig = {
key: 'gallery',
storage,
stateReconciler: autoMergeLevel2,
whitelist: [
'galleryWidth',
'shouldPinGallery',
'shouldShowGallery',
'galleryScrollPosition',
'galleryImageMinimumWidth',
'galleryImageObjectFit',
],
};
const inpaintingPersistConfig = {
key: 'inpainting',
storage,
stateReconciler: autoMergeLevel2,
blacklist: ['pastLines', 'futuresLines', 'cursorPosition'],
};
const reducers = combineReducers({
options: optionsReducer,
gallery: persistReducer<GalleryState>(galleryPersistConfig, galleryReducer),
system: persistReducer<SystemState>(systemPersistConfig, systemReducer),
inpainting: persistReducer<InpaintingState>(
inpaintingPersistConfig,
inpaintingReducer
),
});
const persistedReducer = persistReducer(rootPersistConfig, rootReducer);
const persistedReducer = persistReducer<{
options: OptionsState;
gallery: GalleryState & PersistPartial;
system: SystemState & PersistPartial;
inpainting: InpaintingState & PersistPartial;
}>(rootPersistConfig, reducers);
// Continue with store setup
export const store = configureStore({
reducer: persistedReducer,
middleware: (getDefaultMiddleware) =>
getDefaultMiddleware({
immutableCheck: false,
// redux-persist sometimes needs to temporarily put a function in redux state, need to disable this check
serializableCheck: false,
}).concat(socketioMiddleware()),
devTools: {
// Uncommenting these very rapidly called actions makes the redux dev tools output much more readable
actionsDenylist: [
'canvas/setCursorPosition',
'canvas/setStageCoordinates',
'canvas/setStageScale',
'canvas/setIsDrawing',
// 'canvas/setBoundingBoxCoordinates',
// 'canvas/setBoundingBoxDimensions',
'canvas/setIsDrawing',
'canvas/addPointToCurrentLine',
],
},
});
export type AppGetState = typeof store.getState;
export type RootState = ReturnType<typeof store.getState>;
export type AppDispatch = typeof store.dispatch;

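Two persistence styles meet in the store diff above: one side uses `redux-deep-persist`'s `getPersistConfig`, which takes dot-path blacklist entries such as `canvas.cursorPosition` and builds a single root config (with the `debounce: 300` enabled by the patch earlier in this diff); the other nests one `persistReducer` per slice, each with its own blacklist or whitelist. The dot-path variant, reduced to a sketch with a placeholder reducer:

import { combineReducers, configureStore } from '@reduxjs/toolkit';
import { persistReducer } from 'redux-persist';
import storage from 'redux-persist/lib/storage';
import { getPersistConfig } from 'redux-deep-persist';

// Toy slice: persist `theme`, never the transient `socketId`.
const systemReducer = (
  state = { socketId: '', theme: 'dark' },
  _action: { type: string }
) => state;

const rootReducer = combineReducers({ system: systemReducer });

const persistConfig = getPersistConfig({
  key: 'root',
  storage,
  rootReducer,
  blacklist: ['system.socketId'], // dot-path: drop just this nested key
});

export const store = configureStore({
  reducer: persistReducer(persistConfig, rootReducer),
  middleware: (getDefault) => getDefault({ serializableCheck: false }),
});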

@@ -1,77 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="100%" height="100%" viewBox="0 0 60 60" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xml:space="preserve" xmlns:serif="http://www.serif.com/" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:1.5;">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
<g transform="matrix(1,0,0,1,0,5)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,10)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,15)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,20)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,25)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,30)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,35)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,40)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,45)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,50)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,55)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,60)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-5)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-10)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-15)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-20)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-25)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-30)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-35)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-40)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-45)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-50)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-55)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
<g transform="matrix(1,0,0,1,0,-60)">
<path d="M-3.5,63.5L64,-4" style="fill:none;stroke:black;stroke-width:1px;"/>
</g>
</svg>



@@ -1,7 +1,7 @@
import { Box, forwardRef, Icon } from '@chakra-ui/react';
import { IconType } from 'react-icons';
import { MdHelp } from 'react-icons/md';
import { Feature } from 'app/features';
import { Feature } from '../../app/features';
import GuidePopover from './GuidePopover';
type GuideIconProps = {
@@ -13,7 +13,7 @@ const GuideIcon = forwardRef(
({ feature, icon = MdHelp }: GuideIconProps, ref) => (
<GuidePopover feature={feature}>
<Box ref={ref}>
<Icon marginBottom={'-.15rem'} as={icon} />
<Icon as={icon} />
</Box>
</GuidePopover>
)


@@ -1,11 +1,11 @@
.guide-popover-arrow {
background-color: var(--tab-panel-bg);
box-shadow: none;
background-color: var(--tab-panel-bg) !important;
box-shadow: none !important;
}
.guide-popover-content {
background-color: var(--background-color-secondary);
border: none;
background-color: var(--background-color-secondary) !important;
border: none !important;
}
.guide-popover-guide-content {

Some files were not shown because too many files have changed in this diff.