Compare commits

..

24 Commits

Author SHA1 Message Date
8e95eb9e81 bump patch level 2022-12-31 17:43:09 +00:00
f56bf3ef54 fix formatting error during startup 2022-12-31 17:42:47 +00:00
0faae1b594 use invoke-ai version of GFPGAN
- This version is very slightly modified to allow weights files
  to be pre-downloaded by the configure script.
2022-12-31 17:41:00 +00:00
e46bbbc532 bump patchlevel 2022-12-31 16:53:02 +00:00
89072ee3b5 Merge branch 'main' of github.com:invoke-ai/InvokeAI into lstein-release-candidate-2-2-5
- also fix up setup.py to install frontend packages files recursively
2022-12-31 16:52:04 +00:00
339aa94cf3 add locales to frontend dist package
- bump to patchlevel 6
2022-12-31 15:46:57 +00:00
5057beddf5 bump rc# 2022-12-30 12:53:25 +00:00
ade9bbe185 rebuild frontend 2022-12-30 12:52:43 +00:00
83df5c211c create_installer now adds version number 2022-12-29 14:37:01 +00:00
75f07dd22e Merge branch 'main' into lstein-release-candidate-2-2-5 2022-12-29 09:01:08 -05:00
060eff5dad bump rc version 2022-12-28 20:37:36 -05:00
5d00831f71 Merge branch 'main' into lstein-release-candidate-2-2-5 2022-12-28 20:33:39 -05:00
d74ed7e974 bring installers up to date with 2.2.5-rc2 2022-12-29 01:21:55 +00:00
451750229d model_cache applies rootdir to config path 2022-12-28 17:59:53 +00:00
080fe48106 Merge branch 'lstein-release-candidate-2-2-5' of github.com:invoke-ai/InvokeAI into lstein-release-candidate-2-2-5 2022-12-28 17:59:15 +00:00
ff0eb56c96 remove extraneous whitespace 2022-12-28 13:44:29 +00:00
006123aa32 rc2.2.5 (install.sh) relative path fixes (#2155)
* (installer) fix bug in resolution of relative paths in linux install script

point installer at 2.2.5-rc1

selecting ~/Data/myapps/ as location would create a ./~/Data/myapps
instead of expanding the ~/ to the value of ${HOME}

also, squash the trailing slash in path, if it was entered by the user

* (installer) add option to automatically start the app after install

also: when exiting, print the command to get back into the app
2022-12-28 08:00:35 -05:00
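The path handling described in this commit can be sketched in a few lines. The following is a minimal Python illustration of the intended normalization; the real fix lives in the Linux shell install script, and the function name here is purely illustrative:

```python
import os

def normalize_install_path(raw: str) -> str:
    """Expand a leading ~ to $HOME and squash any trailing slash."""
    path = os.path.expanduser(raw)   # "~/Data/myapps/" -> "/home/user/Data/myapps/"
    return path.rstrip("/") or "/"   # drop the trailing slash, but keep a bare "/"

print(normalize_install_path("~/Data/myapps/"))  # e.g. /home/user/Data/myapps
```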
540da32bd5 give Linux user option of installing ROCm or CUDA 2022-12-26 02:37:16 +00:00
aa084b205f Merge branch 'main' into lstein-release-candidate-2-2-5 2022-12-25 19:02:01 -05:00
49f97f994a fix permissions on create_installer.sh 2022-12-25 19:00:41 -05:00
211d7be03d bump version number 2022-12-25 18:28:06 -05:00
7d99416cc9 update pulls from "latest" now 2022-12-25 23:11:35 +00:00
f60bf9e1e6 update.bat.in debugged and working 2022-12-25 18:13:06 +00:00
fce7b5466a installer tweaks in preparation for v2.2.5
- pin numpy to 1.23.* to avoid requirements conflict with numba
- update.sh and update.bat now accept a tag or branch string, not a URL
- update scripts download latest requirements-base before updating.
2022-12-25 17:36:59 +00:00
1324 changed files with 69382 additions and 283919 deletions

View File

@@ -1,25 +1,19 @@
# use this file as a whitelist
*
!invokeai
!backend
!environments-and-requirements
!frontend
!ldm
!pyproject.toml
# ignore frontend/web but whitelist dist
invokeai/frontend/web/
!invokeai/frontend/web/dist/
# ignore invokeai/assets but whitelist invokeai/assets/web
invokeai/assets/
!invokeai/assets/web/
!main.py
!scripts
!server
!static
!setup.py
# Guard against pulling in any models that might exist in the directory tree
**/*.pt*
**/*.ckpt
# Byte-compiled / optimized / DLL files
**/__pycache__/
**/*.py[cod]
# unignore configs, but only ignore the custom models.yaml, in case it exists
!configs
configs/models.yaml
# Distribution / packaging
**/*.egg-info/
**/*.egg
**/__pycache__

View File

@@ -1 +0,0 @@
b3dccfaeb636599c02effc377cdd8a87d658256c

41
.github/CODEOWNERS vendored
View File

@@ -1,34 +1,7 @@
# continuous integration
/.github/workflows/ @mauwii @lstein @blessedcoolant
# documentation
/docs/ @lstein @mauwii @tildebyte @blessedcoolant
/mkdocs.yml @lstein @mauwii @blessedcoolant
# nodes
/invokeai/app/ @Kyle0654 @blessedcoolant
# installation and configuration
/pyproject.toml @mauwii @lstein @blessedcoolant
/docker/ @mauwii @lstein @blessedcoolant
/scripts/ @ebr @lstein
/installer/ @lstein @ebr
/invokeai/assets @lstein @ebr
/invokeai/configs @lstein
/invokeai/version @lstein @blessedcoolant
# web ui
/invokeai/frontend @blessedcoolant @psychedelicious @lstein
/invokeai/backend @blessedcoolant @psychedelicious @lstein
# generation, model management, postprocessing
/invokeai/backend @keturn @damian0815 @lstein @blessedcoolant @jpphoto
# front ends
/invokeai/frontend/CLI @lstein
/invokeai/frontend/install @lstein @ebr @mauwii
/invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/web @psychedelicious @blessedcoolant
ldm/invoke/pngwriter.py @CapableWeb
ldm/invoke/server_legacy.py @CapableWeb
scripts/legacy_api.py @CapableWeb
tests/legacy_tests.sh @CapableWeb
installer/ @tildebyte
.github/workflows/ @mauwii
docker_build/ @mauwii

View File

@@ -65,16 +65,6 @@ body:
placeholder: 8GB
validations:
required: false
- type: input
id: version-number
attributes:
label: What version did you experience this issue on?
description: |
Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: X.X.X
validations:
required: true
- type: textarea
id: what-happened

87
.github/workflows/build-cloud-img.yml vendored Normal file
View File

@@ -0,0 +1,87 @@
name: Build and push cloud image
on:
workflow_dispatch:
# push:
# branches:
# - main
# tags:
# - v*
# # we will NOT push the image on pull requests, only test buildability.
# pull_request:
# branches:
# - main
permissions:
contents: read
packages: write
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
docker:
strategy:
fail-fast: false
matrix:
arch:
- x86_64
# requires resolving a patchmatch issue
# - aarch64
runs-on: ubuntu-latest
name: ${{ matrix.arch }}
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
if: matrix.arch == 'aarch64'
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# see https://github.com/docker/metadata-action
# will push the following tags:
# :edge
# :main (+ any other branches enabled in the workflow)
# :<tag>
# :1.2.3 (for semver tags)
# :1.2 (for semver tags)
# :<sha>
tags: |
type=edge,branch=main
type=ref,event=branch
type=ref,event=tag
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha
# suffix image tags with architecture
flavor: |
latest=auto
suffix=-${{ matrix.arch }},latest=true
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
# do not login to container registry on PRs
- if: github.event_name != 'pull_request'
name: Docker login
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push cloud image
uses: docker/build-push-action@v3
with:
context: .
file: docker-build/Dockerfile.cloud
platforms: linux/${{ matrix.arch }}
# do not push the image on PRs
push: false
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

View File

@@ -3,112 +3,72 @@ on:
push:
branches:
- 'main'
- 'update/ci/docker/*'
- 'update/docker/*'
- 'dev/ci/docker/*'
- 'dev/docker/*'
paths:
- 'pyproject.toml'
- '.dockerignore'
- 'invokeai/**'
- 'docker/Dockerfile'
tags:
- 'v*.*.*'
workflow_dispatch:
permissions:
contents: write
packages: write
jobs:
docker:
if: github.event.pull_request.draft == false
strategy:
fail-fast: false
matrix:
registry:
- ghcr.io
flavor:
- rocm
- amd
- cuda
- cpu
# - cloud
include:
- flavor: rocm
pip-extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
- flavor: amd
pip-requirements: requirements-lin-amd.txt
dockerfile: docker-build/Dockerfile
platforms: linux/amd64,linux/arm64
- flavor: cuda
pip-extra-index-url: ''
- flavor: cpu
pip-extra-index-url: 'https://download.pytorch.org/whl/cpu'
pip-requirements: requirements-lin-cuda.txt
dockerfile: docker-build/Dockerfile
platforms: linux/amd64,linux/arm64
# - flavor: cloud
# pip-requirements: requirements-lin-cuda.txt
# dockerfile: docker-build/Dockerfile.cloud
# platforms: linux/amd64
runs-on: ubuntu-latest
name: ${{ matrix.flavor }}
env:
PLATFORMS: 'linux/amd64,linux/arm64'
DOCKERFILE: 'docker/Dockerfile'
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
images: |
ghcr.io/${{ github.repository }}
${{ vars.DOCKERHUB_REPOSITORY }}
images: ${{ matrix.registry }}/${{ github.repository }}-${{ matrix.flavor }}
tags: |
type=ref,event=branch
type=ref,event=tag
type=pep440,pattern={{version}}
type=pep440,pattern={{major}}.{{minor}}
type=pep440,pattern={{major}}
type=sha,enable=true,prefix=sha-,format=short
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha
flavor: |
latest=${{ matrix.flavor == 'cuda' && github.ref == 'refs/heads/main' }}
suffix=-${{ matrix.flavor }},onlatest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
latest=true
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
platforms: ${{ env.PLATFORMS }}
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
- if: github.event_name != 'pull_request'
name: Docker login
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
registry: ${{ matrix.registry }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: github.event_name != 'pull_request' && vars.DOCKERHUB_REPOSITORY != ''
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
id: docker_build
uses: docker/build-push-action@v4
uses: docker/build-push-action@v3
with:
context: .
file: ${{ env.DOCKERFILE }}
platforms: ${{ env.PLATFORMS }}
push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' }}
file: ${{ matrix.dockerfile }}
platforms: ${{ matrix.platforms }}
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: PIP_EXTRA_INDEX_URL=${{ matrix.pip-extra-index-url }}
cache-from: |
type=gha,scope=${{ github.ref_name }}-${{ matrix.flavor }}
type=gha,scope=main-${{ matrix.flavor }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-${{ matrix.flavor }}
- name: Docker Hub Description
if: github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' && vars.DOCKERHUB_REPOSITORY != ''
uses: peter-evans/dockerhub-description@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ vars.DOCKERHUB_REPOSITORY }}
short-description: ${{ github.event.repository.description }}
build-args: pip_requirements=${{ matrix.pip-requirements }}

View File

@@ -1,34 +0,0 @@
name: cleanup caches by a branch
on:
pull_request:
types:
- closed
workflow_dispatch:
jobs:
cleanup:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache
REPO=${{ github.repository }}
BRANCH=${{ github.ref }}
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )
## Setting this to not fail the workflow while deleting cache keys.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,27 +0,0 @@
name: Close inactive issues
on:
schedule:
- cron: "00 6 * * *"
env:
DAYS_BEFORE_ISSUE_STALE: 14
DAYS_BEFORE_ISSUE_CLOSE: 28
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
days-before-issue-stale: ${{ env.DAYS_BEFORE_ISSUE_STALE }}
days-before-issue-close: ${{ env.DAYS_BEFORE_ISSUE_CLOSE }}
stale-issue-label: "Inactive Issue"
stale-issue-message: "There has been no activity in this issue for ${{ env.DAYS_BEFORE_ISSUE_STALE }} days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release."
close-issue-message: "Due to inactivity, this issue was automatically closed. If you are still experiencing the issue, please recreate the issue."
days-before-pr-stale: -1
days-before-pr-close: -1
repo-token: ${{ secrets.GITHUB_TOKEN }}
operations-per-run: 500

View File

@@ -3,26 +3,17 @@ name: Lint frontend
on:
pull_request:
paths:
- 'invokeai/frontend/web/**'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
- 'frontend/**'
push:
branches:
- 'main'
paths:
- 'invokeai/frontend/web/**'
merge_group:
workflow_dispatch:
- 'frontend/**'
defaults:
run:
working-directory: invokeai/frontend/web
working-directory: frontend
jobs:
lint-frontend:
if: github.event.pull_request.draft == false
runs-on: ubuntu-22.04
steps:
- name: Setup Node 18
@@ -31,7 +22,7 @@ jobs:
node-version: '18'
- uses: actions/checkout@v3
- run: 'yarn install --frozen-lockfile'
- run: 'yarn run lint:tsc'
- run: 'yarn run lint:madge'
- run: 'yarn run lint:eslint'
- run: 'yarn run lint:prettier'
- run: 'yarn tsc'
- run: 'yarn run madge'
- run: 'yarn run lint --max-warnings=0'
- run: 'yarn run prettier --check'

View File

@@ -5,12 +5,8 @@ on:
- 'main'
- 'development'
permissions:
contents: write
jobs:
mkdocs-material:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- name: checkout sources

View File

@@ -9,7 +9,6 @@ on:
jobs:
pyflakes:
name: runner / pyflakes
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2

View File

@@ -1,41 +0,0 @@
name: PyPI Release
on:
push:
paths:
- 'invokeai/version/invokeai_version.py'
workflow_dispatch:
jobs:
release:
if: github.repository == 'invoke-ai/InvokeAI'
runs-on: ubuntu-22.04
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
TWINE_NON_INTERACTIVE: 1
steps:
- name: checkout sources
uses: actions/checkout@v3
- name: install deps
run: pip install --upgrade build twine
- name: build package
run: python3 -m build
- name: check distribution
run: twine check dist/*
- name: check PyPI versions
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/v2.3'
run: |
pip install --upgrade requests
python -c "\
import scripts.pypi_helper; \
EXISTS=scripts.pypi_helper.local_on_pypi(); \
print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV
- name: upload package
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
run: twine upload dist/*
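The `scripts.pypi_helper.local_on_pypi()` helper invoked above is not included in this diff. As a rough sketch, and purely as an assumption about its behaviour, such a check might compare the local package version against the versions already published on PyPI via the public JSON API:

```python
import requests

def local_on_pypi(package: str = "InvokeAI", local_version: str = "2.3.0") -> bool:
    """Return True if local_version is already published for `package` on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=30)
    resp.raise_for_status()
    return local_version in resp.json()["releases"]  # "releases" maps version -> files
```

The workflow then skips the upload step unless `PACKAGE_EXISTS` is `'False'`, i.e. when the version is not yet on PyPI.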

161
.github/workflows/test-invoke-conda.yml vendored Normal file
View File

@@ -0,0 +1,161 @@
name: Test invoke.py
on:
push:
branches:
- 'main'
pull_request:
branches:
- 'main'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
- 'converted_to_draft'
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
fail_if_pull_request_is_draft:
if: github.event.pull_request.draft == true
runs-on: ubuntu-22.04
steps:
- name: Fails in order to indicate that pull request needs to be marked as ready to review and unit tests workflow needs to pass.
run: exit 1
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
stable-diffusion-model:
- 'stable-diffusion-1.5'
environment-yaml:
- environment-lin-amd.yml
- environment-lin-cuda.yml
- environment-mac.yml
- environment-win-cuda.yml
include:
- environment-yaml: environment-lin-amd.yml
os: ubuntu-22.04
curl-command: curl
github-env: $GITHUB_ENV
default-shell: bash -l {0}
- environment-yaml: environment-lin-cuda.yml
os: ubuntu-22.04
curl-command: curl
github-env: $GITHUB_ENV
default-shell: bash -l {0}
- environment-yaml: environment-mac.yml
os: macos-12
curl-command: curl
github-env: $GITHUB_ENV
default-shell: bash -l {0}
- environment-yaml: environment-win-cuda.yml
os: windows-2022
curl-command: curl.exe
github-env: $env:GITHUB_ENV
default-shell: pwsh
- stable-diffusion-model: stable-diffusion-1.5
stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
name: ${{ matrix.environment-yaml }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
env:
CONDA_ENV_NAME: invokeai
INVOKEAI_ROOT: '${{ github.workspace }}/invokeai'
defaults:
run:
shell: ${{ matrix.default-shell }}
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: create models.yaml from example
run: |
mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
- name: create environment.yml
run: cp "environments-and-requirements/${{ matrix.environment-yaml }}" environment.yml
- name: Use cached conda packages
id: use-cached-conda-packages
uses: actions/cache@v3
with:
path: ~/conda_pkgs_dir
key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-yaml) }}
- name: Activate Conda Env
id: activate-conda-env
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ${{ env.CONDA_ENV_NAME }}
environment-file: environment.yml
miniconda-version: latest
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to development branch validation
if: ${{ github.ref == 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
- name: Use Cached Stable Diffusion Model
id: cache-sd-model
uses: actions/cache@v3
env:
cache-name: cache-${{ matrix.stable-diffusion-model }}
with:
path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
key: ${{ env.cache-name }}
- name: Download ${{ matrix.stable-diffusion-model }}
id: download-stable-diffusion-model
if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
run: |
mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
${{ matrix.curl-command }} -H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" -o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" -L ${{ matrix.stable-diffusion-model-url }}
- name: run configure_invokeai.py
id: run-preload-models
run: |
python scripts/configure_invokeai.py --skip-sd-weights --yes
- name: cat invokeai.init
id: cat-invokeai
run: cat ${{ env.INVOKEAI_ROOT }}/invokeai.init
- name: Run the tests
id: run-tests
if: matrix.os != 'windows-2022'
run: |
time python scripts/invoke.py \
--no-patchmatch \
--no-nsfw_checker \
--model ${{ matrix.stable-diffusion-model }} \
--from_file ${{ env.TEST_PROMPTS }} \
--root="${{ env.INVOKEAI_ROOT }}" \
--outdir="${{ env.INVOKEAI_ROOT }}/outputs"
- name: export conda env
id: export-conda-env
if: matrix.os != 'windows-2022'
run: |
mkdir -p outputs/img-samples
conda env export --name ${{ env.CONDA_ENV_NAME }} > ${{ env.INVOKEAI_ROOT }}/outputs/environment-${{ runner.os }}-${{ runner.arch }}.yml
- name: Archive results
if: matrix.os != 'windows-2022'
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results_${{ matrix.requirements-file }}_${{ matrix.python-version }}
path: ${{ env.INVOKEAI_ROOT }}/outputs

View File

@@ -1,66 +0,0 @@
name: Test invoke.py pip
on:
pull_request:
paths:
- '**'
- '!pyproject.toml'
- '!invokeai/**'
- 'invokeai/frontend/web/**'
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
os: macOS-12
github-env: $GITHUB_ENV
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
steps:
- run: 'echo "No build required"'

View File

@@ -3,142 +3,142 @@ on:
push:
branches:
- 'main'
paths:
- 'pyproject.toml'
- 'invokeai/**'
- '!invokeai/frontend/web/**'
pull_request:
paths:
- 'pyproject.toml'
- 'invokeai/**'
- '!invokeai/frontend/web/**'
branches:
- 'main'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
merge_group:
workflow_dispatch:
- 'converted_to_draft'
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
fail_if_pull_request_is_draft:
if: github.event.pull_request.draft == true
runs-on: ubuntu-18.04
steps:
- name: Fails in order to indicate that pull request needs to be marked as ready to review and unit tests workflow needs to pass.
run: exit 1
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
stable-diffusion-model:
- stable-diffusion-1.5
requirements-file:
- requirements-lin-cuda.txt
- requirements-lin-amd.txt
- requirements-mac-mps-cpu.txt
- requirements-win-colab-cuda.txt
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
- requirements-file: requirements-lin-cuda.txt
os: ubuntu-22.04
curl-command: curl
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
- requirements-file: requirements-lin-amd.txt
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
curl-command: curl
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
- requirements-file: requirements-mac-mps-cpu.txt
os: macOS-12
curl-command: curl
github-env: $GITHUB_ENV
- pytorch: windows-cpu
- requirements-file: requirements-win-colab-cuda.txt
os: windows-2022
curl-command: curl.exe
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
- stable-diffusion-model: stable-diffusion-1.5
stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
name: ${{ matrix.requirements-file }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
env:
PIP_USE_PEP517: '1'
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: set INVOKEAI_ROOT Windows
if: matrix.os == 'windows-2022'
run: |
echo "INVOKEAI_ROOT=${{ github.workspace }}\invokeai" >> ${{ matrix.github-env }}
echo "INVOKEAI_OUTDIR=${{ github.workspace }}\invokeai\outputs" >> ${{ matrix.github-env }}
- name: set INVOKEAI_ROOT others
if: matrix.os != 'windows-2022'
run: |
echo "INVOKEAI_ROOT=${{ github.workspace }}/invokeai" >> ${{ matrix.github-env }}
echo "INVOKEAI_OUTDIR=${{ github.workspace }}/invokeai/outputs" >> ${{ matrix.github-env }}
- name: create models.yaml from example
run: |
mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to development branch validation
if: ${{ github.ref == 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' }}
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
- name: create requirements.txt
run: cp 'environments-and-requirements/${{ matrix.requirements-file }}' '${{ matrix.requirements-file }}'
- name: setup python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: pyproject.toml
# cache: 'pip'
# cache-dependency-path: ${{ matrix.requirements-file }}
- name: install invokeai
- name: install dependencies
run: pip3 install --upgrade pip setuptools wheel
- name: install requirements
run: pip3 install -r '${{ matrix.requirements-file }}'
- name: Use Cached Stable Diffusion Model
id: cache-sd-model
uses: actions/cache@v3
env:
PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
run: >
pip3 install
--editable=".[test]"
cache-name: cache-${{ matrix.stable-diffusion-model }}
with:
path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
key: ${{ env.cache-name }}
- name: run pytest
id: run-pytest
run: pytest
- name: Download ${{ matrix.stable-diffusion-model }}
id: download-stable-diffusion-model
if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
run: |
mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
${{ matrix.curl-command }} -H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" -o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" -L ${{ matrix.stable-diffusion-model-url }}
- name: set INVOKEAI_OUTDIR
run: >
python -c
"import os;from invokeai.backend.globals import Globals;OUTDIR=os.path.join(Globals.root,str('outputs'));print(f'INVOKEAI_OUTDIR={OUTDIR}')"
>> ${{ matrix.github-env }}
- name: run invokeai-configure
- name: run configure_invokeai.py
id: run-preload-models
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
run: >
invokeai-configure
--yes
--default_only
--full-precision
# can't use fp16 weights without a GPU
run: python3 scripts/configure_invokeai.py --skip-sd-weights --yes
- name: run invokeai
id: run-invokeai
env:
# Set offline mode to make sure configure preloaded successfully.
HF_HUB_OFFLINE: 1
HF_DATASETS_OFFLINE: 1
TRANSFORMERS_OFFLINE: 1
run: >
invokeai
--no-patchmatch
--no-nsfw_checker
--from_file ${{ env.TEST_PROMPTS }}
--outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
- name: Run the tests
id: run-tests
if: matrix.os != 'windows-2022'
run: python3 scripts/invoke.py --no-patchmatch --no-nsfw_checker --model ${{ matrix.stable-diffusion-model }} --from_file ${{ env.TEST_PROMPTS }} --root="${{ env.INVOKEAI_ROOT }}" --outdir="${{ env.INVOKEAI_OUTDIR }}"
- name: Archive results
id: archive-results
if: matrix.os != 'windows-2022'
uses: actions/upload-artifact@v3
with:
name: results
path: ${{ env.INVOKEAI_OUTDIR }}
name: results_${{ matrix.requirements-file }}_${{ matrix.python-version }}
path: ${{ env.INVOKEAI_ROOT }}/outputs

19
.gitignore vendored
View File

@@ -1,6 +1,4 @@
# ignore default image save location and model symbolic link
.idea/
embeddings/
outputs/
models/ldm/stable-diffusion-v1/model.ckpt
**/restoration/codeformer/weights
@@ -63,20 +61,16 @@ pip-delete-this-directory.txt
htmlcov/
.tox/
.nox/
.coveragerc
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
cov.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.pytest.ini
cover/
junit/
# Translations
*.mo
@@ -200,7 +194,7 @@ checkpoints
.DS_Store
# Let the frontend manage its own gitignore
!invokeai/frontend/web/*
!frontend/*
# Scratch folder
.scratch/
@@ -215,6 +209,11 @@ gfpgan/
# config file (will be created by installer)
configs/models.yaml
# weights (will be created by installer)
models/ldm/stable-diffusion-v1/*.ckpt
models/clipseg
models/gfpgan
# ignore initfile
.invokeai
@@ -229,3 +228,9 @@ installer/install.bat
installer/install.sh
installer/update.bat
installer/update.sh
# this may be present if the user created a venv
invokeai
# no longer stored in source directory
models

256
README.md
View File

@@ -1,6 +1,6 @@
<div align="center">
![project logo](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/invoke_ai_banner.png)
![project logo](docs/assets/invoke_ai_banner.png)
# InvokeAI: A Stable Diffusion Toolkit
@@ -8,12 +8,14 @@
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
[![CI checks on main badge]][CI checks on main link] [![CI checks on dev badge]][CI checks on dev link] [![latest commit to dev badge]][latest commit to dev link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@@ -24,155 +26,57 @@
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
</div>
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
This is a fork of
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
the open source text-to-image generator. It provides a streamlined
process with various new features and options to aid the image
generation process. It runs on Windows, macOS and Linux machines, with
GPU cards with as little as 4 GB of RAM. It provides both a polished
Web interface (see below), and an easy-to-use command-line interface.
**Quick links**: [[How to Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
**Quick links**: [[How to Install](#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
_Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
<div align="center">
![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
</div>
## Table of Contents
1. [Quick Start](#getting-started-with-invokeai)
2. [Installation](#detailed-installation-instructions)
3. [Hardware Requirements](#hardware-requirements)
4. [Features](#features)
5. [Latest Changes](#latest-changes)
6. [Troubleshooting](#troubleshooting)
7. [Contributing](#contributing)
8. [Contributors](#contributors)
9. [Support](#support)
10. [Further Reading](#further-reading)
## Getting Started with InvokeAI
# Getting Started with InvokeAI
For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
### Automatic Installer (suggested for 1st time users)
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
2. Download the .zip file for your OS (Windows/macOS/Linux).
3. Unzip the file.
4. If you are on Windows, double-click on the `install.bat` script. On macOS, open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press return. On Linux, run `install.sh`.
5. Wait a while, until it is done.
6. The folder where you ran the installer from will now be filled with lots of files. If you are on Windows, double-click on the `invoke.bat` file. On macOS, open a Terminal window, drag `invoke.sh` from the folder into the Terminal, and press return. On Linux, run `invoke.sh`
7. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.
8. Type `banana sushi` in the box on the top left and click `Invoke`:
4. If you are on Windows, double-click on the `install.bat` script. On
macOS, open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. On Linux, run `install.sh`.
<div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>
5. You'll be asked to confirm the location of the folder in which
to install InvokeAI and its image generation model files. Pick a
location with at least 15 GB of free memory. More if you plan on
installing lots of models.
6. Wait while the installer does its thing. After installing the software,
the installer will launch a script that lets you configure InvokeAI and
select a set of starting image generation models.
7. Find the folder that InvokeAI was installed into (it is not the
same as the unpacked zip file directory!) The default location of this
folder (if you didn't change it in step 5) is `~/invokeai` on
Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.
## Table of Contents
8. On Windows systems, double-click on the `invoke.bat` file. On
macOS, open a Terminal window, drag `invoke.sh` from the folder into
the Terminal, and press return. On Linux, run `invoke.sh`
1. [Installation](#installation)
2. [Hardware Requirements](#hardware-requirements)
3. [Features](#features)
4. [Latest Changes](#latest-changes)
5. [Troubleshooting](#troubleshooting)
6. [Contributing](#contributing)
7. [Contributors](#contributors)
8. [Support](#support)
9. [Further Reading](#further-reading)
9. Press 2 to open the "browser-based UI", press enter/return, wait a
minute or two for Stable Diffusion to start up, then open your browser
and go to http://localhost:9090.
10. Type `banana sushi` in the box on the top left and click `Invoke`
### Command-Line Installation (for users familiar with Terminals)
You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are
not supported.
1. Open a command-line window on your machine. The PowerShell is recommended for Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
```terminal
mkdir invokeai
```
3. Create a virtual environment named `.venv` inside this directory and activate it:
```terminal
cd invokeai
python -m venv .venv --prompt InvokeAI
```
4. Activate the virtual environment (do it every time you run InvokeAI)
_For Linux/Mac users:_
```sh
source .venv/bin/activate
```
_For Windows users:_
```ps
.venv\Scripts\activate
```
5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
```
_For Linux with an AMD GPU:_
```sh
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
_For Macintoshes, either Intel or M1/M2:_
```sh
pip install InvokeAI --use-pep517
```
6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
```terminal
invokeai-configure
```
7. Launch the web server (do it every time you run InvokeAI):
```terminal
invokeai --web
```
8. Point your browser to http://localhost:9090 to bring up the web interface.
9. Type `banana sushi` in the box on the top left and click `Invoke`.
Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
### Detailed Installation Instructions
### Installation
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
@@ -180,90 +84,90 @@ AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
## Hardware Requirements
### Hardware Requirements
InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).
#### System
### System
You will need one of the following:
You wil need one of the following:
- An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4GB or more VRAM memory. (Linux only)
We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.
### Memory
#### Memory
- At least 12 GB Main Memory RAM.
### Disk
#### Disk
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
## Features
**Note**
Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)
If you have a Nvidia 10xx series card (e.g. the 1080ti), please
run the dream script in full-precision mode as shown below.
### *Web Server & UI*
Similarly, specify full-precision mode on Apple M1 hardware.
InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
Precision is auto configured based on the device. If however you encounter
errors like 'expected type Float but found Half' or 'not implemented for Half'
you can try starting `invoke.py` with the `--precision=float32` flag to your initialization command
### *Unified Canvas*
```bash
(invokeai) ~/InvokeAI$ python scripts/invoke.py --precision=float32
```
Or by updating your InvokeAI configuration file with this argument.
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### Features
### *Advanced Prompt Syntax*
#### Major Features
InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.
- [Web Server](https://invoke-ai.github.io/InvokeAI/features/WEB/)
- [Interactive Command Line Interface](https://invoke-ai.github.io/InvokeAI/features/CLI/)
- [Image To Image](https://invoke-ai.github.io/InvokeAI/features/IMG2IMG/)
- [Inpainting Support](https://invoke-ai.github.io/InvokeAI/features/INPAINTING/)
- [Outpainting Support](https://invoke-ai.github.io/InvokeAI/features/OUTPAINTING/)
- [Upscaling, face-restoration and outpainting](https://invoke-ai.github.io/InvokeAI/features/POSTPROCESS/)
- [Reading Prompts From File](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#reading-prompts-from-a-file)
- [Prompt Blending](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#prompt-blending)
- [Thresholding and Perlin Noise Initialization Options](https://invoke-ai.github.io/InvokeAI/features/OTHER/#thresholding-and-perlin-noise-initialization-options)
- [Negative/Unconditioned Prompts](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#negative-and-unconditioned-prompts)
- [Variations](https://invoke-ai.github.io/InvokeAI/features/VARIATIONS/)
- [Personalizing Text-to-Image Generation](https://invoke-ai.github.io/InvokeAI/features/TEXTUAL_INVERSION/)
- [Simplified API for text to image generation](https://invoke-ai.github.io/InvokeAI/features/OTHER/#simplified-api)
### *Command Line Interface*
#### Other Features
For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
- *Popular Sampler Support*
- *Upscaling & Face Restoration Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
### Coming Soon
- *Node-Based Architecture & UI*
- And more...
- [Google Colab](https://invoke-ai.github.io/InvokeAI/features/OTHER/#google-colab)
- [Seamless Tiling](https://invoke-ai.github.io/InvokeAI/features/OTHER/#seamless-tiling)
- [Shortcut: Reusing Seeds](https://invoke-ai.github.io/InvokeAI/features/OTHER/#shortcuts-reusing-seeds)
- [Preload Models](https://invoke-ai.github.io/InvokeAI/features/OTHER/#preload-models)
### Latest Changes
For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).
For our latest changes, view our [Release Notes](https://github.com/invoke-ai/InvokeAI/releases)
## Troubleshooting
### Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
## Contributing
# Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with translation, please see our [translation guide](docs/other/TRANSLATION.md).
If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
@@ -277,11 +181,15 @@ This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
their time, hard work and effort.
Thanks to [Weblate](https://weblate.org/) for generously providing translation services to this project.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the Discord.
For support, please use this repository's GitHub Issues tracking service. Feel free to send me an
email if you use and like the script.
Original portions of the software are Copyright (c) 2023 by respective contributors.
Original portions of the software are Copyright (c) 2022
[Lincoln D. Stein](https://github.com/lstein)
### Further Reading
Please see the original README for more information on this software and underlying algorithm,
located in the file [README-CompViz.md](https://invoke-ai.github.io/InvokeAI/other/README-CompViz/).

(Diffs for 34 binary image files omitted: for each, the compare view reports the same file size before and after the change; no file names or dimensions are shown.)
File diff suppressed because it is too large.

View File

@@ -0,0 +1,55 @@
import argparse
import os
from ldm.invoke.args import PRECISION_CHOICES
def create_cmd_parser():
parser = argparse.ArgumentParser(description="InvokeAI web UI")
parser.add_argument(
"--host",
type=str,
help="The host to serve on",
default="localhost",
)
parser.add_argument("--port", type=int, help="The port to serve on", default=9090)
parser.add_argument(
"--cors",
nargs="*",
type=str,
help="Additional allowed origins, comma-separated",
)
parser.add_argument(
"--embedding_path",
type=str,
help="Path to a pre-trained embedding manager checkpoint - can only be set on command line",
)
# TODO: Can't get flask to serve images from any dir (saving to the dir does work when specified)
# parser.add_argument(
# "--output_dir",
# default="outputs/",
# type=str,
# help="Directory for output images",
# )
parser.add_argument(
"-v",
"--verbose",
action="store_true",
help="Enables verbose logging",
)
parser.add_argument(
"--precision",
dest="precision",
type=str,
choices=PRECISION_CHOICES,
metavar="PRECISION",
help=f'Set model precision. Defaults to auto selected based on device. Options: {", ".join(PRECISION_CHOICES)}',
default="auto",
)
parser.add_argument(
'--free_gpu_mem',
dest='free_gpu_mem',
action='store_true',
help='Force free gpu memory before final decoding',
)
return parser
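A hypothetical invocation of the parser above, with illustrative argument values (`float32` is assumed to be among `PRECISION_CHOICES`):

```python
parser = create_cmd_parser()
opts = parser.parse_args(["--host", "0.0.0.0", "--port", "9090", "--precision", "float32"])
print(opts.host, opts.port, opts.precision, opts.free_gpu_mem)  # 0.0.0.0 9090 float32 False
```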

View File

@@ -0,0 +1,117 @@
from PIL import Image, ImageChops
from PIL.Image import Image as ImageType
from typing import Union, Literal
# https://stackoverflow.com/questions/43864101/python-pil-check-if-image-is-transparent
def check_for_any_transparency(img: Union[ImageType, str]) -> bool:
if type(img) is str:
img = Image.open(img)  # open the image when a file path is passed
if img.info.get("transparency", None) is not None:
return True
if img.mode == "P":
transparent = img.info.get("transparency", -1)
for _, index in img.getcolors():
if index == transparent:
return True
elif img.mode == "RGBA":
extrema = img.getextrema()
if extrema[3][0] < 255:
return True
return False
def get_canvas_generation_mode(
init_img: Union[ImageType, str], init_mask: Union[ImageType, str]
) -> Literal["txt2img", "outpainting", "inpainting", "img2img",]:
if type(init_img) is str:
init_img = Image.open(init_img)
if type(init_mask) is str:
init_mask = Image.open(init_mask)
init_img = init_img.convert("RGBA")
# Get alpha from init_img
init_img_alpha = init_img.split()[-1]
init_img_alpha_mask = init_img_alpha.convert("L")
init_img_has_transparency = check_for_any_transparency(init_img)
if init_img_has_transparency:
init_img_is_fully_transparent = (
True if init_img_alpha_mask.getbbox() is None else False
)
"""
Mask images are white in areas where no change should be made, black where changes
should be made.
"""
# Fit the mask to init_img's size and convert it to greyscale
init_mask = init_mask.resize(init_img.size).convert("L")
"""
PIL.Image.getbbox() returns the bounding box of non-zero areas of the image, so we first
invert the mask image so that masked areas are white and other areas black == zero.
getbbox() now tells us if there are any masked areas.
"""
init_mask_bbox = ImageChops.invert(init_mask).getbbox()
init_mask_exists = False if init_mask_bbox is None else True
if init_img_has_transparency:
if init_img_is_fully_transparent:
return "txt2img"
else:
return "outpainting"
else:
if init_mask_exists:
return "inpainting"
else:
return "img2img"
def main():
# Testing
init_img_opaque = "test_images/init-img_opaque.png"
init_img_partial_transparency = "test_images/init-img_partial_transparency.png"
init_img_full_transparency = "test_images/init-img_full_transparency.png"
init_mask_no_mask = "test_images/init-mask_no_mask.png"
init_mask_has_mask = "test_images/init-mask_has_mask.png"
print(
"OPAQUE IMAGE, NO MASK, expect img2img, got ",
get_canvas_generation_mode(init_img_opaque, init_mask_no_mask),
)
print(
"IMAGE WITH TRANSPARENCY, NO MASK, expect outpainting, got ",
get_canvas_generation_mode(
init_img_partial_transparency, init_mask_no_mask
),
)
print(
"FULLY TRANSPARENT IMAGE NO MASK, expect txt2img, got ",
get_canvas_generation_mode(init_img_full_transparency, init_mask_no_mask),
)
print(
"OPAQUE IMAGE, WITH MASK, expect inpainting, got ",
get_canvas_generation_mode(init_img_opaque, init_mask_has_mask),
)
print(
"IMAGE WITH TRANSPARENCY, WITH MASK, expect outpainting, got ",
get_canvas_generation_mode(
init_img_partial_transparency, init_mask_has_mask
),
)
print(
"FULLY TRANSPARENT IMAGE WITH MASK, expect txt2img, got ",
get_canvas_generation_mode(init_img_full_transparency, init_mask_has_mask),
)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,71 @@
from backend.modules.parse_seed_weights import parse_seed_weights
import argparse
SAMPLER_CHOICES = [
"ddim",
"k_dpm_2_a",
"k_dpm_2",
"k_dpmpp_2_a",
"k_dpmpp_2",
"k_euler_a",
"k_euler",
"k_heun",
"k_lms",
"plms",
]
def parameters_to_command(params):
"""
Converts dict of parameters into a `invoke.py` REPL command.
"""
switches = list()
if "prompt" in params:
switches.append(f'"{params["prompt"]}"')
if "steps" in params:
switches.append(f'-s {params["steps"]}')
if "seed" in params:
switches.append(f'-S {params["seed"]}')
if "width" in params:
switches.append(f'-W {params["width"]}')
if "height" in params:
switches.append(f'-H {params["height"]}')
if "cfg_scale" in params:
switches.append(f'-C {params["cfg_scale"]}')
if "sampler_name" in params:
switches.append(f'-A {params["sampler_name"]}')
if "seamless" in params and params["seamless"] == True:
switches.append(f"--seamless")
if "hires_fix" in params and params["hires_fix"] == True:
switches.append(f"--hires")
if "init_img" in params and len(params["init_img"]) > 0:
switches.append(f'-I {params["init_img"]}')
if "init_mask" in params and len(params["init_mask"]) > 0:
switches.append(f'-M {params["init_mask"]}')
if "init_color" in params and len(params["init_color"]) > 0:
switches.append(f'--init_color {params["init_color"]}')
if "strength" in params and "init_img" in params:
switches.append(f'-f {params["strength"]}')
if "fit" in params and params["fit"] == True:
switches.append(f"--fit")
if "facetool" in params:
switches.append(f'-ft {params["facetool"]}')
if "facetool_strength" in params and params["facetool_strength"]:
switches.append(f'-G {params["facetool_strength"]}')
elif "gfpgan_strength" in params and params["gfpgan_strength"]:
switches.append(f'-G {params["gfpgan_strength"]}')
if "codeformer_fidelity" in params:
switches.append(f'-cf {params["codeformer_fidelity"]}')
if "upscale" in params and params["upscale"]:
switches.append(f'-U {params["upscale"][0]} {params["upscale"][1]}')
if "variation_amount" in params and params["variation_amount"] > 0:
switches.append(f'-v {params["variation_amount"]}')
if "with_variations" in params:
seed_weight_pairs = ",".join(
f"{seed}:{weight}" for seed, weight in params["with_variations"]
)
switches.append(f"-V {seed_weight_pairs}")
return " ".join(switches)

View File

(binary image file; 2.7 KiB before and after)

View File

(binary image file; 292 KiB before and after)

View File

(binary image file; 164 KiB before and after)

View File

(binary image file; 9.5 KiB before and after)

View File

(binary image file; 3.4 KiB before and after)

View File

@ -147,7 +147,7 @@ echo ***** Installed invoke launcher script ******
rd /s /q binary_installer installer_files
@rem preload the models
call .venv\Scripts\python ldm\invoke\config\invokeai_configure.py
call .venv\Scripts\python scripts\configure_invokeai.py
set err_msg=----- model download clone failed -----
if %errorlevel% neq 0 goto err_exit
deactivate

View File

@ -2,10 +2,9 @@
--extra-index-url https://download.pytorch.org/whl/torch_stable.html
--extra-index-url https://download.pytorch.org/whl/cu116
--trusted-host https://download.pytorch.org
accelerate~=0.15
accelerate~=0.14
albumentations
diffusers[torch]~=0.11
einops
diffusers
eventlet
flask_cors
flask_socketio

View File

@ -0,0 +1,80 @@
stable-diffusion-1.5:
description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
repo_id: runwayml/stable-diffusion-v1-5
config: v1-inference.yaml
file: v1-5-pruned-emaonly.ckpt
recommended: true
width: 512
height: 512
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
config: v1-inpainting-inference.yaml
file: sd-v1-5-inpainting.ckpt
recommended: True
width: 512
height: 512
ft-mse-improved-autoencoder-840000:
description: StabilityAI improved autoencoder fine-tuned for human faces (recommended; 335 MB)
repo_id: stabilityai/sd-vae-ft-mse-original
config: VAE/default
file: vae-ft-mse-840000-ema-pruned.ckpt
recommended: True
width: 512
height: 512
stable-diffusion-1.4:
description: The original Stable Diffusion version 1.4 weight file (4.27 GB)
repo_id: CompVis/stable-diffusion-v-1-4-original
config: v1-inference.yaml
file: sd-v1-4.ckpt
recommended: False
width: 512
height: 512
waifu-diffusion-1.3:
description: Stable Diffusion 1.4 fine tuned on anime-styled images (4.27 GB)
repo_id: hakurei/waifu-diffusion-v1-3
config: v1-inference.yaml
file: model-epoch09-float32.ckpt
recommended: False
width: 512
height: 512
trinart-2.0:
description: An SD model finetuned with ~40,000 assorted high resolution manga/anime-style pictures (2.13 GB)
repo_id: naclbit/trinart_stable_diffusion_v2
config: v1-inference.yaml
file: trinart2_step95000.ckpt
recommended: False
width: 512
height: 512
trinart_characters-1.0:
description: An SD model finetuned with 19.2M anime/manga style images (2.13 GB)
repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
config: v1-inference.yaml
file: trinart_characters_it4_v1.ckpt
recommended: False
width: 512
height: 512
trinart_vae:
description: Custom autoencoder for trinart_characters
repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
config: VAE/trinart
file: autoencoder_fix_kl-f8-trinart_characters.ckpt
recommended: False
width: 512
height: 512
papercut-1.0:
description: SD 1.5 fine-tuned for papercut art (use "PaperCut" in your prompts) (2.13 GB)
repo_id: Fictiverse/Stable_Diffusion_PaperCut_Model
config: v1-inference.yaml
file: PaperCut_v1.ckpt
recommended: False
width: 512
height: 512
voxel_art-1.0:
description: Stable Diffusion trained on voxel art (use "VoxelArt" in your prompts) (4.27 GB)
repo_id: Fictiverse/Stable_Diffusion_VoxelArt_Model
config: v1-inference.yaml
file: VoxelArt_v1.ckpt
recommended: False
width: 512
height: 512

View File

@ -0,0 +1,29 @@
# This file describes the alternative machine learning models
# available to the InvokeAI script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
stable-diffusion-1.5:
description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
default: true
stable-diffusion-1.4:
description: Stable Diffusion inference model version 1.4
config: configs/stable-diffusion/v1-inference.yaml
weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
width: 512
height: 512
inpainting-1.5:
weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
config: configs/stable-diffusion/v1-inpainting-inference.yaml
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
description: RunwayML SD 1.5 model optimized for inpainting
width: 512
height: 512
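As a quick sanity check of the stanza format documented above, the file can be loaded with any YAML parser; a minimal sketch (it assumes PyYAML and the conventional configs/models.yaml path):

```python
import yaml  # pip install pyyaml

with open("configs/models.yaml") as f:
    models = yaml.safe_load(f)

# Each top-level key names a model; config, weights, width and height are
# the required fields per the comments above (vae and default are optional).
for name, stanza in models.items():
    assert {"config", "weights", "width", "height"} <= stanza.keys()
```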

803
configs/sd-concepts.txt Normal file
View File

@ -0,0 +1,803 @@
sd-concepts-library/001glitch-core
sd-concepts-library/2814-roth
sd-concepts-library/3d-female-cyborgs
sd-concepts-library/4tnght
sd-concepts-library/80s-anime-ai
sd-concepts-library/80s-anime-ai-being
sd-concepts-library/852style-girl
sd-concepts-library/8bit
sd-concepts-library/8sconception
sd-concepts-library/Aflac-duck
sd-concepts-library/Akitsuki
sd-concepts-library/Atako
sd-concepts-library/Exodus-Styling
sd-concepts-library/RINGAO
sd-concepts-library/a-female-hero-from-the-legend-of-mir
sd-concepts-library/a-hat-kid
sd-concepts-library/a-tale-of-two-empires
sd-concepts-library/aadhav-face
sd-concepts-library/aavegotchi
sd-concepts-library/abby-face
sd-concepts-library/abstract-concepts
sd-concepts-library/accurate-angel
sd-concepts-library/agm-style-nao
sd-concepts-library/aj-fosik
sd-concepts-library/alberto-mielgo
sd-concepts-library/alex-portugal
sd-concepts-library/alex-thumbnail-object-2000-steps
sd-concepts-library/aleyna-tilki
sd-concepts-library/alf
sd-concepts-library/alicebeta
sd-concepts-library/alien-avatar
sd-concepts-library/alisa
sd-concepts-library/all-rings-albuns
sd-concepts-library/altvent
sd-concepts-library/altyn-helmet
sd-concepts-library/amine
sd-concepts-library/amogus
sd-concepts-library/anders-zorn
sd-concepts-library/angus-mcbride-style
sd-concepts-library/animalve3-1500seq
sd-concepts-library/anime-background-style
sd-concepts-library/anime-background-style-v2
sd-concepts-library/anime-boy
sd-concepts-library/anime-girl
sd-concepts-library/anyXtronXredshift
sd-concepts-library/anya-forger
sd-concepts-library/apex-wingman
sd-concepts-library/apulian-rooster-v0-1
sd-concepts-library/arcane-face
sd-concepts-library/arcane-style-jv
sd-concepts-library/arcimboldo-style
sd-concepts-library/armando-reveron-style
sd-concepts-library/armor-concept
sd-concepts-library/arq-render
sd-concepts-library/art-brut
sd-concepts-library/arthur1
sd-concepts-library/artist-yukiko-kanagai
sd-concepts-library/arwijn
sd-concepts-library/ashiok
sd-concepts-library/at-wolf-boy-object
sd-concepts-library/atm-ant
sd-concepts-library/atm-ant-2
sd-concepts-library/axe-tattoo
sd-concepts-library/ayush-spider-spr
sd-concepts-library/azura-from-vibrant-venture
sd-concepts-library/ba-shiroko
sd-concepts-library/babau
sd-concepts-library/babs-bunny
sd-concepts-library/babushork
sd-concepts-library/backrooms
sd-concepts-library/bad_Hub_Hugh
sd-concepts-library/bada-club
sd-concepts-library/baldi
sd-concepts-library/baluchitherian
sd-concepts-library/bamse
sd-concepts-library/bamse-og-kylling
sd-concepts-library/bee
sd-concepts-library/beholder
sd-concepts-library/beldam
sd-concepts-library/belen
sd-concepts-library/bella-goth
sd-concepts-library/belle-delphine
sd-concepts-library/bert-muppet
sd-concepts-library/better-collage3
sd-concepts-library/between2-mt-fade
sd-concepts-library/birb-style
sd-concepts-library/black-and-white-design
sd-concepts-library/black-waifu
sd-concepts-library/bloo
sd-concepts-library/blue-haired-boy
sd-concepts-library/blue-zombie
sd-concepts-library/blue-zombiee
sd-concepts-library/bluebey
sd-concepts-library/bluebey-2
sd-concepts-library/bobs-burgers
sd-concepts-library/boissonnard
sd-concepts-library/bonzi-monkey
sd-concepts-library/borderlands
sd-concepts-library/bored-ape-textual-inversion
sd-concepts-library/boris-anderson
sd-concepts-library/bozo-22
sd-concepts-library/breakcore
sd-concepts-library/brittney-williams-art
sd-concepts-library/bruma
sd-concepts-library/brunnya
sd-concepts-library/buddha-statue
sd-concepts-library/bullvbear
sd-concepts-library/button-eyes
sd-concepts-library/canadian-goose
sd-concepts-library/canary-cap
sd-concepts-library/cancer_style
sd-concepts-library/captain-haddock
sd-concepts-library/captainkirb
sd-concepts-library/car-toy-rk
sd-concepts-library/carasibana
sd-concepts-library/carlitos-el-mago
sd-concepts-library/carrascharacter
sd-concepts-library/cartoona-animals
sd-concepts-library/cat-toy
sd-concepts-library/centaur
sd-concepts-library/cgdonny1
sd-concepts-library/cham
sd-concepts-library/chandra-nalaar
sd-concepts-library/char-con
sd-concepts-library/character-pingu
sd-concepts-library/cheburashka
sd-concepts-library/chen-1
sd-concepts-library/child-zombie
sd-concepts-library/chillpill
sd-concepts-library/chonkfrog
sd-concepts-library/chop
sd-concepts-library/christo-person
sd-concepts-library/chuck-walton
sd-concepts-library/chucky
sd-concepts-library/chungus-poodl-pet
sd-concepts-library/cindlop
sd-concepts-library/collage-cutouts
sd-concepts-library/collage14
sd-concepts-library/collage3
sd-concepts-library/collage3-hubcity
sd-concepts-library/cologne
sd-concepts-library/color-page
sd-concepts-library/colossus
sd-concepts-library/command-and-conquer-remastered-cameos
sd-concepts-library/concept-art
sd-concepts-library/conner-fawcett-style
sd-concepts-library/conway-pirate
sd-concepts-library/coop-himmelblau
sd-concepts-library/coraline
sd-concepts-library/cornell-box
sd-concepts-library/cortana
sd-concepts-library/covid-19-rapid-test
sd-concepts-library/cow-uwu
sd-concepts-library/cowboy
sd-concepts-library/crazy-1
sd-concepts-library/crazy-2
sd-concepts-library/crb-portraits
sd-concepts-library/crb-surrealz
sd-concepts-library/crbart
sd-concepts-library/crested-gecko
sd-concepts-library/crinos-form-garou
sd-concepts-library/cry-baby-style
sd-concepts-library/crybaby-style-2-0
sd-concepts-library/csgo-awp-object
sd-concepts-library/csgo-awp-texture-map
sd-concepts-library/cubex
sd-concepts-library/cumbia-peruana
sd-concepts-library/cute-bear
sd-concepts-library/cute-cat
sd-concepts-library/cute-game-style
sd-concepts-library/cyberpunk-lucy
sd-concepts-library/dabotap
sd-concepts-library/dan-mumford
sd-concepts-library/dan-seagrave-art-style
sd-concepts-library/dark-penguin-pinguinanimations
sd-concepts-library/darkpenguinanimatronic
sd-concepts-library/darkplane
sd-concepts-library/david-firth-artstyle
sd-concepts-library/david-martinez-cyberpunk
sd-concepts-library/david-martinez-edgerunners
sd-concepts-library/david-moreno-architecture
sd-concepts-library/daycare-attendant-sun-fnaf
sd-concepts-library/ddattender
sd-concepts-library/degods
sd-concepts-library/degodsheavy
sd-concepts-library/depthmap
sd-concepts-library/depthmap-style
sd-concepts-library/design
sd-concepts-library/detectivedinosaur1
sd-concepts-library/diaosu-toy
sd-concepts-library/dicoo
sd-concepts-library/dicoo2
sd-concepts-library/dishonored-portrait-styles
sd-concepts-library/disquieting-muses
sd-concepts-library/ditko
sd-concepts-library/dlooak
sd-concepts-library/doc
sd-concepts-library/doener-red-line-art
sd-concepts-library/dog
sd-concepts-library/dog-django
sd-concepts-library/doge-pound
sd-concepts-library/dong-ho
sd-concepts-library/dong-ho2
sd-concepts-library/doose-s-realistic-art-style
sd-concepts-library/dq10-anrushia
sd-concepts-library/dr-livesey
sd-concepts-library/dr-strange
sd-concepts-library/dragonborn
sd-concepts-library/dreamcore
sd-concepts-library/dreamy-painting
sd-concepts-library/drive-scorpion-jacket
sd-concepts-library/dsmuses
sd-concepts-library/dtv-pkmn
sd-concepts-library/dullboy-caricature
sd-concepts-library/duranduran
sd-concepts-library/durer-style
sd-concepts-library/dyoudim-style
sd-concepts-library/early-mishima-kurone
sd-concepts-library/eastward
sd-concepts-library/eddie
sd-concepts-library/edgerunners-style
sd-concepts-library/edgerunners-style-v2
sd-concepts-library/el-salvador-style-style
sd-concepts-library/elegant-flower
sd-concepts-library/elspeth-tirel
sd-concepts-library/eru-chitanda-casual
sd-concepts-library/erwin-olaf-style
sd-concepts-library/ettblackteapot
sd-concepts-library/explosions-cat
sd-concepts-library/eye-of-agamotto
sd-concepts-library/f-22
sd-concepts-library/facadeplace
sd-concepts-library/fairy-tale-painting-style
sd-concepts-library/fairytale
sd-concepts-library/fang-yuan-001
sd-concepts-library/faraon-love-shady
sd-concepts-library/fasina
sd-concepts-library/felps
sd-concepts-library/female-kpop-singer
sd-concepts-library/fergal-cat
sd-concepts-library/filename-2
sd-concepts-library/fileteado-porteno
sd-concepts-library/final-fantasy-logo
sd-concepts-library/fireworks-over-water
sd-concepts-library/fish
sd-concepts-library/flag-ussr
sd-concepts-library/flatic
sd-concepts-library/floral
sd-concepts-library/fluid-acrylic-jellyfish-creatures-style-of-carl-ingram-art
sd-concepts-library/fnf-boyfriend
sd-concepts-library/fold-structure
sd-concepts-library/fox-purple
sd-concepts-library/fractal
sd-concepts-library/fractal-flame
sd-concepts-library/fractal-temple-style
sd-concepts-library/frank-frazetta
sd-concepts-library/franz-unterberger
sd-concepts-library/freddy-fazbear
sd-concepts-library/freefonix-style
sd-concepts-library/furrpopasthetic
sd-concepts-library/fursona
sd-concepts-library/fzk
sd-concepts-library/galaxy-explorer
sd-concepts-library/ganyu-genshin-impact
sd-concepts-library/garcon-the-cat
sd-concepts-library/garfield-pizza-plush
sd-concepts-library/garfield-pizza-plush-v2
sd-concepts-library/gba-fe-class-cards
sd-concepts-library/gba-pokemon-sprites
sd-concepts-library/geggin
sd-concepts-library/ggplot2
sd-concepts-library/ghost-style
sd-concepts-library/ghostproject-men
sd-concepts-library/gibasachan-v0
sd-concepts-library/gim
sd-concepts-library/gio
sd-concepts-library/giygas
sd-concepts-library/glass-pipe
sd-concepts-library/glass-prism-cube
sd-concepts-library/glow-forest
sd-concepts-library/goku
sd-concepts-library/gram-tops
sd-concepts-library/green-blue-shanshui
sd-concepts-library/green-tent
sd-concepts-library/grifter
sd-concepts-library/grisstyle
sd-concepts-library/grit-toy
sd-concepts-library/gt-color-paint-2
sd-concepts-library/gta5-artwork
sd-concepts-library/guttestreker
sd-concepts-library/gymnastics-leotard-v2
sd-concepts-library/half-life-2-dog
sd-concepts-library/handstand
sd-concepts-library/hanfu-anime-style
sd-concepts-library/happy-chaos
sd-concepts-library/happy-person12345
sd-concepts-library/happy-person12345-assets
sd-concepts-library/harley-quinn
sd-concepts-library/harmless-ai-1
sd-concepts-library/harmless-ai-house-style-1
sd-concepts-library/hd-emoji
sd-concepts-library/heather
sd-concepts-library/henjo-techno-show
sd-concepts-library/herge-style
sd-concepts-library/hiten-style-nao
sd-concepts-library/hitokomoru-style-nao
sd-concepts-library/hiyuki-chan
sd-concepts-library/hk-bamboo
sd-concepts-library/hk-betweenislands
sd-concepts-library/hk-bicycle
sd-concepts-library/hk-blackandwhite
sd-concepts-library/hk-breakfast
sd-concepts-library/hk-buses
sd-concepts-library/hk-clouds
sd-concepts-library/hk-goldbuddha
sd-concepts-library/hk-goldenlantern
sd-concepts-library/hk-hkisland
sd-concepts-library/hk-leaves
sd-concepts-library/hk-market
sd-concepts-library/hk-oldcamera
sd-concepts-library/hk-opencamera
sd-concepts-library/hk-peach
sd-concepts-library/hk-phonevax
sd-concepts-library/hk-streetpeople
sd-concepts-library/hk-vintage
sd-concepts-library/hoi4
sd-concepts-library/hoi4-leaders
sd-concepts-library/homestuck-sprite
sd-concepts-library/homestuck-troll
sd-concepts-library/hours-sentry-fade
sd-concepts-library/hours-style
sd-concepts-library/hrgiger-drmacabre
sd-concepts-library/huang-guang-jian
sd-concepts-library/huatli
sd-concepts-library/huayecai820-greyscale
sd-concepts-library/hub-city
sd-concepts-library/hubris-oshri
sd-concepts-library/huckleberry
sd-concepts-library/hydrasuit
sd-concepts-library/i-love-chaos
sd-concepts-library/ibere-thenorio
sd-concepts-library/ic0n
sd-concepts-library/ie-gravestone
sd-concepts-library/ikea-fabler
sd-concepts-library/illustration-style
sd-concepts-library/ilo-kunst
sd-concepts-library/ilya-shkipin
sd-concepts-library/im-poppy
sd-concepts-library/ina-art
sd-concepts-library/indian-watercolor-portraits
sd-concepts-library/indiana
sd-concepts-library/ingmar-bergman
sd-concepts-library/insidewhale
sd-concepts-library/interchanges
sd-concepts-library/inuyama-muneto-style-nao
sd-concepts-library/irasutoya
sd-concepts-library/iridescent-illustration-style
sd-concepts-library/iridescent-photo-style
sd-concepts-library/isabell-schulte-pv-pvii-3000steps
sd-concepts-library/isabell-schulte-pviii-1-image-style
sd-concepts-library/isabell-schulte-pviii-1024px-1500-steps-style
sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style
sd-concepts-library/isabell-schulte-pviii-4-tiles-1-lr-3000-steps-style
sd-concepts-library/isabell-schulte-pviii-4-tiles-3-lr-5000-steps-style
sd-concepts-library/isabell-schulte-pviii-4tiles-500steps
sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps
sd-concepts-library/isabell-schulte-pviii-style
sd-concepts-library/isometric-tile-test
sd-concepts-library/jacqueline-the-unicorn
sd-concepts-library/james-web-space-telescope
sd-concepts-library/jamie-hewlett-style
sd-concepts-library/jamiels
sd-concepts-library/jang-sung-rak-style
sd-concepts-library/jetsetdreamcastcovers
sd-concepts-library/jin-kisaragi
sd-concepts-library/jinjoon-lee-they
sd-concepts-library/jm-bergling-monogram
sd-concepts-library/joe-mad
sd-concepts-library/joe-whiteford-art-style
sd-concepts-library/joemad
sd-concepts-library/john-blanche
sd-concepts-library/johnny-silverhand
sd-concepts-library/jojo-bizzare-adventure-manga-lineart
sd-concepts-library/jos-de-kat
sd-concepts-library/junji-ito-artstyle
sd-concepts-library/kaleido
sd-concepts-library/kaneoya-sachiko
sd-concepts-library/kanovt
sd-concepts-library/kanv1
sd-concepts-library/karan-gloomy
sd-concepts-library/karl-s-lzx-1
sd-concepts-library/kasumin
sd-concepts-library/kawaii-colors
sd-concepts-library/kawaii-girl-plus-object
sd-concepts-library/kawaii-girl-plus-style
sd-concepts-library/kawaii-girl-plus-style-v1-1
sd-concepts-library/kay
sd-concepts-library/kaya-ghost-assasin
sd-concepts-library/ki
sd-concepts-library/kinda-sus
sd-concepts-library/kings-quest-agd
sd-concepts-library/kiora
sd-concepts-library/kira-sensei
sd-concepts-library/kirby
sd-concepts-library/klance
sd-concepts-library/kodakvision500t
sd-concepts-library/kogatan-shiny
sd-concepts-library/kogecha
sd-concepts-library/kojima-ayami
sd-concepts-library/koko-dog
sd-concepts-library/kuvshinov
sd-concepts-library/kysa-v-style
sd-concepts-library/laala-character
sd-concepts-library/larrette
sd-concepts-library/lavko
sd-concepts-library/lazytown-stephanie
sd-concepts-library/ldr
sd-concepts-library/ldrs
sd-concepts-library/led-toy
sd-concepts-library/lego-astronaut
sd-concepts-library/leica
sd-concepts-library/leif-jones
sd-concepts-library/lex
sd-concepts-library/liliana
sd-concepts-library/liliana-vess
sd-concepts-library/liminal-spaces-2-0
sd-concepts-library/liminalspaces
sd-concepts-library/line-art
sd-concepts-library/line-style
sd-concepts-library/linnopoke
sd-concepts-library/liquid-light
sd-concepts-library/liqwid-aquafarmer
sd-concepts-library/lizardman
sd-concepts-library/loab-character
sd-concepts-library/loab-style
sd-concepts-library/lofa
sd-concepts-library/logo-with-face-on-shield
sd-concepts-library/lolo
sd-concepts-library/looney-anime
sd-concepts-library/lost-rapper
sd-concepts-library/lphr-style
sd-concepts-library/lucario
sd-concepts-library/lucky-luke
sd-concepts-library/lugal-ki-en
sd-concepts-library/luinv2
sd-concepts-library/lula-13
sd-concepts-library/lumio
sd-concepts-library/lxj-o4
sd-concepts-library/m-geo
sd-concepts-library/m-geoo
sd-concepts-library/madhubani-art
sd-concepts-library/mafalda-character
sd-concepts-library/magic-pengel
sd-concepts-library/malika-favre-art-style
sd-concepts-library/manga-style
sd-concepts-library/marbling-art
sd-concepts-library/margo
sd-concepts-library/marty
sd-concepts-library/marty6
sd-concepts-library/mass
sd-concepts-library/masyanya
sd-concepts-library/masyunya
sd-concepts-library/mate
sd-concepts-library/matthew-stone
sd-concepts-library/mattvidpro
sd-concepts-library/maurice-quentin-de-la-tour-style
sd-concepts-library/maus
sd-concepts-library/max-foley
sd-concepts-library/mayor-richard-irvin
sd-concepts-library/mechasoulall
sd-concepts-library/medazzaland
sd-concepts-library/memnarch-mtg
sd-concepts-library/metagabe
sd-concepts-library/meyoco
sd-concepts-library/meze-audio-elite-headphones
sd-concepts-library/midjourney-style
sd-concepts-library/mikako-method
sd-concepts-library/mikako-methodi2i
sd-concepts-library/miko-3-robot
sd-concepts-library/milady
sd-concepts-library/mildemelwe-style
sd-concepts-library/million-live-akane-15k
sd-concepts-library/million-live-akane-3k
sd-concepts-library/million-live-akane-shifuku-3k
sd-concepts-library/million-live-spade-q-object-3k
sd-concepts-library/million-live-spade-q-style-3k
sd-concepts-library/minecraft-concept-art
sd-concepts-library/mishima-kurone
sd-concepts-library/mizkif
sd-concepts-library/moeb-style
sd-concepts-library/moebius
sd-concepts-library/mokoko
sd-concepts-library/mokoko-seed
sd-concepts-library/monster-girl
sd-concepts-library/monster-toy
sd-concepts-library/monte-novo
sd-concepts-library/moo-moo
sd-concepts-library/morino-hon-style
sd-concepts-library/moxxi
sd-concepts-library/msg
sd-concepts-library/mtg-card
sd-concepts-library/mtl-longsky
sd-concepts-library/mu-sadr
sd-concepts-library/munch-leaks-style
sd-concepts-library/museum-by-coop-himmelblau
sd-concepts-library/muxoyara
sd-concepts-library/my-hero-academia-style
sd-concepts-library/my-mug
sd-concepts-library/mycat
sd-concepts-library/mystical-nature
sd-concepts-library/naf
sd-concepts-library/nahiri
sd-concepts-library/namine-ritsu
sd-concepts-library/naoki-saito
sd-concepts-library/nard-style
sd-concepts-library/naruto
sd-concepts-library/natasha-johnston
sd-concepts-library/nathan-wyatt
sd-concepts-library/naval-portrait
sd-concepts-library/nazuna
sd-concepts-library/nebula
sd-concepts-library/ned-flanders
sd-concepts-library/neon-pastel
sd-concepts-library/new-priests
sd-concepts-library/nic-papercuts
sd-concepts-library/nikodim
sd-concepts-library/nissa-revane
sd-concepts-library/nixeu
sd-concepts-library/noggles
sd-concepts-library/nomad
sd-concepts-library/nouns-glasses
sd-concepts-library/obama-based-on-xi
sd-concepts-library/obama-self-2
sd-concepts-library/og-mox-style
sd-concepts-library/ohisashiburi-style
sd-concepts-library/oleg-kuvaev
sd-concepts-library/olli-olli
sd-concepts-library/on-kawara
sd-concepts-library/one-line-drawing
sd-concepts-library/onepunchman
sd-concepts-library/onzpo
sd-concepts-library/orangejacket
sd-concepts-library/ori
sd-concepts-library/ori-toor
sd-concepts-library/orientalist-art
sd-concepts-library/osaka-jyo
sd-concepts-library/osaka-jyo2
sd-concepts-library/osrsmini2
sd-concepts-library/osrstiny
sd-concepts-library/other-mother
sd-concepts-library/ouroboros
sd-concepts-library/outfit-items
sd-concepts-library/overprettified
sd-concepts-library/owl-house
sd-concepts-library/painted-by-silver-of-999
sd-concepts-library/painted-by-silver-of-999-2
sd-concepts-library/painted-student
sd-concepts-library/painting
sd-concepts-library/pantone-milk
sd-concepts-library/paolo-bonolis
sd-concepts-library/party-girl
sd-concepts-library/pascalsibertin
sd-concepts-library/pastelartstyle
sd-concepts-library/paul-noir
sd-concepts-library/pen-ink-portraits-bennorthen
sd-concepts-library/phan
sd-concepts-library/phan-s-collage
sd-concepts-library/phc
sd-concepts-library/phoenix-01
sd-concepts-library/pineda-david
sd-concepts-library/pink-beast-pastelae-style
sd-concepts-library/pintu
sd-concepts-library/pion-by-august-semionov
sd-concepts-library/piotr-jablonski
sd-concepts-library/pixel-mania
sd-concepts-library/pixel-toy
sd-concepts-library/pjablonski-style
sd-concepts-library/plant-style
sd-concepts-library/plen-ki-mun
sd-concepts-library/pokemon-conquest-sprites
sd-concepts-library/pool-test
sd-concepts-library/poolrooms
sd-concepts-library/poring-ragnarok-online
sd-concepts-library/poutine-dish
sd-concepts-library/princess-knight-art
sd-concepts-library/progress-chip
sd-concepts-library/puerquis-toy
sd-concepts-library/purplefishli
sd-concepts-library/pyramidheadcosplay
sd-concepts-library/qpt-atrium
sd-concepts-library/quiesel
sd-concepts-library/r-crumb-style
sd-concepts-library/rahkshi-bionicle
sd-concepts-library/raichu
sd-concepts-library/rail-scene
sd-concepts-library/rail-scene-style
sd-concepts-library/ralph-mcquarrie
sd-concepts-library/ransom
sd-concepts-library/rayne-weynolds
sd-concepts-library/rcrumb-portraits-style
sd-concepts-library/rd-chaos
sd-concepts-library/rd-paintings
sd-concepts-library/red-glasses
sd-concepts-library/reeducation-camp
sd-concepts-library/reksio-dog
sd-concepts-library/rektguy
sd-concepts-library/remert
sd-concepts-library/renalla
sd-concepts-library/repeat
sd-concepts-library/retro-girl
sd-concepts-library/retro-mecha-rangers
sd-concepts-library/retropixelart-pinguin
sd-concepts-library/rex-deno
sd-concepts-library/rhizomuse-machine-bionic-sculpture
sd-concepts-library/ricar
sd-concepts-library/rickyart
sd-concepts-library/rico-face
sd-concepts-library/riker-doll
sd-concepts-library/rikiart
sd-concepts-library/rikiboy-art
sd-concepts-library/rilakkuma
sd-concepts-library/rishusei-style
sd-concepts-library/rj-palmer
sd-concepts-library/rl-pkmn-test
sd-concepts-library/road-to-ruin
sd-concepts-library/robertnava
sd-concepts-library/roblox-avatar
sd-concepts-library/roy-lichtenstein
sd-concepts-library/ruan-jia
sd-concepts-library/russian
sd-concepts-library/s1m-naoto-ohshima
sd-concepts-library/saheeli-rai
sd-concepts-library/sakimi-style
sd-concepts-library/salmonid
sd-concepts-library/sam-yang
sd-concepts-library/sanguo-guanyu
sd-concepts-library/sas-style
sd-concepts-library/scarlet-witch
sd-concepts-library/schloss-mosigkau
sd-concepts-library/scrap-style
sd-concepts-library/scratch-project
sd-concepts-library/sculptural-style
sd-concepts-library/sd-concepts-library-uma-meme
sd-concepts-library/seamless-ground
sd-concepts-library/selezneva-alisa
sd-concepts-library/sem-mac2n
sd-concepts-library/senneca
sd-concepts-library/seraphimmoonshadow-art
sd-concepts-library/sewerslvt
sd-concepts-library/she-hulk-law-art
sd-concepts-library/she-mask
sd-concepts-library/sherhook-painting
sd-concepts-library/sherhook-painting-v2
sd-concepts-library/shev-linocut
sd-concepts-library/shigure-ui-style
sd-concepts-library/shiny-polyman
sd-concepts-library/shrunken-head
sd-concepts-library/shu-doll
sd-concepts-library/shvoren-style
sd-concepts-library/sims-2-portrait
sd-concepts-library/singsing
sd-concepts-library/singsing-doll
sd-concepts-library/sintez-ico
sd-concepts-library/skyfalls
sd-concepts-library/slm
sd-concepts-library/smarties
sd-concepts-library/smiling-friend-style
sd-concepts-library/smooth-pencils
sd-concepts-library/smurf-style
sd-concepts-library/smw-map
sd-concepts-library/society-finch
sd-concepts-library/sorami-style
sd-concepts-library/spider-gwen
sd-concepts-library/spritual-monsters
sd-concepts-library/stable-diffusion-conceptualizer
sd-concepts-library/star-tours-posters
sd-concepts-library/stardew-valley-pixel-art
sd-concepts-library/starhavenmachinegods
sd-concepts-library/sterling-archer
sd-concepts-library/stretch-re1-robot
sd-concepts-library/stuffed-penguin-toy
sd-concepts-library/style-of-marc-allante
sd-concepts-library/summie-style
sd-concepts-library/sunfish
sd-concepts-library/super-nintendo-cartridge
sd-concepts-library/supitcha-mask
sd-concepts-library/sushi-pixel
sd-concepts-library/swamp-choe-2
sd-concepts-library/t-skrang
sd-concepts-library/takuji-kawano
sd-concepts-library/tamiyo
sd-concepts-library/tangles
sd-concepts-library/tb303
sd-concepts-library/tcirle
sd-concepts-library/teelip-ir-landscape
sd-concepts-library/teferi
sd-concepts-library/tela-lenca
sd-concepts-library/tela-lenca2
sd-concepts-library/terraria-style
sd-concepts-library/tesla-bot
sd-concepts-library/test
sd-concepts-library/test-epson
sd-concepts-library/test2
sd-concepts-library/testing
sd-concepts-library/thalasin
sd-concepts-library/thegeneral
sd-concepts-library/thorneworks
sd-concepts-library/threestooges
sd-concepts-library/thunderdome-cover
sd-concepts-library/thunderdome-covers
sd-concepts-library/ti-junglepunk-v0
sd-concepts-library/tili-concept
sd-concepts-library/titan-robot
sd-concepts-library/tnj
sd-concepts-library/toho-pixel
sd-concepts-library/tomcat
sd-concepts-library/tonal1
sd-concepts-library/tony-diterlizzi-s-planescape-art
sd-concepts-library/towerplace
sd-concepts-library/toy
sd-concepts-library/toy-bonnie-plush
sd-concepts-library/toyota-sera
sd-concepts-library/transmutation-circles
sd-concepts-library/trash-polka-artstyle
sd-concepts-library/travis-bedel
sd-concepts-library/trigger-studio
sd-concepts-library/trust-support
sd-concepts-library/trypophobia
sd-concepts-library/ttte
sd-concepts-library/tubby
sd-concepts-library/tubby-cats
sd-concepts-library/tudisco
sd-concepts-library/turtlepics
sd-concepts-library/type
sd-concepts-library/ugly-sonic
sd-concepts-library/uliana-kudinova
sd-concepts-library/uma
sd-concepts-library/uma-clean-object
sd-concepts-library/uma-meme
sd-concepts-library/uma-meme-style
sd-concepts-library/uma-style-classic
sd-concepts-library/unfinished-building
sd-concepts-library/urivoldemort
sd-concepts-library/uzumaki
sd-concepts-library/valorantstyle
sd-concepts-library/vb-mox
sd-concepts-library/vcr-classique
sd-concepts-library/venice
sd-concepts-library/vespertine
sd-concepts-library/victor-narm
sd-concepts-library/vietstoneking
sd-concepts-library/vivien-reid
sd-concepts-library/vkuoo1
sd-concepts-library/vraska
sd-concepts-library/w3u
sd-concepts-library/walter-wick-photography
sd-concepts-library/warhammer-40k-drawing-style
sd-concepts-library/waterfallshadow
sd-concepts-library/wayne-reynolds-character
sd-concepts-library/wedding
sd-concepts-library/wedding-HandPainted
sd-concepts-library/werebloops
sd-concepts-library/wheatland
sd-concepts-library/wheatland-arknight
sd-concepts-library/wheelchair
sd-concepts-library/wildkat
sd-concepts-library/willy-hd
sd-concepts-library/wire-angels
sd-concepts-library/wish-artist-stile
sd-concepts-library/wlop-style
sd-concepts-library/wojak
sd-concepts-library/wojaks-now
sd-concepts-library/wojaks-now-now-now
sd-concepts-library/xatu
sd-concepts-library/xatu2
sd-concepts-library/xbh
sd-concepts-library/xi
sd-concepts-library/xidiversity
sd-concepts-library/xioboma
sd-concepts-library/xuna
sd-concepts-library/xyz
sd-concepts-library/yb-anime
sd-concepts-library/yerba-mate
sd-concepts-library/yesdelete
sd-concepts-library/yf21
sd-concepts-library/yilanov2
sd-concepts-library/yinit
sd-concepts-library/yoji-shinkawa-style
sd-concepts-library/yolandi-visser
sd-concepts-library/yoshi
sd-concepts-library/youpi2
sd-concepts-library/youtooz-candy
sd-concepts-library/yuji-himukai-style
sd-concepts-library/zaney
sd-concepts-library/zaneypixelz
sd-concepts-library/zdenek-art
sd-concepts-library/zero
sd-concepts-library/zero-bottle
sd-concepts-library/zero-suit-samus
sd-concepts-library/zillertal-can
sd-concepts-library/zizigooloo
sd-concepts-library/zk
sd-concepts-library/zoroark

View File

@ -0,0 +1,110 @@
model:
base_learning_rate: 5.0e-03
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
cond_stage_key: caption
image_size: 64
channels: 4
cond_stage_trainable: true # Note: different from the one we trained before
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
embedding_reg_weight: 0.0
personalization_config:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ["sculpture"]
per_image_tokens: false
num_vectors_per_token: 1
progressive_words: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
data:
target: main.DataModuleFromConfig
params:
batch_size: 1
num_workers: 2
wrap: false
train:
target: ldm.data.personalized.PersonalizedBase
params:
size: 512
set: train
per_image_tokens: false
repeats: 100
validation:
target: ldm.data.personalized.PersonalizedBase
params:
size: 512
set: val
per_image_tokens: false
repeats: 10
lightning:
modelcheckpoint:
params:
every_n_train_steps: 500
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 500
max_images: 8
increase_log_steps: False
trainer:
benchmark: True
max_steps: 4000000
# max_steps: 4000

View File

@ -0,0 +1,103 @@
model:
base_learning_rate: 5.0e-03
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
cond_stage_key: caption
image_size: 64
channels: 4
cond_stage_trainable: true # Note: different from the one we trained before
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
embedding_reg_weight: 0.0
personalization_config:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ["painting"]
per_image_tokens: false
num_vectors_per_token: 1
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
data:
target: main.DataModuleFromConfig
params:
batch_size: 2
num_workers: 16
wrap: false
train:
target: ldm.data.personalized_style.PersonalizedBase
params:
size: 512
set: train
per_image_tokens: false
repeats: 100
validation:
target: ldm.data.personalized_style.PersonalizedBase
params:
size: 512
set: val
per_image_tokens: false
repeats: 10
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 500
max_images: 8
increase_log_steps: False
trainer:
benchmark: True

View File

@ -0,0 +1,79 @@
model:
base_learning_rate: 1.0e-04
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
image_size: 64
channels: 4
cond_stage_trainable: false # Note: different from the one we trained before
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
scheduler_config: # 10000 warmup steps
target: ldm.lr_scheduler.LambdaLinearScheduler
params:
warm_up_steps: [ 10000 ]
cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
f_start: [ 1.e-6 ]
f_max: [ 1. ]
f_min: [ 1. ]
personalization_config:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ['sculpture']
per_image_tokens: false
num_vectors_per_token: 1
progressive_words: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder

View File

@ -0,0 +1,79 @@
model:
base_learning_rate: 7.5e-05
target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
image_size: 64
channels: 4
cond_stage_trainable: false # Note: different from the one we trained before
conditioning_key: hybrid # important
monitor: val/loss_simple_ema
scale_factor: 0.18215
finetune_keys: null
scheduler_config: # 10000 warmup steps
target: ldm.lr_scheduler.LambdaLinearScheduler
params:
warm_up_steps: [ 2500 ] # NOTE for resuming. use 10000 if starting from scratch
cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
f_start: [ 1.e-6 ]
f_max: [ 1. ]
f_min: [ 1. ]
personalization_config:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ['sculpture']
per_image_tokens: false
num_vectors_per_token: 8
progressive_words: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32 # unused
in_channels: 9 # 4 data + 4 downscaled image + 1 mask
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder

View File

@ -0,0 +1,110 @@
model:
base_learning_rate: 5.0e-03
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
cond_stage_key: caption
image_size: 64
channels: 4
cond_stage_trainable: true # Note: different from the one we trained before
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
embedding_reg_weight: 0.0
personalization_config:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ['sculpture']
per_image_tokens: false
num_vectors_per_token: 6
progressive_words: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
data:
target: main.DataModuleFromConfig
params:
batch_size: 1
num_workers: 2
wrap: false
train:
target: ldm.data.personalized.PersonalizedBase
params:
size: 512
set: train
per_image_tokens: false
repeats: 100
validation:
target: ldm.data.personalized.PersonalizedBase
params:
size: 512
set: val
per_image_tokens: false
repeats: 10
lightning:
modelcheckpoint:
params:
every_n_train_steps: 500
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 500
max_images: 5
increase_log_steps: False
trainer:
benchmark: False
max_steps: 6200
# max_steps: 4000

4
coverage/.gitignore vendored
View File

@ -1,4 +0,0 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore

65
docker-build/Dockerfile Normal file
View File

@ -0,0 +1,65 @@
FROM python:3.10-slim AS builder
# use bash
SHELL [ "/bin/bash", "-c" ]
# Install necessary packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc=4:10.2.* \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
python3-dev=3.9.* \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set WORKDIR, PATH and copy sources
ARG APPDIR=/usr/src/app
WORKDIR ${APPDIR}
ENV PATH ${APPDIR}/.venv/bin:$PATH
ARG PIP_REQUIREMENTS=requirements-lin-cuda.txt
COPY . ./environments-and-requirements/${PIP_REQUIREMENTS} ./
# install requirements
RUN python3 -m venv .venv \
&& pip install \
--upgrade \
--no-cache-dir \
'wheel>=0.38.4' \
&& pip install \
--no-cache-dir \
-r ${PIP_REQUIREMENTS}
FROM python:3.10-slim AS runtime
# setup environment
ARG APPDIR=/usr/src/app
WORKDIR ${APPDIR}
COPY --from=builder ${APPDIR} .
ENV \
PATH=${APPDIR}/.venv/bin:$PATH \
INVOKEAI_ROOT=/data \
INVOKE_MODEL_RECONFIGURE=--yes
# Install necessary packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
build-essential=12.9 \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
libopencv-dev=4.5.* \
&& ln -sf \
/usr/lib/"$(arch)"-linux-gnu/pkgconfig/opencv4.pc \
/usr/lib/"$(arch)"-linux-gnu/pkgconfig/opencv.pc \
&& python3 -c "from patchmatch import patch_match" \
&& apt-get remove -y \
--autoremove \
build-essential \
&& apt-get autoclean \
&& rm -rf /var/lib/apt/lists/*
# set Entrypoint and default CMD
ENTRYPOINT [ "python3", "scripts/invoke.py" ]
CMD [ "--web", "--host=0.0.0.0" ]

View File

@ -0,0 +1,86 @@
#######################
#### Builder stage ####
FROM library/ubuntu:22.04 AS builder
ARG DEBIAN_FRONTEND=noninteractive
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt update && apt-get install -y \
git \
libglib2.0-0 \
libgl1-mesa-glx \
python3-venv \
python3-pip \
build-essential \
python3-opencv \
libopencv-dev
# This is needed for patchmatch support
RUN cd /usr/lib/x86_64-linux-gnu/pkgconfig/ &&\
ln -sf opencv4.pc opencv.pc
ARG WORKDIR=/invokeai
WORKDIR ${WORKDIR}
ENV VIRTUAL_ENV=${WORKDIR}/.venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN --mount=type=cache,target=/root/.cache/pip \
python3 -m venv ${VIRTUAL_ENV} &&\
pip install --extra-index-url https://download.pytorch.org/whl/cu116 \
torch==1.12.0+cu116 \
torchvision==0.13.0+cu116 &&\
pip install -e git+https://github.com/invoke-ai/PyPatchMatch@0.1.3#egg=pypatchmatch
COPY . .
RUN --mount=type=cache,target=/root/.cache/pip \
cp environments-and-requirements/requirements-lin-cuda.txt requirements.txt && \
pip install -r requirements.txt &&\
pip install -e .
#######################
#### Runtime stage ####
FROM library/ubuntu:22.04 as runtime
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt update && apt install -y --no-install-recommends \
git \
curl \
ncdu \
iotop \
bzip2 \
libglib2.0-0 \
libgl1-mesa-glx \
python3-venv \
python3-pip \
build-essential \
python3-opencv \
libopencv-dev &&\
apt-get clean && apt-get autoclean
ARG WORKDIR=/invokeai
WORKDIR ${WORKDIR}
ENV INVOKEAI_ROOT=/mnt/invokeai
ENV VIRTUAL_ENV=${WORKDIR}/.venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY --from=builder ${WORKDIR} ${WORKDIR}
COPY --from=builder /usr/lib/x86_64-linux-gnu/pkgconfig /usr/lib/x86_64-linux-gnu/pkgconfig
# build patchmatch
RUN python -c "from patchmatch import patch_match"
## workaround for non-existent initfile when runtime directory is mounted; see #1613
RUN touch /root/.invokeai
ENTRYPOINT ["bash"]
CMD ["-c", "python3 scripts/invoke.py --web --host 0.0.0.0"]

44
docker-build/Makefile Normal file
View File

@ -0,0 +1,44 @@
# Directory in the container where the INVOKEAI_ROOT (runtime dir) will be mounted
INVOKEAI_ROOT=/mnt/invokeai
# Host directory to contain the runtime dir. Will be mounted at INVOKEAI_ROOT path in the container
HOST_MOUNT_PATH=${HOME}/invokeai
IMAGE=local/invokeai:latest
USER=$(shell id -u)
GROUP=$(shell id -g)
# All downloaded models, config, etc will end up in ${HOST_MOUNT_PATH} on the host.
# This is consistent with the expected non-Docker behaviour.
# Contents can be moved to persistent storage and used to prime the cache on another host.
build:
DOCKER_BUILDKIT=1 docker build -t local/invokeai:latest -f Dockerfile.cloud ..
configure:
docker run --rm -it --runtime=nvidia --gpus=all \
-v ${HOST_MOUNT_PATH}:${INVOKEAI_ROOT} \
-e INVOKEAI_ROOT=${INVOKEAI_ROOT} \
${IMAGE} -c "python scripts/configure_invokeai.py"
# Run the container with the runtime dir mounted and the web server exposed on port 9090
web:
docker run --rm -it --runtime=nvidia --gpus=all \
-v ${HOST_MOUNT_PATH}:${INVOKEAI_ROOT} \
-e INVOKEAI_ROOT=${INVOKEAI_ROOT} \
-p 9090:9090 \
${IMAGE} -c "python scripts/invoke.py --web --host 0.0.0.0"
# Run the cli with the runtime dir mounted
cli:
docker run --rm -it --runtime=nvidia --gpus=all \
-v ${HOST_MOUNT_PATH}:${INVOKEAI_ROOT} \
-e INVOKEAI_ROOT=${INVOKEAI_ROOT} \
${IMAGE} -c "python scripts/invoke.py"
# Run the container with the runtime dir mounted and open a bash shell
shell:
docker run --rm -it --runtime=nvidia --gpus=all \
-v ${HOST_MOUNT_PATH}:${INVOKEAI_ROOT} ${IMAGE} --
.PHONY: build configure web cli shell
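A sketch of the intended workflow, given the assumptions in the comments above (not a verified recipe):

```bash
make build       # build local/invokeai:latest from Dockerfile.cloud
make configure   # download models and config into ${HOME}/invokeai
make web         # serve the web UI on http://localhost:9090
```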

35
docker-build/build.sh Executable file
View File

@ -0,0 +1,35 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#setup
source ./docker-build/env.sh \
    || { echo "please execute docker-build/build.sh from repository root"; exit 1; }
PIP_REQUIREMENTS=${PIP_REQUIREMENTS:-requirements-lin-cuda.txt}
DOCKERFILE=${INVOKE_DOCKERFILE:-docker-build/Dockerfile}
# print the settings
echo -e "You are using these values:\n"
echo -e "Dockerfile:\t ${DOCKERFILE}"
echo -e "Requirements:\t ${PIP_REQUIREMENTS}"
echo -e "Volumename:\t ${VOLUMENAME}"
echo -e "arch:\t\t ${ARCH}"
echo -e "Platform:\t ${PLATFORM}"
echo -e "Invokeai_tag:\t ${INVOKEAI_TAG}\n"
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
echo -e "Volume already exists\n"
else
echo -n "createing docker volume "
docker volume create "${VOLUMENAME}"
fi
# Build Container
docker build \
--platform="${PLATFORM}" \
--tag="${INVOKEAI_TAG}" \
--build-arg="PIP_REQUIREMENTS=${PIP_REQUIREMENTS}" \
--file="${DOCKERFILE}" \
.

10
docker-build/env.sh Normal file
View File

@ -0,0 +1,10 @@
#!/usr/bin/env bash
# Variables shared by build.sh and run.sh
REPOSITORY_NAME=${REPOSITORY_NAME:-$(basename "$(git rev-parse --show-toplevel)")}
VOLUMENAME=${VOLUMENAME:-${REPOSITORY_NAME,,}_data}
ARCH=${ARCH:-$(uname -m)}
PLATFORM=${PLATFORM:-Linux/${ARCH}}
CONTAINER_FLAVOR=${CONTAINER_FLAVOR:-cuda}
INVOKEAI_BRANCH=$(git branch --show)
INVOKEAI_TAG=${REPOSITORY_NAME,,}-${CONTAINER_FLAVOR}:${INVOKEAI_TAG:-${INVOKEAI_BRANCH/\//-}}
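As an illustration of the parameter expansions above (branch and repository names hypothetical), a checkout of InvokeAI on branch release/2.2.5 with the default cuda flavor yields:

```bash
# ${REPOSITORY_NAME,,}     -> invokeai       (lowercased repository name)
# ${INVOKEAI_BRANCH/\//-}  -> release-2.2.5  (first "/" replaced with "-")
# INVOKEAI_TAG             -> invokeai-cuda:release-2.2.5
```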

31
docker-build/run.sh Executable file
View File

@ -0,0 +1,31 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#run-the-container
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoints!!!
source ./docker-build/env.sh \
    || { echo "please run from repository root"; exit 1; }
# check if HUGGINGFACE_TOKEN is available
# You must have accepted the terms of use for required models
HUGGINGFACE_TOKEN=${HUGGINGFACE_TOKEN:?Please set your token for Huggingface as HUGGINGFACE_TOKEN}
echo -e "You are using these values:\n"
echo -e "Volumename:\t ${VOLUMENAME}"
echo -e "Invokeai_tag:\t ${INVOKEAI_TAG}\n"
docker run \
--interactive \
--tty \
--rm \
--platform="$PLATFORM" \
--name="${REPOSITORY_NAME,,}" \
--hostname="${REPOSITORY_NAME,,}" \
--mount="source=$VOLUMENAME,target=/data" \
--env="HUGGINGFACE_TOKEN=${HUGGINGFACE_TOKEN}" \
--publish=9090:9090 \
--cap-add=sys_nice \
${GPU_FLAGS:+--gpus=${GPU_FLAGS}} \
"$INVOKEAI_TAG" ${1:+$@}

View File

@ -1,107 +0,0 @@
# syntax=docker/dockerfile:1
ARG PYTHON_VERSION=3.9
##################
## base image ##
##################
FROM --platform=${TARGETPLATFORM} python:${PYTHON_VERSION}-slim AS python-base
LABEL org.opencontainers.image.authors="mauwii@outlook.de"
# Prepare apt for buildkit cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' >/etc/apt/apt.conf.d/keep-cache
# Install dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
libopencv-dev=4.5.*
# Set working directory and env
ARG APPDIR=/usr/src
ARG APPNAME=InvokeAI
WORKDIR ${APPDIR}
ENV PATH ${APPDIR}/${APPNAME}/bin:$PATH
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# Don't fall back to legacy build system
ENV PIP_USE_PEP517=1
#######################
## build pyproject ##
#######################
FROM python-base AS pyproject-builder
# Install build dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
build-essential=12.9 \
gcc=4:10.2.* \
python3-dev=3.9.*
# Prepare pip for buildkit cache
ARG PIP_CACHE_DIR=/var/cache/buildkit/pip
ENV PIP_CACHE_DIR ${PIP_CACHE_DIR}
RUN mkdir -p ${PIP_CACHE_DIR}
# Create virtual environment
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
python3 -m venv "${APPNAME}" \
--upgrade-deps
# Install requirements
COPY --link pyproject.toml .
COPY --link invokeai/version/invokeai_version.py invokeai/version/__init__.py invokeai/version/
ARG PIP_EXTRA_INDEX_URL
ENV PIP_EXTRA_INDEX_URL ${PIP_EXTRA_INDEX_URL}
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
"${APPNAME}"/bin/pip install .
# Install pyproject.toml
COPY --link . .
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
"${APPNAME}/bin/pip" install .
# Build patchmatch
RUN python3 -c "from patchmatch import patch_match"
#####################
## runtime image ##
#####################
FROM python-base AS runtime
# Create a new user
ARG UNAME=appuser
RUN useradd \
--no-log-init \
-m \
-U \
"${UNAME}"
# Create volume directory
ARG VOLUME_DIR=/data
RUN mkdir -p "${VOLUME_DIR}" \
&& chown -hR "${UNAME}:${UNAME}" "${VOLUME_DIR}"
# Setup runtime environment
USER ${UNAME}:${UNAME}
COPY --chown=${UNAME}:${UNAME} --from=pyproject-builder ${APPDIR}/${APPNAME} ${APPNAME}
ENV INVOKEAI_ROOT ${VOLUME_DIR}
ENV TRANSFORMERS_CACHE ${VOLUME_DIR}/.cache
ENV INVOKE_MODEL_RECONFIGURE "--yes --default_only"
EXPOSE 9090
ENTRYPOINT [ "invokeai" ]
CMD [ "--web", "--host", "0.0.0.0", "--port", "9090" ]
VOLUME [ "${VOLUME_DIR}" ]

View File

@ -1,51 +0,0 @@
#!/usr/bin/env bash
set -e
# If you want to build a specific flavor, set the CONTAINER_FLAVOR environment variable
# e.g. CONTAINER_FLAVOR=cpu ./build.sh
# Possible Values are:
# - cpu
# - cuda
# - rocm
# Don't forget to also set it when executing run.sh
# if it is not set, the script will try to detect the flavor by itself.
#
# Doc can be found here:
# https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
DOCKERFILE=${INVOKE_DOCKERFILE:-./Dockerfile}
# print the settings
echo -e "You are using these values:\n"
echo -e "Dockerfile:\t\t${DOCKERFILE}"
echo -e "index-url:\t\t${PIP_EXTRA_INDEX_URL:-none}"
echo -e "Volumename:\t\t${VOLUMENAME}"
echo -e "Platform:\t\t${PLATFORM}"
echo -e "Container Registry:\t${CONTAINER_REGISTRY}"
echo -e "Container Repository:\t${CONTAINER_REPOSITORY}"
echo -e "Container Tag:\t\t${CONTAINER_TAG}"
echo -e "Container Flavor:\t${CONTAINER_FLAVOR}"
echo -e "Container Image:\t${CONTAINER_IMAGE}\n"
# Create docker volume
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
echo -e "Volume already exists\n"
else
echo -n "creating docker volume "
docker volume create "${VOLUMENAME}"
fi
# Build Container
docker build \
--platform="${PLATFORM:-linux/amd64}" \
--tag="${CONTAINER_IMAGE:-invokeai}" \
${CONTAINER_FLAVOR:+--build-arg="CONTAINER_FLAVOR=${CONTAINER_FLAVOR}"} \
${PIP_EXTRA_INDEX_URL:+--build-arg="PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}"} \
${PIP_PACKAGE:+--build-arg="PIP_PACKAGE=${PIP_PACKAGE}"} \
--file="${DOCKERFILE}" \
..

View File

@ -1,54 +0,0 @@
#!/usr/bin/env bash
# This file is used to set environment variables for the build.sh and run.sh scripts.
# Try to detect the container flavor if no PIP_EXTRA_INDEX_URL was specified
if [[ -z "$PIP_EXTRA_INDEX_URL" ]]; then
# Activate virtual environment if not already activated and exists
if [[ -z $VIRTUAL_ENV ]]; then
[[ -e "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" ]] \
&& source "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" \
&& echo "Activated virtual environment: $VIRTUAL_ENV"
fi
# Decide which container flavor to build if not specified
if [[ -z "$CONTAINER_FLAVOR" ]] && python -c "import torch" &>/dev/null; then
# Check for CUDA and ROCm
CUDA_AVAILABLE=$(python -c "import torch;print(torch.cuda.is_available())")
ROCM_AVAILABLE=$(python -c "import torch;print(torch.version.hip is not None)")
if [[ "${CUDA_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="cuda"
elif [[ "${ROCM_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="rocm"
else
CONTAINER_FLAVOR="cpu"
fi
fi
# Set PIP_EXTRA_INDEX_URL based on container flavor
if [[ "$CONTAINER_FLAVOR" == "rocm" ]]; then
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/rocm"
elif [[ "$CONTAINER_FLAVOR" == "cpu" ]]; then
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
# elif [[ -z "$CONTAINER_FLAVOR" || "$CONTAINER_FLAVOR" == "cuda" ]]; then
# PIP_PACKAGE=${PIP_PACKAGE-".[xformers]"}
fi
fi
# Variables shared by build.sh and run.sh
REPOSITORY_NAME="${REPOSITORY_NAME-$(basename "$(git rev-parse --show-toplevel)")}"
REPOSITORY_NAME="${REPOSITORY_NAME,,}"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME}_data"}"
ARCH="${ARCH-$(uname -m)}"
PLATFORM="${PLATFORM-linux/${ARCH}}"
INVOKEAI_BRANCH="${INVOKEAI_BRANCH-$(git branch --show)}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY-"ghcr.io"}"
CONTAINER_REPOSITORY="${CONTAINER_REPOSITORY-"$(whoami)/${REPOSITORY_NAME}"}"
CONTAINER_FLAVOR="${CONTAINER_FLAVOR-cuda}"
CONTAINER_TAG="${CONTAINER_TAG-"${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}"}"
CONTAINER_IMAGE="${CONTAINER_REGISTRY}/${CONTAINER_REPOSITORY}:${CONTAINER_TAG}"
CONTAINER_IMAGE="${CONTAINER_IMAGE,,}"
# enable docker buildkit
export DOCKER_BUILDKIT=1

View File

@ -1,41 +0,0 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
# Create outputs directory if it does not exist
[[ -d ./outputs ]] || mkdir ./outputs
echo -e "You are using these values:\n"
echo -e "Volumename:\t${VOLUMENAME}"
echo -e "Invokeai_tag:\t${CONTAINER_IMAGE}"
echo -e "local Models:\t${MODELSPATH:-unset}\n"
docker run \
    --interactive \
    --tty \
    --rm \
    --platform="${PLATFORM}" \
    --name="${REPOSITORY_NAME}" \
    --hostname="${REPOSITORY_NAME}" \
    --mount type=volume,volume-driver=local,source="${VOLUMENAME}",target=/data \
    --mount type=bind,source="$(pwd)"/outputs/,target=/data/outputs/ \
    ${MODELSPATH:+--mount="type=bind,source=${MODELSPATH},target=/data/models"} \
    ${HUGGING_FACE_HUB_TOKEN:+--env="HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}"} \
    --publish=9090:9090 \
    --cap-add=sys_nice \
    ${GPU_FLAGS:+--gpus="${GPU_FLAGS}"} \
    "${CONTAINER_IMAGE}" ${@:+$@}
echo -e "\nCleaning trash folder ..."
# If the glob matches nothing it stays unexpanded, so test before removing
for f in outputs/.Trash*; do
    if [ -e "$f" ]; then
        rm -Rf "$f"
        break
    fi
done
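A typical invocation on a machine with an NVIDIA GPU might look like the sketch below; the models path and token are placeholders, and `GPU_FLAGS=all` is simply passed through to `docker run --gpus`:

```
# Mount a local models folder and expose all GPUs to the container.
GPU_FLAGS=all \
MODELSPATH="$HOME/invokeai/models" \
HUGGING_FACE_HUB_TOKEN="<your token>" \
./run.sh
```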

View File

@ -4,108 +4,6 @@ title: Changelog
# :octicons-log-16: **Changelog**
## v2.3.0 <small>(15 January 2023)</small>
**Transition to diffusers**
Version 2.3 provides support for both the traditional `.ckpt` weight
checkpoint files as well as the HuggingFace `diffusers` format. This
introduces several changes you should know about.
1. The models.yaml format has been updated. There are now two
different types of configuration stanzas. The traditional ckpt
one will look like this, with a `format` of `ckpt` and a
`weights` field that points to the absolute or ROOTDIR-relative
location of the ckpt file.
```
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
format: ckpt
width: 512
height: 512
weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
config: configs/stable-diffusion/v1-inpainting-inference.yaml
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
```
A configuration stanza for a diffusers model hosted at HuggingFace will look like this,
with a `format` of `diffusers` and a `repo_id` that points to the
repository ID of the model on HuggingFace:
```
stable-diffusion-2.1:
description: Stable Diffusion version 2.1 diffusers model (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-1
format: diffusers
```
A configuration stanza for a diffusers model stored locally should
look like this, with a `format` of `diffusers`, but a `path` field
that points at the directory that contains `model_index.json`:
```
waifu-diffusion:
description: Latest waifu diffusion 1.4
format: diffusers
path: models/diffusers/hakurei-haifu-diffusion-1.4
```
2. In order of precedence, InvokeAI will now use HF_HOME, then
XDG_CACHE_HOME, then finally default to `ROOTDIR/models` to
store HuggingFace diffusers models.
Consequently, the format of the models directory has changed to
mimic the HuggingFace cache directory. When HF_HOME and XDG_CACHE_HOME
are not set, diffusers models are now automatically downloaded
and retrieved from the directory `ROOTDIR/models/diffusers`,
while other models are stored in the directory
`ROOTDIR/models/hub`. This organization is the same as that used
by HuggingFace for its cache management.
This allows you to share diffusers and ckpt model files easily with
other machine learning applications that use the HuggingFace
libraries. To do this, set the environment variable HF_HOME
before starting up InvokeAI to tell it what directory to
cache models in. To tell InvokeAI to use the standard HuggingFace
cache directory, you would set HF_HOME like this (Linux/Mac):
`export HF_HOME=~/.cache/huggingface`
Both HuggingFace and InvokeAI will fall back to the XDG_CACHE_HOME
environment variable if HF_HOME is not set; this path
takes precedence over `ROOTDIR/models` to allow for the same sharing
with other machine learning applications that use HuggingFace
libraries.
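Expressed as a shell sketch (illustrative only, not code taken from InvokeAI), the lookup order amounts to:
```
# 1. HF_HOME wins if it is set;
# 2. otherwise XDG_CACHE_HOME/huggingface is used;
# 3. otherwise fall back to ROOTDIR/models.
cache_dir="${HF_HOME:-${XDG_CACHE_HOME:+$XDG_CACHE_HOME/huggingface}}"
cache_dir="${cache_dir:-$ROOTDIR/models}"
echo "diffusers models will be cached in: $cache_dir"
```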
3. If you upgrade to InvokeAI 2.3.* from an earlier version, there
will be a one-time migration from the old models directory format
to the new one. You will see a message about this the first time
you start `invoke.py`.
4. Both the front and back ends of the model manager have been
rewritten to accommodate diffusers. You can import models using
their local file paths, their URLs, or their HuggingFace
repo_ids. On the command line, all of these syntaxes work:
```
!import_model stabilityai/stable-diffusion-2-1-base
!import_model /opt/sd-models/sd-1.4.ckpt
!import_model https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/blob/main/PaperCut_v1.ckpt
```
**KNOWN BUGS (15 January 2023)**
1. On CUDA systems, the 768 pixel stable-diffusion-2.0 and
stable-diffusion-2.1 models can only be run as `diffusers` models
when the `xformers` library is installed and configured. Without
`xformers`, InvokeAI returns black images.
2. Inpainting and outpainting have regressed in quality.
Both these issues are being actively worked on.
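For the first bug, a possible mitigation (assuming a CUDA system where prebuilt wheels exist for your Python and CUDA versions) is to install `xformers` into the same environment before launching:
```
pip install xformers
```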
## v2.2.4 <small>(11 December 2022)</small>
**the `invokeai` directory**
@ -196,7 +94,7 @@ the desired release's zip file, which you can find by clicking on the green
This point release removes references to the binary installer from the
installation guide. The binary installer is not stable at the current
time. First time users are encouraged to use the "source" installer as
described in [Installing InvokeAI with the Source Installer](installation/deprecated_documentation/INSTALL_SOURCE.md)
described in [Installing InvokeAI with the Source Installer](installation/INSTALL_SOURCE.md)
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
@ -261,7 +159,7 @@ sections describe what's new for InvokeAI.
[Installation](installation/index.md).
- A streamlined manual installation process that works for both Conda and
PIP-only installs. See
[Manual Installation](installation/020_INSTALL_MANUAL.md).
[Manual Installation](installation/INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps,
sampler, etc.) in a `.invokeai` file. See
[Client](features/CLI.md)

Binary file not shown (deleted image, 142 KiB)

Binary file not shown (deleted image, 470 KiB)

Binary file not shown (deleted image, 457 KiB)

Binary file not shown (deleted image, 26 KiB)

Some files were not shown because too many files have changed in this diff.