Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Commit 7b5d4935b4: Merge branch 'main' into ryan/spandrel-upscale
@@ -9,9 +9,9 @@ runs:
         node-version: '18'

     - name: setup pnpm
-      uses: pnpm/action-setup@v2
+      uses: pnpm/action-setup@v4
       with:
-        version: 8
+        version: 8.15.6
         run_install: false

     - name: get pnpm store directory
.github/pull_request_template.md (2 changes)
@@ -8,7 +8,7 @@

 ## QA Instructions

-<!--WHEN APPLICABLE: Describe how we can test the changes in this PR.-->
+<!--WHEN APPLICABLE: Describe how you have tested the changes in this PR. Provide enough detail that a reviewer can reproduce your tests.-->

 ## Merge Plan

README.md (29 changes)
@@ -49,6 +49,33 @@ Invoke is available in two editions:

 More detail, including hardware requirements and manual install instructions, are available in the [installation documentation][installation docs].

+## Docker Container
+
+We publish official container images in Github Container Registry: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai. Both CUDA and ROCm images are available. Check the above link for relevant tags.
+
+> [!IMPORTANT]
+> Ensure that Docker is set up to use the GPU. Refer to [NVIDIA][nvidia docker docs] or [AMD][amd docker docs] documentation.
+
+### Generate!
+
+Run the container, modifying the command as necessary:
+
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+Then open `http://localhost:9090` and install some models using the Model Manager tab to begin generating.
+
+For ROCm, add `--device /dev/kfd --device /dev/dri` to the `docker run` command.
+
+### Persist your data
+
+You will likely want to persist your workspace outside of the container. Use the `--volume /home/myuser/invokeai:/invokeai` flag to mount some local directory (using its **absolute** path) to the `/invokeai` path inside the container. Your generated images and models will reside there. You can use this directory with other InvokeAI installations, or switch between runtime directories as needed.
+
+### DIY
+
+Build your own image and customize the environment to match your needs using our `docker-compose` stack. See [README.md](./docker/README.md) in the [docker](./docker) directory.
+
 ## Troubleshooting, FAQ and Support

 Please review our [FAQ][faq] for solutions to common installation problems and other issues.
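The `--volume` flag in the "Persist your data" hunk above requires an **absolute** host path. As a hedged sketch (the `/tmp/invokeai-demo` directory is hypothetical, and the command is only composed and printed, never executed):

```shell
# Hypothetical host directory; a real one would be e.g. /home/myuser/invokeai.
mkdir -p /tmp/invokeai-demo
host_dir=$(cd /tmp/invokeai-demo && pwd)  # resolve to an absolute path

# Compose (but do not run) the docker run command from the flags shown above.
cmd="docker run --runtime=nvidia --gpus=all --publish 9090:9090 --volume ${host_dir}:/invokeai ghcr.io/invoke-ai/invokeai"
echo "$cmd"
```

For ROCm, the same composition applies with `--device /dev/kfd --device /dev/dri` in place of the `--runtime`/`--gpus` flags.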
@@ -126,3 +153,5 @@ Original portions of the software are Copyright © 2024 by respective contributors

 [latest release link]: https://github.com/invoke-ai/InvokeAI/releases/latest
 [translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
 [translation status link]: https://hosted.weblate.org/engage/invokeai/
+[nvidia docker docs]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
+[amd docker docs]: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html
@@ -19,8 +19,9 @@

 ## INVOKEAI_PORT is the port on which the InvokeAI web interface will be available
 # INVOKEAI_PORT=9090

-## GPU_DRIVER can be set to either `nvidia` or `rocm` to enable GPU support in the container accordingly.
-# GPU_DRIVER=nvidia #| rocm
+## GPU_DRIVER can be set to either `cuda` or `rocm` to enable GPU support in the container accordingly.
+# GPU_DRIVER=cuda #| rocm

 ## CONTAINER_UID can be set to the UID of the user on the host system that should own the files in the container.
+## It is usually not necessary to change this. Use `id -u` on the host system to find the UID.
 # CONTAINER_UID=1000
@@ -1,41 +1,75 @@
-# InvokeAI Containerized
+# Invoke in Docker

-All commands should be run within the `docker` directory: `cd docker`
+- Ensure that Docker can use the GPU on your system
+- This documentation assumes Linux, but should work similarly under Windows with WSL2
+- We don't recommend running Invoke in Docker on macOS at this time. It works, but very slowly.

-## Quickstart :rocket:
+## Quickstart :lightning:

-On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
+No `docker compose`, no persistence, just a simple one-liner using the official images:

-For more configuration options (using an AMD GPU, custom root directory location, etc): read on.
+**CUDA:**

-## Detailed setup
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+**ROCm:**
+
+```bash
+docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
+```
+
+Open `http://localhost:9090` in your browser once the container finishes booting, install some models, and generate away!
+
+> [!TIP]
+> To persist your data (including downloaded models) outside of the container, add a `--volume/-v` flag to the above command, e.g.: `docker run --volume /some/local/path:/invokeai <...the rest of the command>`
+
+## Customize the container
+
+We ship the `run.sh` script, which is a convenient wrapper around `docker compose` for cases where custom image build args are needed. Alternatively, the familiar `docker compose` commands work just as well.
+
+```bash
+cd docker
+cp .env.sample .env
+# edit .env to your liking if you need to; it is well commented.
+./run.sh
+```
+
+It will take a few minutes to build the image the first time. Once the application starts up, open `http://localhost:9090` in your browser to invoke!
+
+## Docker setup in detail

 #### Linux

 1. Ensure BuildKit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
 2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
-    - The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
+    - The deprecated `docker-compose` (hyphenated) CLI probably won't work. Update to a recent version.
 3. Ensure docker daemon is able to access the GPU.
-    - You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+    - [NVIDIA docs](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+    - [AMD docs](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html)

 #### macOS

+> [!TIP]
+> You'll be better off installing Invoke directly on your system, because Docker can not use the GPU on macOS.
+
+If you are still reading:
+
 1. Ensure Docker has at least 16GB RAM
 2. Enable VirtioFS for file sharing
 3. Enable `docker compose` V2 support

-This is done via Docker Desktop preferences
+This is done via Docker Desktop preferences.

-### Configure Invoke environment
+### Configure the Invoke Environment

-1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
-    a. the desired location of the InvokeAI runtime directory, or
-    b. an existing, v3.0.0 compatible runtime directory.
+1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to the desired location of the InvokeAI runtime directory. It may be an existing directory from a previous installation (post 4.0.0).
 1. Execute `run.sh`

 The image will be built automatically if needed.

-The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
+The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. Navigate to the Model Manager tab and install some models before generating.

 ### Use a GPU

@@ -43,9 +77,9 @@ The runtime directory (holding models and outputs) will be created in the location
 - WSL2 is *required* for Windows.
 - only `x86_64` architecture is supported.

-The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
+The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker/NVIDIA/AMD documentation for the most up-to-date instructions for using your GPU with Docker.

-To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
+To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file before running `./run.sh`.

 ## Customize

@@ -59,10 +93,10 @@ Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The default
 INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
 HUGGINGFACE_TOKEN=the_actual_token
 CONTAINER_UID=1000
-GPU_DRIVER=nvidia
+GPU_DRIVER=cuda
 ```

-Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
+Any environment variables supported by InvokeAI can be set here. See the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.

 ## Even More Customizing!

@@ -1,7 +1,5 @@
 # Copyright (c) 2023 Eugene Brodsky https://github.com/ebr

-version: '3.8'
-
 x-invokeai: &invokeai
   image: "local/invokeai:latest"
   build:
@@ -32,7 +30,7 @@ x-invokeai: &invokeai


 services:
-  invokeai-nvidia:
+  invokeai-cuda:
     <<: *invokeai
     deploy:
       resources:
@@ -23,18 +23,18 @@ usermod -u ${USER_ID} ${USER} 1>/dev/null
 # but it is useful to have the full SSH server e.g. on Runpod.
 # (use SCP to copy files to/from the image, etc)
 if [[ -v "PUBLIC_KEY" ]] && [[ ! -d "${HOME}/.ssh" ]]; then
     apt-get update
     apt-get install -y openssh-server
     pushd "$HOME"
     mkdir -p .ssh
-    echo "${PUBLIC_KEY}" > .ssh/authorized_keys
+    echo "${PUBLIC_KEY}" >.ssh/authorized_keys
     chmod -R 700 .ssh
     popd
     service ssh start
 fi

 mkdir -p "${INVOKEAI_ROOT}"
-chown --recursive ${USER} "${INVOKEAI_ROOT}"
+chown --recursive ${USER} "${INVOKEAI_ROOT}" || true
 cd "${INVOKEAI_ROOT}"

 # Run the CMD as the Container User (not root).
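The entrypoint above guards its SSH setup with `[[ -v "PUBLIC_KEY" ]]`. A small sketch of what `-v` actually tests (whether the variable is *set*, not whether it is non-empty), using a throwaway variable name:

```shell
# [[ -v NAME ]] is true when the variable is set at all, even to an empty string.
unset DEMO_KEY
[[ -v DEMO_KEY ]] && before="set" || before="unset"

DEMO_KEY=""
[[ -v DEMO_KEY ]] && after="set" || after="unset"

echo "$before -> $after"   # unset -> set
```

This is why the entrypoint installs the SSH server whenever `PUBLIC_KEY` is exported into the container, regardless of its value. Note that `-v` requires bash; it is not POSIX `sh`.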
@@ -8,11 +8,15 @@ run() {
     local build_args=""
     local profile=""

+    # create .env file if it doesn't exist, otherwise docker compose will fail
     touch .env

+    # parse .env file for build args
     build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
         profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"

-    [[ -z "$profile" ]] && profile="nvidia"
+    # default to 'cuda' profile
+    [[ -z "$profile" ]] && profile="cuda"

     local service_name="invokeai-$profile"

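The awk one-liners in the `run.sh` hunk above can be exercised against a scratch `.env` file. The file path and values below are hypothetical stand-ins, chosen only to show how assignments become `--build-arg` flags and how `GPU_DRIVER` selects the compose service:

```shell
# A scratch .env with one comment and two assignments.
scratch=/tmp/invoke-env-demo
printf '# a comment\nINVOKEAI_ROOT=/home/myuser/invokeai\nGPU_DRIVER=rocm\n' > "$scratch"

# Same expressions as run.sh: skip comments, turn KEY=VALUE lines into
# --build-arg flags, and pull out the GPU_DRIVER value as the profile.
build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' "$scratch")
profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' "$scratch")"
[[ -z "$profile" ]] && profile="cuda"

echo "service: invokeai-$profile"   # service: invokeai-rocm
```

With `GPU_DRIVER` unset or empty, the fallback on the `[[ -z ... ]]` line picks the `cuda` profile, matching the diff's new default.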
|
@ -4,50 +4,37 @@ title: Installing with Docker
|
|||||||
|
|
||||||
# :fontawesome-brands-docker: Docker
|
# :fontawesome-brands-docker: Docker
|
||||||
|
|
||||||
!!! warning "macOS and AMD GPU Users"
|
!!! warning "macOS users"
|
||||||
|
|
||||||
We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md),
|
Docker can not access the GPU on macOS, so your generation speeds will be slow. [Install InvokeAI](INSTALLATION.md) instead.
|
||||||
because Docker containers can not access the GPU on macOS.
|
|
||||||
|
|
||||||
!!! warning "AMD GPU Users"
|
|
||||||
|
|
||||||
Container support for AMD GPUs has been reported to work by the community, but has not received
|
|
||||||
extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
|
|
||||||
use the `build.sh` script to build the image for this to take effect at build time.
|
|
||||||
|
|
||||||
!!! tip "Linux and Windows Users"
|
!!! tip "Linux and Windows Users"
|
||||||
|
|
||||||
For optimal performance, configure your Docker daemon to access your machine's GPU.
|
Configure Docker to access your machine's GPU.
|
||||||
Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
|
Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
|
||||||
Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
|
Linux users should follow the [NVIDIA](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) or [AMD](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html) documentation.
|
||||||
|
|
||||||
## Why containers?
|
|
||||||
|
|
||||||
They provide a flexible, reliable way to build and deploy InvokeAI.
|
|
||||||
See [Processes](https://12factor.net/processes) under the Twelve-Factor App
|
|
||||||
methodology for details on why running applications in such a stateless fashion is important.
|
|
||||||
|
|
||||||
The container is configured for CUDA by default, but can be built to support AMD GPUs
|
|
||||||
by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
|
|
||||||
|
|
||||||
Developers on Apple silicon (M1/M2/M3): You
|
|
||||||
[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
|
|
||||||
and performance is reduced compared with running it directly on macOS but for
|
|
||||||
development purposes it's fine. Once you're done with development tasks on your
|
|
||||||
laptop you can build for the target platform and architecture and deploy to
|
|
||||||
another environment with NVIDIA GPUs on-premises or in the cloud.
|
|
||||||
|
|
||||||
## TL;DR
|
## TL;DR
|
||||||
|
|
||||||
This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
|
Ensure your Docker setup is able to use your GPU. Then:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
|
||||||
|
```
|
||||||
|
|
||||||
|
Once the container starts up, open http://localhost:9090 in your browser, install some models, and start generating.
|
||||||
|
|
||||||
|
## Build-It-Yourself
|
||||||
|
|
||||||
|
All the docker materials are located inside the [docker](https://github.com/invoke-ai/InvokeAI/tree/main/docker) directory in the Git repo.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# docker compose commands should be run from the `docker` directory
|
|
||||||
cd docker
|
cd docker
|
||||||
|
cp .env.sample .env
|
||||||
docker compose up
|
docker compose up
|
||||||
```
|
```
|
||||||
|
|
||||||
## Installation in a Linux container (desktop)
|
We also ship the `run.sh` convenience script. See the `docker/README.md` file for detailed instructions on how to customize the docker setup to your needs.
|
||||||
|
|
||||||
### Prerequisites
|
### Prerequisites
|
||||||
|
|
||||||
@@ -58,18 +45,9 @@ Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
 [Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
 increase Swap and Disk image size too.

-#### Get a Huggingface-Token
-
-Besides the Docker Agent you will need an Account on
-[huggingface.co](https://huggingface.co/join).
-
-After you succesfully registered your account, go to
-[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens), create
-a token and copy it, since you will need in for the next step.
-
 ### Setup

-Set up your environmnent variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
+Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.

 Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.

|
|||||||
## Troubleshooting / FAQ
|
## Troubleshooting / FAQ
|
||||||
|
|
||||||
- Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
|
- Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
|
||||||
- A: Your `docker-entrypoint.sh` file likely has Windows (CRLF) as opposed to Unix (LF) line endings,
|
- A: Your `docker-entrypoint.sh` might have has Windows (CRLF) line endings, depending how you cloned the repository.
|
||||||
and you may have cloned this repository before the issue was fixed. To solve this, please change
|
To solve this, change the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
|
||||||
the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
|
|
||||||
(`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
|
(`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
|
||||||
Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
|
Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
|
||||||
to reset the file to its most recent version.
|
to reset the file to its most recent version.
|
||||||
For more information on this issue, please see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
|
For more information on this issue, see [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
|
||||||
|
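The CRLF fix described in the FAQ answer above can be demonstrated on a scratch file (not the real `docker-entrypoint.sh`) using `tr`, which strips the carriage returns the same way `dos2unix` would:

```shell
# Create a scratch script with Windows (CRLF) line endings.
printf '#!/usr/bin/env bash\r\necho ok\r\n' > /tmp/crlf-demo.sh

# Strip the carriage returns; equivalent to dos2unix for this case.
tr -d '\r' < /tmp/crlf-demo.sh > /tmp/lf-demo.sh

if grep -q $'\r' /tmp/lf-demo.sh; then echo "CR remains"; else echo "clean"; fi
```

The CRLF version fails under `bash` with exactly the kind of "no such file or directory" error the FAQ describes, because the shebang line ends with a stray `\r`.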
@@ -13,7 +13,7 @@ echo 2. Open the developer console
 echo 3. Command-line help
 echo Q - Quit
 echo.
-echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.
+echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest
 echo.
 set /P choice="Please enter 1-4, Q: [1] "
 if not defined choice set choice=1
@@ -4,37 +4,39 @@ from logging import Logger

 import torch

+from invokeai.app.services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
+from invokeai.app.services.board_images.board_images_default import BoardImagesService
+from invokeai.app.services.board_records.board_records_sqlite import SqliteBoardRecordStorage
+from invokeai.app.services.boards.boards_default import BoardService
+from invokeai.app.services.bulk_download.bulk_download_default import BulkDownloadService
+from invokeai.app.services.config.config_default import InvokeAIAppConfig
+from invokeai.app.services.download.download_default import DownloadQueueService
+from invokeai.app.services.events.events_fastapievents import FastAPIEventService
+from invokeai.app.services.image_files.image_files_disk import DiskImageFileStorage
+from invokeai.app.services.image_records.image_records_sqlite import SqliteImageRecordStorage
+from invokeai.app.services.images.images_default import ImageService
+from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
+from invokeai.app.services.invocation_services import InvocationServices
+from invokeai.app.services.invocation_stats.invocation_stats_default import InvocationStatsService
+from invokeai.app.services.invoker import Invoker
+from invokeai.app.services.model_images.model_images_default import ModelImageFileStorageDisk
+from invokeai.app.services.model_manager.model_manager_default import ModelManagerService
+from invokeai.app.services.model_records.model_records_sql import ModelRecordServiceSQL
+from invokeai.app.services.names.names_default import SimpleNameService
 from invokeai.app.services.object_serializer.object_serializer_disk import ObjectSerializerDisk
 from invokeai.app.services.object_serializer.object_serializer_forward_cache import ObjectSerializerForwardCache
+from invokeai.app.services.session_processor.session_processor_default import (
+    DefaultSessionProcessor,
+    DefaultSessionRunner,
+)
+from invokeai.app.services.session_queue.session_queue_sqlite import SqliteSessionQueue
 from invokeai.app.services.shared.sqlite.sqlite_util import init_db
+from invokeai.app.services.urls.urls_default import LocalUrlService
+from invokeai.app.services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
 from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
 from invokeai.backend.util.logging import InvokeAILogger
 from invokeai.version.invokeai_version import __version__

-from ..services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
-from ..services.board_images.board_images_default import BoardImagesService
-from ..services.board_records.board_records_sqlite import SqliteBoardRecordStorage
-from ..services.boards.boards_default import BoardService
-from ..services.bulk_download.bulk_download_default import BulkDownloadService
-from ..services.config import InvokeAIAppConfig
-from ..services.download import DownloadQueueService
-from ..services.events.events_fastapievents import FastAPIEventService
-from ..services.image_files.image_files_disk import DiskImageFileStorage
-from ..services.image_records.image_records_sqlite import SqliteImageRecordStorage
-from ..services.images.images_default import ImageService
-from ..services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
-from ..services.invocation_services import InvocationServices
-from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
-from ..services.invoker import Invoker
-from ..services.model_images.model_images_default import ModelImageFileStorageDisk
-from ..services.model_manager.model_manager_default import ModelManagerService
-from ..services.model_records import ModelRecordServiceSQL
-from ..services.names.names_default import SimpleNameService
-from ..services.session_processor.session_processor_default import DefaultSessionProcessor, DefaultSessionRunner
-from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
-from ..services.urls.urls_default import LocalUrlService
-from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
-

 # TODO: is there a better way to achieve this?
 def check_internet() -> bool:
@@ -10,14 +10,13 @@ from fastapi import Body
 from fastapi.routing import APIRouter
 from pydantic import BaseModel, Field

+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.invocations.upscale import ESRGAN_MODELS
 from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
 from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
 from invokeai.backend.util.logging import logging
 from invokeai.version import __version__

-from ..dependencies import ApiDependencies
-

 class LogLevel(int, Enum):
     NotSet = logging.NOTSET
@@ -2,7 +2,7 @@ from fastapi import Body, HTTPException
 from fastapi.routing import APIRouter
 from pydantic import BaseModel, Field

-from ..dependencies import ApiDependencies
+from invokeai.app.api.dependencies import ApiDependencies

 board_images_router = APIRouter(prefix="/v1/board_images", tags=["boards"])

@@ -4,12 +4,11 @@ from fastapi import Body, HTTPException, Path, Query
 from fastapi.routing import APIRouter
 from pydantic import BaseModel, Field

+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.services.board_records.board_records_common import BoardChanges
 from invokeai.app.services.boards.boards_common import BoardDTO
 from invokeai.app.services.shared.pagination import OffsetPaginatedResults

-from ..dependencies import ApiDependencies
-

 boards_router = APIRouter(prefix="/v1/boards", tags=["boards"])

@ -32,6 +31,7 @@ class DeleteBoardResult(BaseModel):
|
|||||||
)
|
)
|
||||||
async def create_board(
|
async def create_board(
|
||||||
board_name: str = Query(description="The name of the board to create"),
|
board_name: str = Query(description="The name of the board to create"),
|
||||||
|
is_private: bool = Query(default=False, description="Whether the board is private"),
|
||||||
) -> BoardDTO:
|
) -> BoardDTO:
|
||||||
"""Creates a board"""
|
"""Creates a board"""
|
||||||
try:
|
try:
|
||||||
|
@@ -8,13 +8,12 @@ from fastapi.routing import APIRouter
 from pydantic.networks import AnyHttpUrl
 from starlette.exceptions import HTTPException
 
+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.services.download import (
     DownloadJob,
     UnknownJobIDException,
 )
 
-from ..dependencies import ApiDependencies
-
 download_queue_router = APIRouter(prefix="/v1/download_queue", tags=["download_queue"])
 
 
@@ -8,6 +8,7 @@ from fastapi.routing import APIRouter
 from PIL import Image
 from pydantic import BaseModel, Field, JsonValue
 
+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.invocations.fields import MetadataField
 from invokeai.app.services.image_records.image_records_common import (
     ImageCategory,
@@ -18,8 +19,6 @@ from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO
 from invokeai.app.services.shared.pagination import OffsetPaginatedResults
 from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
 
-from ..dependencies import ApiDependencies
-
 images_router = APIRouter(prefix="/v1/images", tags=["images"])
 
 
@@ -16,6 +16,7 @@ from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
 from starlette.exceptions import HTTPException
 from typing_extensions import Annotated
 
+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.services.model_images.model_images_common import ModelImageFileNotFoundException
 from invokeai.app.services.model_install.model_install_common import ModelInstallJob
 from invokeai.app.services.model_records import (
@@ -35,8 +36,6 @@ from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataW
 from invokeai.backend.model_manager.search import ModelSearch
 from invokeai.backend.model_manager.starter_models import STARTER_MODELS, StarterModel, StarterModelWithoutDependencies
 
-from ..dependencies import ApiDependencies
-
 model_manager_router = APIRouter(prefix="/v2/models", tags=["model_manager"])
 
 # images are immutable; set a high max-age
@@ -4,6 +4,7 @@ from fastapi import Body, Path, Query
 from fastapi.routing import APIRouter
 from pydantic import BaseModel
 
+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.services.session_processor.session_processor_common import SessionProcessorStatus
 from invokeai.app.services.session_queue.session_queue_common import (
     QUEUE_ITEM_STATUS,
@@ -19,8 +20,6 @@ from invokeai.app.services.session_queue.session_queue_common import (
 )
 from invokeai.app.services.shared.pagination import CursorPaginatedResults
 
-from ..dependencies import ApiDependencies
-
 session_queue_router = APIRouter(prefix="/v1/queue", tags=["queue"])
 
 
@@ -20,14 +20,9 @@ from torch.backends.mps import is_available as is_mps_available
 # noinspection PyUnresolvedReferences
 import invokeai.backend.util.hotfixes  # noqa: F401 (monkeypatching on import)
 import invokeai.frontend.web as web_dir
+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
-from invokeai.app.services.config.config_default import get_config
-from invokeai.app.util.custom_openapi import get_openapi_func
-from invokeai.backend.util.devices import TorchDevice
-
-from ..backend.util.logging import InvokeAILogger
-from .api.dependencies import ApiDependencies
-from .api.routers import (
+from invokeai.app.api.routers import (
     app_info,
     board_images,
     boards,
@@ -38,7 +33,11 @@ from .api.routers import (
     utilities,
     workflows,
 )
-from .api.sockets import SocketIO
+from invokeai.app.api.sockets import SocketIO
+from invokeai.app.services.config.config_default import get_config
+from invokeai.app.util.custom_openapi import get_openapi_func
+from invokeai.backend.util.devices import TorchDevice
+from invokeai.backend.util.logging import InvokeAILogger
 
 app_config = get_config()
 
@@ -40,7 +40,7 @@ from invokeai.app.util.misc import uuid_string
 from invokeai.backend.util.logging import InvokeAILogger
 
 if TYPE_CHECKING:
-    from ..services.invocation_services import InvocationServices
+    from invokeai.app.services.invocation_services import InvocationServices
 
 logger = InvokeAILogger.get_logger()
 
@@ -4,13 +4,12 @@
 import numpy as np
 from pydantic import ValidationInfo, field_validator
 
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import InputField
 from invokeai.app.invocations.primitives import IntegerCollectionOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.misc import SEED_MAX
 
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField
-
 
 @invocation(
     "range", title="Integer Range", tags=["collection", "integer", "range"], category="collections", version="1.0.0"
@@ -5,6 +5,7 @@ from compel import Compel, ReturnedEmbeddingsType
 from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
 from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
 
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
 from invokeai.app.invocations.fields import (
     ConditioningField,
     FieldDescriptions,
@@ -14,6 +15,7 @@ from invokeai.app.invocations.fields import (
     TensorField,
     UIComponent,
 )
+from invokeai.app.invocations.model import CLIPField
 from invokeai.app.invocations.primitives import ConditioningOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.ti_utils import generate_ti_list
@@ -26,9 +28,6 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
 )
 from invokeai.backend.util.devices import TorchDevice
 
-from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
-from .model import CLIPField
-
 # unconditioned: Optional[torch.Tensor]
 
 
@@ -1,6 +1,5 @@
 from typing import Literal
 
-from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
 from invokeai.backend.util.devices import TorchDevice
 
 LATENT_SCALE_FACTOR = 8
@@ -11,9 +10,6 @@ factor is hard-coded to a literal '8' rather than using this constant.
 The ratio of image:latent dimensions is LATENT_SCALE_FACTOR:1, or 8:1.
 """
 
-SCHEDULER_NAME_VALUES = Literal[tuple(SCHEDULER_MAP.keys())]
-"""A literal type representing the valid scheduler names."""
-
 IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
 """A literal type for PIL image modes supported by Invoke"""
 
@@ -22,6 +22,13 @@ from controlnet_aux.util import HWC3, ade_palette
 from PIL import Image
 from pydantic import BaseModel, Field, field_validator, model_validator
 
+from invokeai.app.invocations.baseinvocation import (
+    BaseInvocation,
+    BaseInvocationOutput,
+    Classification,
+    invocation,
+    invocation_output,
+)
 from invokeai.app.invocations.fields import (
     FieldDescriptions,
     ImageField,
@@ -45,8 +52,6 @@ from invokeai.backend.image_util.lineart_anime import LineartAnimeProcessor
 from invokeai.backend.image_util.util import np_to_pil, pil_to_np
 from invokeai.backend.util.devices import TorchDevice
 
-from .baseinvocation import BaseInvocation, BaseInvocationOutput, Classification, invocation, invocation_output
-
 
 class ControlField(BaseModel):
     image: ImageField = Field(description="The control image")
@@ -5,13 +5,11 @@ import cv2 as cv
 import numpy
 from PIL import Image, ImageOps
 
-from invokeai.app.invocations.fields import ImageField
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
 from invokeai.app.invocations.primitives import ImageOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField, WithBoard, WithMetadata
-
 
 @invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.3.1")
 class CvInpaintInvocation(BaseInvocation, WithMetadata, WithBoard):
@@ -17,7 +17,7 @@ from torchvision.transforms.functional import resize as tv_resize
 from transformers import CLIPVisionModelWithProjection
 
 from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
-from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
+from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
 from invokeai.app.invocations.controlnet_image_processors import ControlField
 from invokeai.app.invocations.fields import (
     ConditioningField,
@@ -54,6 +54,7 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
     TextConditioningRegions,
 )
 from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
 from invokeai.backend.util.devices import TorchDevice
 from invokeai.backend.util.hotfixes import ControlNetModel
 from invokeai.backend.util.mask import to_standard_float_mask
@@ -6,6 +6,7 @@ import cv2
 import numpy
 from PIL import Image, ImageChops, ImageFilter, ImageOps
 
+from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
 from invokeai.app.invocations.constants import IMAGE_MODES
 from invokeai.app.invocations.fields import (
     ColorField,
@@ -21,8 +22,6 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
 from invokeai.backend.image_util.safety_checker import SafetyChecker
 
-from .baseinvocation import BaseInvocation, Classification, invocation
-
 
 @invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.1")
 class ShowImageInvocation(BaseInvocation):
@@ -3,7 +3,9 @@ from typing import Literal, get_args
 
 from PIL import Image
 
-from invokeai.app.invocations.fields import ColorField, ImageField
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import ColorField, ImageField, InputField, WithBoard, WithMetadata
+from invokeai.app.invocations.image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
 from invokeai.app.invocations.primitives import ImageOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.misc import SEED_MAX
@@ -14,10 +16,6 @@ from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch, in
 from invokeai.backend.image_util.infill_methods.tile import infill_tile
 from invokeai.backend.util.logging import InvokeAILogger
 
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField, WithBoard, WithMetadata
-from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
-
 logger = InvokeAILogger.get_logger()
 
 
@@ -5,12 +5,11 @@ from typing import Literal
 import numpy as np
 from pydantic import ValidationInfo, field_validator
 
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
 from invokeai.app.invocations.fields import FieldDescriptions, InputField
 from invokeai.app.invocations.primitives import FloatOutput, IntegerOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 
-from .baseinvocation import BaseInvocation, invocation
-
 
 @invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.1")
 class AddInvocation(BaseInvocation):
@@ -14,8 +14,7 @@ from invokeai.app.invocations.fields import (
 from invokeai.app.invocations.model import ModelIdentifierField
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
-
-from ...version import __version__
+from invokeai.version.invokeai_version import __version__
 
 
 class MetadataItemField(BaseModel):
@@ -3,18 +3,17 @@ from typing import List, Optional
 
 from pydantic import BaseModel, Field
 
-from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
-from invokeai.app.services.shared.invocation_context import InvocationContext
-from invokeai.app.shared.models import FreeUConfig
-from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
-
-from .baseinvocation import (
+from invokeai.app.invocations.baseinvocation import (
     BaseInvocation,
     BaseInvocationOutput,
     Classification,
     invocation,
     invocation_output,
 )
+from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
+from invokeai.app.services.shared.invocation_context import InvocationContext
+from invokeai.app.shared.models import FreeUConfig
+from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
 
 
 class ModelIdentifierField(BaseModel):
@@ -4,18 +4,12 @@
 import torch
 from pydantic import field_validator
 
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
 from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
 from invokeai.app.invocations.fields import FieldDescriptions, InputField, LatentsField, OutputField
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.misc import SEED_MAX
-
-from ...backend.util.devices import TorchDevice
-from .baseinvocation import (
-    BaseInvocation,
-    BaseInvocationOutput,
-    invocation,
-    invocation_output,
-)
+from invokeai.backend.util.devices import TorchDevice
 
 """
 Utilities
@@ -39,12 +39,11 @@ from easing_functions import (
 )
 from matplotlib.ticker import MaxNLocator
 
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import InputField
 from invokeai.app.invocations.primitives import FloatCollectionOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField
-
 
 @invocation(
     "float_range",
@@ -4,6 +4,7 @@ from typing import Optional
 
 import torch
 
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
 from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
 from invokeai.app.invocations.fields import (
     ColorField,
@@ -21,13 +22,6 @@ from invokeai.app.invocations.fields import (
 from invokeai.app.services.images.images_common import ImageDTO
 from invokeai.app.services.shared.invocation_context import InvocationContext
 
-from .baseinvocation import (
-    BaseInvocation,
-    BaseInvocationOutput,
-    invocation,
-    invocation_output,
-)
-
 """
 Primitives: Boolean, Integer, Float, String, Image, Latents, Conditioning, Color
 - primitive nodes
@@ -5,12 +5,11 @@ import numpy as np
 from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator
 from pydantic import field_validator
 
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import InputField, UIComponent
 from invokeai.app.invocations.primitives import StringCollectionOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField, UIComponent
-
 
 @invocation(
     "dynamic_prompt",
@@ -1,5 +1,4 @@
 from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
-from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
 from invokeai.app.invocations.fields import (
     FieldDescriptions,
     InputField,
@@ -7,6 +6,7 @@ from invokeai.app.invocations.fields import (
     UIType,
 )
 from invokeai.app.services.shared.invocation_context import InvocationContext
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
 
 
 @invocation_output("scheduler_output")
@@ -1,15 +1,9 @@
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
 from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, UIType
+from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, UNetField, VAEField
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.backend.model_manager import SubModelType
 
-from .baseinvocation import (
-    BaseInvocation,
-    BaseInvocationOutput,
-    invocation,
-    invocation_output,
-)
-from .model import CLIPField, ModelIdentifierField, UNetField, VAEField
-
 
 @invocation_output("sdxl_model_loader_output")
 class SDXLModelLoaderOutput(BaseInvocationOutput):
@@ -2,17 +2,11 @@
 
 import re
 
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
+from invokeai.app.invocations.fields import InputField, OutputField, UIComponent
+from invokeai.app.invocations.primitives import StringOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 
-from .baseinvocation import (
-    BaseInvocation,
-    BaseInvocationOutput,
-    invocation,
-    invocation_output,
-)
-from .fields import InputField, OutputField, UIComponent
-from .primitives import StringOutput
-
 
 @invocation_output("string_pos_neg_output")
 class StringPosNegOutput(BaseInvocationOutput):
@@ -8,7 +8,7 @@ from diffusers.schedulers.scheduling_utils import SchedulerMixin
 from pydantic import field_validator
 
 from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
-from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
+from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
 from invokeai.app.invocations.controlnet_image_processors import ControlField
 from invokeai.app.invocations.denoise_latents import DenoiseLatentsInvocation, get_scheduler
 from invokeai.app.invocations.fields import (
@@ -29,6 +29,7 @@ from invokeai.backend.stable_diffusion.multi_diffusion_pipeline import (
     MultiDiffusionPipeline,
     MultiDiffusionRegionConditioning,
 )
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
 from invokeai.backend.tiles.tiles import (
     calc_tiles_min_overlap,
 )
@ -6,15 +6,13 @@ import numpy as np
|
|||||||
from PIL import Image
|
from PIL import Image
|
||||||
from pydantic import ConfigDict
|
from pydantic import ConfigDict
|
||||||
|
|
||||||
from invokeai.app.invocations.fields import ImageField
|
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
|
||||||
|
from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
|
||||||
from invokeai.app.invocations.primitives import ImageOutput
|
from invokeai.app.invocations.primitives import ImageOutput
|
||||||
from invokeai.app.services.shared.invocation_context import InvocationContext
|
from invokeai.app.services.shared.invocation_context import InvocationContext
|
||||||
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
|
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
|
||||||
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
|
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
|
||||||
|
|
||||||
from .baseinvocation import BaseInvocation, invocation
|
|
||||||
from .fields import InputField, WithBoard, WithMetadata
|
|
||||||
|
|
||||||
# TODO: Populate this from disk?
|
# TODO: Populate this from disk?
|
||||||
# TODO: Use model manager to load?
|
# TODO: Use model manager to load?
|
||||||
ESRGAN_MODELS = Literal[
|
ESRGAN_MODELS = Literal[
|
||||||
|
@ -2,12 +2,11 @@ import sqlite3
|
|||||||
import threading
|
import threading
|
||||||
from typing import Optional, cast
|
from typing import Optional, cast
|
||||||
|
|
||||||
|
from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
|
||||||
from invokeai.app.services.image_records.image_records_common import ImageRecord, deserialize_image_record
|
from invokeai.app.services.image_records.image_records_common import ImageRecord, deserialize_image_record
|
||||||
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
||||||
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
|
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
|
||||||
|
|
||||||
from .board_image_records_base import BoardImageRecordStorageBase
|
|
||||||
|
|
||||||
|
|
||||||
class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
|
class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
|
||||||
_conn: sqlite3.Connection
|
_conn: sqlite3.Connection
|
||||||
|
@ -1,9 +1,8 @@
|
|||||||
from typing import Optional
|
from typing import Optional
|
||||||
|
|
||||||
|
from invokeai.app.services.board_images.board_images_base import BoardImagesServiceABC
|
||||||
from invokeai.app.services.invoker import Invoker
|
from invokeai.app.services.invoker import Invoker
|
||||||
|
|
||||||
from .board_images_base import BoardImagesServiceABC
|
|
||||||
|
|
||||||
|
|
||||||
class BoardImagesService(BoardImagesServiceABC):
|
class BoardImagesService(BoardImagesServiceABC):
|
||||||
__invoker: Invoker
|
__invoker: Invoker
|
||||||
|
@ -1,9 +1,8 @@
|
|||||||
from abc import ABC, abstractmethod
|
from abc import ABC, abstractmethod
|
||||||
|
|
||||||
|
from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecord
|
||||||
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
||||||
|
|
||||||
from .board_records_common import BoardChanges, BoardRecord
|
|
||||||
|
|
||||||
|
|
||||||
class BoardRecordStorageBase(ABC):
|
class BoardRecordStorageBase(ABC):
|
||||||
"""Low-level service responsible for interfacing with the board record store."""
|
"""Low-level service responsible for interfacing with the board record store."""
|
||||||
|
@ -24,6 +24,8 @@ class BoardRecord(BaseModelExcludeNull):
|
|||||||
"""The name of the cover image of the board."""
|
"""The name of the cover image of the board."""
|
||||||
archived: bool = Field(description="Whether or not the board is archived.")
|
archived: bool = Field(description="Whether or not the board is archived.")
|
||||||
"""Whether or not the board is archived."""
|
"""Whether or not the board is archived."""
|
||||||
|
is_private: Optional[bool] = Field(default=None, description="Whether the board is private.")
|
||||||
|
"""Whether the board is private."""
|
||||||
|
|
||||||
|
|
||||||
def deserialize_board_record(board_dict: dict) -> BoardRecord:
|
def deserialize_board_record(board_dict: dict) -> BoardRecord:
|
||||||
@ -38,6 +40,7 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
|
|||||||
updated_at = board_dict.get("updated_at", get_iso_timestamp())
|
updated_at = board_dict.get("updated_at", get_iso_timestamp())
|
||||||
deleted_at = board_dict.get("deleted_at", get_iso_timestamp())
|
deleted_at = board_dict.get("deleted_at", get_iso_timestamp())
|
||||||
archived = board_dict.get("archived", False)
|
archived = board_dict.get("archived", False)
|
||||||
|
is_private = board_dict.get("is_private", False)
|
||||||
|
|
||||||
return BoardRecord(
|
return BoardRecord(
|
||||||
board_id=board_id,
|
board_id=board_id,
|
||||||
@ -47,6 +50,7 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
|
|||||||
updated_at=updated_at,
|
updated_at=updated_at,
|
||||||
deleted_at=deleted_at,
|
deleted_at=deleted_at,
|
||||||
archived=archived,
|
archived=archived,
|
||||||
|
is_private=is_private,
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@ -2,12 +2,8 @@ import sqlite3
|
|||||||
import threading
|
import threading
|
||||||
from typing import Union, cast
|
from typing import Union, cast
|
||||||
|
|
||||||
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
from invokeai.app.services.board_records.board_records_base import BoardRecordStorageBase
|
||||||
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
|
from invokeai.app.services.board_records.board_records_common import (
|
||||||
from invokeai.app.util.misc import uuid_string
|
|
||||||
|
|
||||||
from .board_records_base import BoardRecordStorageBase
|
|
||||||
from .board_records_common import (
|
|
||||||
BoardChanges,
|
BoardChanges,
|
||||||
BoardRecord,
|
BoardRecord,
|
||||||
BoardRecordDeleteException,
|
BoardRecordDeleteException,
|
||||||
@ -15,6 +11,9 @@ from .board_records_common import (
|
|||||||
BoardRecordSaveException,
|
BoardRecordSaveException,
|
||||||
deserialize_board_record,
|
deserialize_board_record,
|
||||||
)
|
)
|
||||||
|
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
||||||
|
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
|
||||||
|
from invokeai.app.util.misc import uuid_string
|
||||||
|
|
||||||
|
|
||||||
class SqliteBoardRecordStorage(BoardRecordStorageBase):
|
class SqliteBoardRecordStorage(BoardRecordStorageBase):
|
||||||
|
@ -1,10 +1,9 @@
|
|||||||
from abc import ABC, abstractmethod
|
from abc import ABC, abstractmethod
|
||||||
|
|
||||||
from invokeai.app.services.board_records.board_records_common import BoardChanges
|
from invokeai.app.services.board_records.board_records_common import BoardChanges
|
||||||
|
from invokeai.app.services.boards.boards_common import BoardDTO
|
||||||
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
||||||
|
|
||||||
from .boards_common import BoardDTO
|
|
||||||
|
|
||||||
|
|
||||||
class BoardServiceABC(ABC):
|
class BoardServiceABC(ABC):
|
||||||
"""High-level service for board management."""
|
"""High-level service for board management."""
|
||||||
|
@ -2,7 +2,7 @@ from typing import Optional
|
|||||||
|
|
||||||
from pydantic import Field
|
from pydantic import Field
|
||||||
|
|
||||||
from ..board_records.board_records_common import BoardRecord
|
from invokeai.app.services.board_records.board_records_common import BoardRecord
|
||||||
|
|
||||||
|
|
||||||
class BoardDTO(BoardRecord):
|
class BoardDTO(BoardRecord):
|
||||||
|
@ -1,11 +1,9 @@
|
|||||||
from invokeai.app.services.board_records.board_records_common import BoardChanges
|
from invokeai.app.services.board_records.board_records_common import BoardChanges
|
||||||
from invokeai.app.services.boards.boards_common import BoardDTO
|
from invokeai.app.services.boards.boards_base import BoardServiceABC
|
||||||
|
from invokeai.app.services.boards.boards_common import BoardDTO, board_record_to_dto
|
||||||
from invokeai.app.services.invoker import Invoker
|
from invokeai.app.services.invoker import Invoker
|
||||||
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
||||||
|
|
||||||
from .boards_base import BoardServiceABC
|
|
||||||
from .boards_common import board_record_to_dto
|
|
||||||
|
|
||||||
|
|
||||||
class BoardService(BoardServiceABC):
|
class BoardService(BoardServiceABC):
|
||||||
__invoker: Invoker
|
__invoker: Invoker
|
||||||
|
@ -4,6 +4,7 @@ from typing import Optional, Union
|
|||||||
from zipfile import ZipFile
|
from zipfile import ZipFile
|
||||||
|
|
||||||
from invokeai.app.services.board_records.board_records_common import BoardRecordNotFoundException
|
from invokeai.app.services.board_records.board_records_common import BoardRecordNotFoundException
|
||||||
|
from invokeai.app.services.bulk_download.bulk_download_base import BulkDownloadBase
|
||||||
from invokeai.app.services.bulk_download.bulk_download_common import (
|
from invokeai.app.services.bulk_download.bulk_download_common import (
|
||||||
DEFAULT_BULK_DOWNLOAD_ID,
|
DEFAULT_BULK_DOWNLOAD_ID,
|
||||||
BulkDownloadException,
|
BulkDownloadException,
|
||||||
@ -15,8 +16,6 @@ from invokeai.app.services.images.images_common import ImageDTO
|
|||||||
from invokeai.app.services.invoker import Invoker
|
from invokeai.app.services.invoker import Invoker
|
||||||
from invokeai.app.util.misc import uuid_string
|
from invokeai.app.util.misc import uuid_string
|
||||||
|
|
||||||
from .bulk_download_base import BulkDownloadBase
|
|
||||||
|
|
||||||
|
|
||||||
class BulkDownloadService(BulkDownloadBase):
|
class BulkDownloadService(BulkDownloadBase):
|
||||||
def start(self, invoker: Invoker) -> None:
|
def start(self, invoker: Invoker) -> None:
|
||||||
|
@ -1,7 +1,6 @@
|
|||||||
"""Init file for InvokeAI configure package."""
|
"""Init file for InvokeAI configure package."""
|
||||||
|
|
||||||
from invokeai.app.services.config.config_common import PagingArgumentParser
|
from invokeai.app.services.config.config_common import PagingArgumentParser
|
||||||
|
from invokeai.app.services.config.config_default import InvokeAIAppConfig, get_config
|
||||||
from .config_default import InvokeAIAppConfig, get_config
|
|
||||||
|
|
||||||
__all__ = ["InvokeAIAppConfig", "get_config", "PagingArgumentParser"]
|
__all__ = ["InvokeAIAppConfig", "get_config", "PagingArgumentParser"]
|
||||||
|
@ -1,13 +1,13 @@
|
|||||||
"""Init file for download queue."""
|
"""Init file for download queue."""
|
||||||
|
|
||||||
from .download_base import (
|
from invokeai.app.services.download.download_base import (
|
||||||
DownloadJob,
|
DownloadJob,
|
||||||
DownloadJobStatus,
|
DownloadJobStatus,
|
||||||
DownloadQueueServiceBase,
|
DownloadQueueServiceBase,
|
||||||
MultiFileDownloadJob,
|
MultiFileDownloadJob,
|
||||||
UnknownJobIDException,
|
UnknownJobIDException,
|
||||||
)
|
)
|
||||||
from .download_default import DownloadQueueService, TqdmProgress
|
from invokeai.app.services.download.download_default import DownloadQueueService, TqdmProgress
|
||||||
|
|
||||||
__all__ = [
|
__all__ = [
|
||||||
"DownloadJob",
|
"DownloadJob",
|
||||||
|
@ -16,12 +16,7 @@ from requests import HTTPError
|
|||||||
from tqdm import tqdm
|
from tqdm import tqdm
|
||||||
|
|
||||||
from invokeai.app.services.config import InvokeAIAppConfig, get_config
|
from invokeai.app.services.config import InvokeAIAppConfig, get_config
|
||||||
from invokeai.app.services.events.events_base import EventServiceBase
|
from invokeai.app.services.download.download_base import (
|
||||||
from invokeai.app.util.misc import get_iso_timestamp
|
|
||||||
from invokeai.backend.model_manager.metadata import RemoteModelFile
|
|
||||||
from invokeai.backend.util.logging import InvokeAILogger
|
|
||||||
|
|
||||||
from .download_base import (
|
|
||||||
DownloadEventHandler,
|
DownloadEventHandler,
|
||||||
DownloadExceptionHandler,
|
DownloadExceptionHandler,
|
||||||
DownloadJob,
|
DownloadJob,
|
||||||
@ -33,6 +28,10 @@ from .download_base import (
|
|||||||
ServiceInactiveException,
|
ServiceInactiveException,
|
||||||
UnknownJobIDException,
|
UnknownJobIDException,
|
||||||
)
|
)
|
||||||
|
from invokeai.app.services.events.events_base import EventServiceBase
|
||||||
|
from invokeai.app.util.misc import get_iso_timestamp
|
||||||
|
from invokeai.backend.model_manager.metadata import RemoteModelFile
|
||||||
|
from invokeai.backend.util.logging import InvokeAILogger
|
||||||
|
|
||||||
# Maximum number of bytes to download during each call to requests.iter_content()
|
# Maximum number of bytes to download during each call to requests.iter_content()
|
||||||
DOWNLOAD_CHUNK_SIZE = 100000
|
DOWNLOAD_CHUNK_SIZE = 100000
|
||||||
@ -185,7 +184,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
|
|||||||
job = DownloadJob(
|
job = DownloadJob(
|
||||||
source=url,
|
source=url,
|
||||||
dest=path,
|
dest=path,
|
||||||
access_token=access_token,
|
access_token=access_token or self._lookup_access_token(url),
|
||||||
)
|
)
|
||||||
mfdj.download_parts.add(job)
|
mfdj.download_parts.add(job)
|
||||||
self._download_part2parent[job.source] = mfdj
|
self._download_part2parent[job.source] = mfdj
|
||||||
|
@ -6,12 +6,11 @@ from queue import Empty, Queue
|
|||||||
|
|
||||||
from fastapi_events.dispatcher import dispatch
|
from fastapi_events.dispatcher import dispatch
|
||||||
|
|
||||||
|
from invokeai.app.services.events.events_base import EventServiceBase
|
||||||
from invokeai.app.services.events.events_common import (
|
from invokeai.app.services.events.events_common import (
|
||||||
EventBase,
|
EventBase,
|
||||||
)
|
)
|
||||||
|
|
||||||
from .events_base import EventServiceBase
|
|
||||||
|
|
||||||
|
|
||||||
class FastAPIEventService(EventServiceBase):
|
class FastAPIEventService(EventServiceBase):
|
||||||
def __init__(self, event_handler_id: int) -> None:
|
def __init__(self, event_handler_id: int) -> None:
|
||||||
|
@ -7,12 +7,15 @@ from PIL import Image, PngImagePlugin
|
|||||||
from PIL.Image import Image as PILImageType
|
from PIL.Image import Image as PILImageType
|
||||||
from send2trash import send2trash
|
from send2trash import send2trash
|
||||||
|
|
||||||
|
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
|
||||||
|
from invokeai.app.services.image_files.image_files_common import (
|
||||||
|
ImageFileDeleteException,
|
||||||
|
ImageFileNotFoundException,
|
||||||
|
ImageFileSaveException,
|
||||||
|
)
|
||||||
from invokeai.app.services.invoker import Invoker
|
from invokeai.app.services.invoker import Invoker
|
||||||
from invokeai.app.util.thumbnails import get_thumbnail_name, make_thumbnail
|
from invokeai.app.util.thumbnails import get_thumbnail_name, make_thumbnail
|
||||||
|
|
||||||
from .image_files_base import ImageFileStorageBase
|
|
||||||
from .image_files_common import ImageFileDeleteException, ImageFileNotFoundException, ImageFileSaveException
|
|
||||||
|
|
||||||
|
|
||||||
class DiskImageFileStorage(ImageFileStorageBase):
|
class DiskImageFileStorage(ImageFileStorageBase):
|
||||||
"""Stores images on disk"""
|
"""Stores images on disk"""
|
||||||
|
@ -3,11 +3,15 @@ from datetime import datetime
|
|||||||
from typing import Optional
|
from typing import Optional
|
||||||
|
|
||||||
from invokeai.app.invocations.fields import MetadataField
|
from invokeai.app.invocations.fields import MetadataField
|
||||||
|
from invokeai.app.services.image_records.image_records_common import (
|
||||||
|
ImageCategory,
|
||||||
|
ImageRecord,
|
||||||
|
ImageRecordChanges,
|
||||||
|
ResourceOrigin,
|
||||||
|
)
|
||||||
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
||||||
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
|
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
|
||||||
|
|
||||||
from .image_records_common import ImageCategory, ImageRecord, ImageRecordChanges, ResourceOrigin
|
|
||||||
|
|
||||||
|
|
||||||
class ImageRecordStorageBase(ABC):
|
class ImageRecordStorageBase(ABC):
|
||||||
"""Low-level service responsible for interfacing with the image record store."""
|
"""Low-level service responsible for interfacing with the image record store."""
|
||||||
|
@ -4,12 +4,8 @@ from datetime import datetime
|
|||||||
from typing import Optional, Union, cast
|
from typing import Optional, Union, cast
|
||||||
|
|
||||||
from invokeai.app.invocations.fields import MetadataField, MetadataFieldValidator
|
from invokeai.app.invocations.fields import MetadataField, MetadataFieldValidator
|
||||||
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
from invokeai.app.services.image_records.image_records_base import ImageRecordStorageBase
|
||||||
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
|
from invokeai.app.services.image_records.image_records_common import (
|
||||||
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
|
|
||||||
|
|
||||||
from .image_records_base import ImageRecordStorageBase
|
|
||||||
from .image_records_common import (
|
|
||||||
IMAGE_DTO_COLS,
|
IMAGE_DTO_COLS,
|
||||||
ImageCategory,
|
ImageCategory,
|
||||||
ImageRecord,
|
ImageRecord,
|
||||||
@ -20,6 +16,9 @@ from .image_records_common import (
|
|||||||
ResourceOrigin,
|
ResourceOrigin,
|
||||||
deserialize_image_record,
|
deserialize_image_record,
|
||||||
)
|
)
|
||||||
|
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
||||||
|
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
|
||||||
|
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
|
||||||
|
|
||||||
|
|
||||||
class SqliteImageRecordStorage(ImageRecordStorageBase):
|
class SqliteImageRecordStorage(ImageRecordStorageBase):
|
||||||
|
@ -3,16 +3,12 @@ from typing import Optional
|
|||||||
from PIL.Image import Image as PILImageType
|
from PIL.Image import Image as PILImageType
|
||||||
|
|
||||||
from invokeai.app.invocations.fields import MetadataField
|
from invokeai.app.invocations.fields import MetadataField
|
||||||
from invokeai.app.services.invoker import Invoker
|
from invokeai.app.services.image_files.image_files_common import (
|
||||||
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
|
||||||
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
|
|
||||||
|
|
||||||
from ..image_files.image_files_common import (
|
|
||||||
ImageFileDeleteException,
|
ImageFileDeleteException,
|
||||||
ImageFileNotFoundException,
|
ImageFileNotFoundException,
|
||||||
ImageFileSaveException,
|
ImageFileSaveException,
|
||||||
)
|
)
|
||||||
from ..image_records.image_records_common import (
|
from invokeai.app.services.image_records.image_records_common import (
|
||||||
ImageCategory,
|
ImageCategory,
|
||||||
ImageRecord,
|
ImageRecord,
|
||||||
ImageRecordChanges,
|
ImageRecordChanges,
|
||||||
@ -23,8 +19,11 @@ from ..image_records.image_records_common import (
|
|||||||
InvalidOriginException,
|
InvalidOriginException,
|
||||||
ResourceOrigin,
|
ResourceOrigin,
|
||||||
)
|
)
|
||||||
from .images_base import ImageServiceABC
|
from invokeai.app.services.images.images_base import ImageServiceABC
|
||||||
from .images_common import ImageDTO, image_record_to_dto
|
from invokeai.app.services.images.images_common import ImageDTO, image_record_to_dto
|
||||||
|
from invokeai.app.services.invoker import Invoker
|
||||||
|
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
|
||||||
|
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
|
||||||
|
|
||||||
|
|
||||||
class ImageService(ImageServiceABC):
|
class ImageService(ImageServiceABC):
|
||||||
|
@ -10,29 +10,28 @@ if TYPE_CHECKING:
|
|||||||
|
|
||||||
import torch
|
import torch
|
||||||
|
|
||||||
|
from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
|
||||||
|
from invokeai.app.services.board_images.board_images_base import BoardImagesServiceABC
|
||||||
|
from invokeai.app.services.board_records.board_records_base import BoardRecordStorageBase
|
||||||
|
from invokeai.app.services.boards.boards_base import BoardServiceABC
|
||||||
|
from invokeai.app.services.bulk_download.bulk_download_base import BulkDownloadBase
|
||||||
|
from invokeai.app.services.config import InvokeAIAppConfig
|
||||||
|
from invokeai.app.services.download import DownloadQueueServiceBase
|
||||||
|
from invokeai.app.services.events.events_base import EventServiceBase
|
||||||
|
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
|
||||||
|
from invokeai.app.services.image_records.image_records_base import ImageRecordStorageBase
|
||||||
|
from invokeai.app.services.images.images_base import ImageServiceABC
|
||||||
|
from invokeai.app.services.invocation_cache.invocation_cache_base import InvocationCacheBase
|
||||||
|
from invokeai.app.services.invocation_stats.invocation_stats_base import InvocationStatsServiceBase
|
||||||
|
from invokeai.app.services.model_images.model_images_base import ModelImageFileStorageBase
|
||||||
|
from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
|
||||||
|
from invokeai.app.services.names.names_base import NameServiceBase
|
||||||
|
from invokeai.app.services.session_processor.session_processor_base import SessionProcessorBase
|
||||||
|
from invokeai.app.services.session_queue.session_queue_base import SessionQueueBase
|
||||||
|
from invokeai.app.services.urls.urls_base import UrlServiceBase
|
||||||
|
from invokeai.app.services.workflow_records.workflow_records_base import WorkflowRecordsStorageBase
|
||||||
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
|
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
|
||||||
|
|
||||||
from .board_image_records.board_image_records_base import BoardImageRecordStorageBase
|
|
||||||
from .board_images.board_images_base import BoardImagesServiceABC
|
|
||||||
from .board_records.board_records_base import BoardRecordStorageBase
|
|
||||||
from .boards.boards_base import BoardServiceABC
|
|
||||||
from .bulk_download.bulk_download_base import BulkDownloadBase
|
|
||||||
from .config import InvokeAIAppConfig
|
|
||||||
from .download import DownloadQueueServiceBase
|
|
||||||
from .events.events_base import EventServiceBase
|
|
||||||
from .image_files.image_files_base import ImageFileStorageBase
|
|
||||||
from .image_records.image_records_base import ImageRecordStorageBase
|
|
||||||
from .images.images_base import ImageServiceABC
|
|
||||||
from .invocation_cache.invocation_cache_base import InvocationCacheBase
|
|
||||||
from .invocation_stats.invocation_stats_base import InvocationStatsServiceBase
|
|
||||||
from .model_images.model_images_base import ModelImageFileStorageBase
|
|
||||||
from .model_manager.model_manager_base import ModelManagerServiceBase
|
|
||||||
from .names.names_base import NameServiceBase
|
|
||||||
from .session_processor.session_processor_base import SessionProcessorBase
|
|
||||||
from .session_queue.session_queue_base import SessionQueueBase
|
|
||||||
from .urls.urls_base import UrlServiceBase
|
|
||||||
from .workflow_records.workflow_records_base import WorkflowRecordsStorageBase
|
|
||||||
|
|
||||||
|
|
||||||
class InvocationServices:
|
class InvocationServices:
|
||||||
"""Services that can be used by invocations"""
|
"""Services that can be used by invocations"""
|
||||||
|
@ -9,11 +9,8 @@ import torch
|
|||||||
|
|
||||||
import invokeai.backend.util.logging as logger
|
import invokeai.backend.util.logging as logger
|
||||||
from invokeai.app.invocations.baseinvocation import BaseInvocation
|
from invokeai.app.invocations.baseinvocation import BaseInvocation
|
||||||
from invokeai.app.services.invoker import Invoker
|
from invokeai.app.services.invocation_stats.invocation_stats_base import InvocationStatsServiceBase
|
||||||
from invokeai.backend.model_manager.load.model_cache import CacheStats
|
from invokeai.app.services.invocation_stats.invocation_stats_common import (
|
||||||
|
|
||||||
from .invocation_stats_base import InvocationStatsServiceBase
|
|
||||||
from .invocation_stats_common import (
|
|
||||||
GESStatsNotFoundError,
|
GESStatsNotFoundError,
|
||||||
GraphExecutionStats,
|
GraphExecutionStats,
|
||||||
GraphExecutionStatsSummary,
|
GraphExecutionStatsSummary,
|
||||||
@ -22,6 +19,8 @@ from .invocation_stats_common import (
|
|||||||
NodeExecutionStats,
|
NodeExecutionStats,
|
||||||
NodeExecutionStatsSummary,
|
NodeExecutionStatsSummary,
|
||||||
)
|
)
|
||||||
|
from invokeai.app.services.invoker import Invoker
|
||||||
|
from invokeai.backend.model_manager.load.model_cache import CacheStats
|
||||||
|
|
||||||
# Size of 1GB in bytes.
|
# Size of 1GB in bytes.
|
||||||
GB = 2**30
|
GB = 2**30
|
||||||
|
@ -1,7 +1,7 @@
|
|||||||
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
|
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
|
||||||
|
|
||||||
|
|
||||||
from .invocation_services import InvocationServices
|
from invokeai.app.services.invocation_services import InvocationServices
|
||||||
|
|
||||||
|
|
||||||
class Invoker:
|
class Invoker:
|
||||||
|
@ -5,15 +5,14 @@ from PIL.Image import Image as PILImageType
|
|||||||
from send2trash import send2trash
|
from send2trash import send2trash
|
||||||
|
|
||||||
from invokeai.app.services.invoker import Invoker
|
from invokeai.app.services.invoker import Invoker
|
||||||
from invokeai.app.util.misc import uuid_string
|
from invokeai.app.services.model_images.model_images_base import ModelImageFileStorageBase
|
||||||
from invokeai.app.util.thumbnails import make_thumbnail
|
from invokeai.app.services.model_images.model_images_common import (
|
||||||
|
|
||||||
from .model_images_base import ModelImageFileStorageBase
|
|
||||||
from .model_images_common import (
|
|
||||||
ModelImageFileDeleteException,
|
ModelImageFileDeleteException,
|
||||||
ModelImageFileNotFoundException,
|
ModelImageFileNotFoundException,
|
||||||
ModelImageFileSaveException,
|
ModelImageFileSaveException,
|
||||||
)
|
)
|
||||||
|
from invokeai.app.util.misc import uuid_string
|
||||||
|
from invokeai.app.util.thumbnails import make_thumbnail
|
||||||
|
|
||||||
|
|
||||||
class ModelImageFileStorageDisk(ModelImageFileStorageBase):
|
class ModelImageFileStorageDisk(ModelImageFileStorageBase):
|
||||||
|
@ -1,9 +1,7 @@
|
|||||||
"""Initialization file for model install service package."""
|
"""Initialization file for model install service package."""
|
||||||
|
|
||||||
from .model_install_base import (
|
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
|
||||||
ModelInstallServiceBase,
|
from invokeai.app.services.model_install.model_install_common import (
|
||||||
)
|
|
||||||
from .model_install_common import (
|
|
||||||
HFModelSource,
|
HFModelSource,
|
||||||
InstallStatus,
|
InstallStatus,
|
||||||
LocalModelSource,
|
LocalModelSource,
|
||||||
@ -12,7 +10,7 @@ from .model_install_common import (
|
|||||||
UnknownInstallJobException,
|
UnknownInstallJobException,
|
||||||
URLModelSource,
|
URLModelSource,
|
||||||
)
|
)
|
||||||
from .model_install_default import ModelInstallService
|
from invokeai.app.services.model_install.model_install_default import ModelInstallService
|
||||||
|
|
||||||
__all__ = [
|
__all__ = [
|
||||||
"ModelInstallServiceBase",
|
"ModelInstallServiceBase",
|
||||||
|
@ -23,6 +23,16 @@ from invokeai.app.services.download import DownloadQueueServiceBase, MultiFileDo
|
|||||||
from invokeai.app.services.events.events_base import EventServiceBase
|
from invokeai.app.services.events.events_base import EventServiceBase
|
||||||
from invokeai.app.services.invoker import Invoker
|
from invokeai.app.services.invoker import Invoker
|
||||||
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
|
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
|
||||||
|
from invokeai.app.services.model_install.model_install_common import (
|
||||||
|
MODEL_SOURCE_TO_TYPE_MAP,
|
||||||
|
HFModelSource,
|
||||||
|
InstallStatus,
|
||||||
|
LocalModelSource,
|
||||||
|
ModelInstallJob,
|
||||||
|
ModelSource,
|
||||||
|
StringLikeSource,
|
||||||
|
URLModelSource,
|
||||||
|
)
|
||||||
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase
|
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase
|
||||||
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
|
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
|
||||||
from invokeai.backend.model_manager.config import (
|
from invokeai.backend.model_manager.config import (
|
||||||
@ -47,17 +57,6 @@ from invokeai.backend.util.catch_sigint import catch_sigint
|
|||||||
from invokeai.backend.util.devices import TorchDevice
|
from invokeai.backend.util.devices import TorchDevice
|
||||||
from invokeai.backend.util.util import slugify
|
from invokeai.backend.util.util import slugify
|
||||||
|
|
||||||
from .model_install_common import (
|
|
||||||
MODEL_SOURCE_TO_TYPE_MAP,
|
|
||||||
HFModelSource,
|
|
||||||
InstallStatus,
|
|
||||||
LocalModelSource,
|
|
||||||
ModelInstallJob,
|
|
||||||
ModelSource,
|
|
||||||
StringLikeSource,
|
|
||||||
URLModelSource,
|
|
||||||
)
|
|
||||||
|
|
||||||
TMPDIR_PREFIX = "tmpinstall_"
|
TMPDIR_PREFIX = "tmpinstall_"
|
||||||
|
|
||||||
|
|
||||||
@@ -848,7 +847,7 @@ class ModelInstallService(ModelInstallServiceBase):
         with self._lock:
             if install_job := self._download_cache.pop(download_job.id, None):
                 assert excp is not None
-                install_job.set_error(excp)
+                self._set_error(install_job, excp)
                 self._download_queue.cancel_job(download_job)
 
             # Let other threads know that the number of downloads has changed
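The hunk above routes download failures through a single `_set_error` helper instead of mutating the job directly. A minimal sketch of why centralizing the error path matters (all names here are hypothetical, not the actual InvokeAI API): every failure site then triggers the same state change, event emission, and logging.

```python
import logging

logging.basicConfig(level=logging.INFO)


class InstallJob:
    """A hypothetical install job that records a terminal error state."""

    def __init__(self, job_id):
        self.id = job_id
        self.error = None

    def set_error(self, excp):
        self.error = excp


class InstallService:
    """Sketch of a service that funnels every failure through one helper."""

    def __init__(self):
        self.events = []

    def _set_error(self, job, excp):
        job.set_error(excp)  # update job state
        self.events.append(f"install_error:{job.id}")  # emit the event exactly once
        logging.getLogger(__name__).error("job %s failed: %s", job.id, excp)


service = InstallService()
job = InstallJob(1)
service._set_error(job, RuntimeError("download failed"))
assert isinstance(job.error, RuntimeError)
assert service.events == ["install_error:1"]
```

With a direct `job.set_error(excp)` call, the side effects (event, log) would have to be repeated at each call site and can drift out of sync; a single helper keeps them consistent.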
@@ -1,6 +1,6 @@
 """Initialization file for model load service module."""
 
-from .model_load_base import ModelLoadServiceBase
-from .model_load_default import ModelLoadService
+from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
+from invokeai.app.services.model_load.model_load_default import ModelLoadService
 
 __all__ = ["ModelLoadServiceBase", "ModelLoadService"]
@@ -10,6 +10,7 @@ from torch import load as torch_load
 
 from invokeai.app.services.config import InvokeAIAppConfig
 from invokeai.app.services.invoker import Invoker
+from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
 from invokeai.backend.model_manager import AnyModel, AnyModelConfig, SubModelType
 from invokeai.backend.model_manager.load import (
     LoadedModel,

@@ -22,8 +23,6 @@ from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import
 from invokeai.backend.util.devices import TorchDevice
 from invokeai.backend.util.logging import InvokeAILogger
 
-from .model_load_base import ModelLoadServiceBase
-
 
 class ModelLoadService(ModelLoadServiceBase):
     """Wrapper around ModelLoaderRegistry."""
@@ -1,10 +1,9 @@
 """Initialization file for model manager service."""
 
+from invokeai.app.services.model_manager.model_manager_default import ModelManagerService, ModelManagerServiceBase
 from invokeai.backend.model_manager import AnyModel, AnyModelConfig, BaseModelType, ModelType, SubModelType
 from invokeai.backend.model_manager.load import LoadedModel
 
-from .model_manager_default import ModelManagerService, ModelManagerServiceBase
-
 __all__ = [
     "ModelManagerServiceBase",
     "ModelManagerService",
@@ -5,14 +5,13 @@ from abc import ABC, abstractmethod
 import torch
 from typing_extensions import Self
 
+from invokeai.app.services.config.config_default import InvokeAIAppConfig
+from invokeai.app.services.download.download_base import DownloadQueueServiceBase
+from invokeai.app.services.events.events_base import EventServiceBase
 from invokeai.app.services.invoker import Invoker
-
-from ..config import InvokeAIAppConfig
-from ..download import DownloadQueueServiceBase
-from ..events.events_base import EventServiceBase
-from ..model_install import ModelInstallServiceBase
-from ..model_load import ModelLoadServiceBase
-from ..model_records import ModelRecordServiceBase
+from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
+from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
+from invokeai.app.services.model_records.model_records_base import ModelRecordServiceBase
 
 
 class ModelManagerServiceBase(ABC):
@@ -6,19 +6,20 @@ from typing import Optional
 import torch
 from typing_extensions import Self
 
+from invokeai.app.services.config.config_default import InvokeAIAppConfig
+from invokeai.app.services.download.download_base import DownloadQueueServiceBase
+from invokeai.app.services.events.events_base import EventServiceBase
 from invokeai.app.services.invoker import Invoker
+from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
+from invokeai.app.services.model_install.model_install_default import ModelInstallService
+from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
+from invokeai.app.services.model_load.model_load_default import ModelLoadService
+from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
+from invokeai.app.services.model_records.model_records_base import ModelRecordServiceBase
 from invokeai.backend.model_manager.load import ModelCache, ModelLoaderRegistry
 from invokeai.backend.util.devices import TorchDevice
 from invokeai.backend.util.logging import InvokeAILogger
 
-from ..config import InvokeAIAppConfig
-from ..download import DownloadQueueServiceBase
-from ..events.events_base import EventServiceBase
-from ..model_install import ModelInstallService, ModelInstallServiceBase
-from ..model_load import ModelLoadService, ModelLoadServiceBase
-from ..model_records import ModelRecordServiceBase
-from .model_manager_base import ModelManagerServiceBase
-
 
 class ModelManagerService(ModelManagerServiceBase):
     """
@@ -45,17 +45,7 @@ from math import ceil
 from pathlib import Path
 from typing import List, Optional, Union
 
-from invokeai.app.services.shared.pagination import PaginatedResults
-from invokeai.backend.model_manager.config import (
-    AnyModelConfig,
-    BaseModelType,
-    ModelConfigFactory,
-    ModelFormat,
-    ModelType,
-)
-
-from ..shared.sqlite.sqlite_database import SqliteDatabase
-from .model_records_base import (
+from invokeai.app.services.model_records.model_records_base import (
     DuplicateModelException,
     ModelRecordChanges,
     ModelRecordOrderBy,

@@ -63,6 +53,15 @@ from .model_records_base import (
     ModelSummary,
     UnknownModelException,
 )
+from invokeai.app.services.shared.pagination import PaginatedResults
+from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
+from invokeai.backend.model_manager.config import (
+    AnyModelConfig,
+    BaseModelType,
+    ModelConfigFactory,
+    ModelFormat,
+    ModelType,
+)
 
 
 class ModelRecordServiceSQL(ModelRecordServiceBase):
@@ -1,7 +1,6 @@
+from invokeai.app.services.names.names_base import NameServiceBase
 from invokeai.app.util.misc import uuid_string
 
-from .names_base import NameServiceBase
-
 
 class SimpleNameService(NameServiceBase):
     """Creates image names from UUIDs."""
@@ -13,24 +13,24 @@ from invokeai.app.services.events.events_common import (
     register_events,
 )
 from invokeai.app.services.invocation_stats.invocation_stats_common import GESStatsNotFoundError
+from invokeai.app.services.invoker import Invoker
 from invokeai.app.services.session_processor.session_processor_base import (
+    InvocationServices,
     OnAfterRunNode,
     OnAfterRunSession,
     OnBeforeRunNode,
     OnBeforeRunSession,
     OnNodeError,
     OnNonFatalProcessorError,
+    SessionProcessorBase,
+    SessionRunnerBase,
 )
-from invokeai.app.services.session_processor.session_processor_common import CanceledException
+from invokeai.app.services.session_processor.session_processor_common import CanceledException, SessionProcessorStatus
 from invokeai.app.services.session_queue.session_queue_common import SessionQueueItem, SessionQueueItemNotFoundError
 from invokeai.app.services.shared.graph import NodeInputError
 from invokeai.app.services.shared.invocation_context import InvocationContextData, build_invocation_context
 from invokeai.app.util.profiler import Profiler
-
-from ..invoker import Invoker
-from .session_processor_base import InvocationServices, SessionProcessorBase, SessionRunnerBase
-from .session_processor_common import SessionProcessorStatus
 
 
 class DefaultSessionRunner(SessionRunnerBase):
     """Processes a single session's invocations."""
@@ -1,6 +1,6 @@
 import os
 
-from .urls_base import UrlServiceBase
+from invokeai.app.services.urls.urls_base import UrlServiceBase
 
 
 class LocalUrlService(UrlServiceBase):
@@ -5,9 +5,8 @@ from PIL import Image
 
 from invokeai.app.services.session_processor.session_processor_common import CanceledException, ProgressImage
 from invokeai.backend.model_manager.config import BaseModelType
-
-from ...backend.stable_diffusion import PipelineIntermediateState
-from ...backend.util.util import image_to_dataURL
+from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
+from invokeai.backend.util.util import image_to_dataURL
 
 if TYPE_CHECKING:
     from invokeai.app.services.events.events_base import EventServiceBase
@@ -2,6 +2,11 @@
 Initialization file for invokeai.backend.image_util methods.
 """
 
-from .infill_methods.patchmatch import PatchMatch  # noqa: F401
-from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata  # noqa: F401
-from .util import InitImageResizer, make_grid  # noqa: F401
+from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch  # noqa: F401
+from invokeai.backend.image_util.pngwriter import (  # noqa: F401
+    PngWriter,
+    PromptFormatter,
+    retrieve_metadata,
+    write_metadata,
+)
+from invokeai.backend.image_util.util import InitImageResizer, make_grid  # noqa: F401
@@ -2,7 +2,7 @@ import torch
 from torch import nn as nn
 from torch.nn import functional as F
 
-from .arch_util import default_init_weights, make_layer, pixel_unshuffle
+from invokeai.backend.image_util.basicsr.arch_util import default_init_weights, make_layer, pixel_unshuffle
 
 
 class ResidualDenseBlock(nn.Module):
@@ -4,7 +4,7 @@ import torch
 import torch.nn as nn
 import torch.nn.functional as F
 
-from .blocks import FeatureFusionBlock, _make_scratch
+from invokeai.backend.image_util.depth_anything.model.blocks import FeatureFusionBlock, _make_scratch
 
 torchhub_path = Path(__file__).parent.parent / "torchhub"
 
@@ -8,11 +8,10 @@ import numpy as np
 import onnxruntime as ort
 
 from invokeai.app.services.config.config_default import get_config
+from invokeai.backend.image_util.dw_openpose.onnxdet import inference_detector
+from invokeai.backend.image_util.dw_openpose.onnxpose import inference_pose
 from invokeai.backend.util.devices import TorchDevice
-
-from .onnxdet import inference_detector
-from .onnxpose import inference_pose
 
 config = get_config()
 
 
@@ -11,9 +11,8 @@ from PIL import Image
 from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
 
 from invokeai.backend.ip_adapter.ip_attention_weights import IPAttentionWeights
-
-from ..raw_model import RawModel
-from .resampler import Resampler
+from invokeai.backend.ip_adapter.resampler import Resampler
+from invokeai.backend.raw_model import RawModel
 
 
 class IPAdapterStateDict(TypedDict):
@@ -10,10 +10,9 @@ from safetensors.torch import load_file
 from typing_extensions import Self
 
 from invokeai.backend.model_manager import BaseModelType
+from invokeai.backend.raw_model import RawModel
 from invokeai.backend.util.devices import TorchDevice
-
-from .raw_model import RawModel
 
 
 class LoRALayerBase:
     # rank: Optional[int]
@@ -1,6 +1,6 @@
 """Re-export frequently-used symbols from the Model Manager backend."""
 
-from .config import (
+from invokeai.backend.model_manager.config import (
     AnyModel,
     AnyModelConfig,
     BaseModelType,

@@ -13,9 +13,9 @@ from .config import (
     SchedulerPredictionType,
     SubModelType,
 )
-from .load import LoadedModel
-from .probe import ModelProbe
-from .search import ModelSearch
+from invokeai.backend.model_manager.load import LoadedModel
+from invokeai.backend.model_manager.probe import ModelProbe
+from invokeai.backend.model_manager.search import ModelSearch
 
 __all__ = [
     "AnyModel",
@@ -30,11 +30,10 @@ from diffusers.models.modeling_utils import ModelMixin
 from pydantic import BaseModel, ConfigDict, Discriminator, Field, Tag, TypeAdapter
 from typing_extensions import Annotated, Any, Dict
 
-from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
 from invokeai.app.util.misc import uuid_string
 from invokeai.backend.model_hash.hash_validator import validate_hash
-
-from ..raw_model import RawModel
+from invokeai.backend.raw_model import RawModel
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
 
 # ModelMixin is the base class for all diffusers and transformers models
 # RawModel is the InvokeAI wrapper class for ip_adapters, loras, textual_inversion and onnx runtime
@@ -1,75 +0,0 @@
-import ctypes
-
-
-class Struct_mallinfo2(ctypes.Structure):
-    """A ctypes Structure that matches the libc mallinfo2 struct.
-
-    Docs:
-    - https://man7.org/linux/man-pages/man3/mallinfo.3.html
-    - https://www.gnu.org/software/libc/manual/html_node/Statistics-of-Malloc.html
-
-    struct mallinfo2 {
-        size_t arena;     /* Non-mmapped space allocated (bytes) */
-        size_t ordblks;   /* Number of free chunks */
-        size_t smblks;    /* Number of free fastbin blocks */
-        size_t hblks;     /* Number of mmapped regions */
-        size_t hblkhd;    /* Space allocated in mmapped regions (bytes) */
-        size_t usmblks;   /* See below */
-        size_t fsmblks;   /* Space in freed fastbin blocks (bytes) */
-        size_t uordblks;  /* Total allocated space (bytes) */
-        size_t fordblks;  /* Total free space (bytes) */
-        size_t keepcost;  /* Top-most, releasable space (bytes) */
-    };
-    """
-
-    _fields_ = [
-        ("arena", ctypes.c_size_t),
-        ("ordblks", ctypes.c_size_t),
-        ("smblks", ctypes.c_size_t),
-        ("hblks", ctypes.c_size_t),
-        ("hblkhd", ctypes.c_size_t),
-        ("usmblks", ctypes.c_size_t),
-        ("fsmblks", ctypes.c_size_t),
-        ("uordblks", ctypes.c_size_t),
-        ("fordblks", ctypes.c_size_t),
-        ("keepcost", ctypes.c_size_t),
-    ]
-
-    def __str__(self):
-        s = ""
-        s += f"{'arena': <10}= {(self.arena/2**30):15.5f} # Non-mmapped space allocated (GB) (uordblks + fordblks)\n"
-        s += f"{'ordblks': <10}= {(self.ordblks): >15} # Number of free chunks\n"
-        s += f"{'smblks': <10}= {(self.smblks): >15} # Number of free fastbin blocks \n"
-        s += f"{'hblks': <10}= {(self.hblks): >15} # Number of mmapped regions \n"
-        s += f"{'hblkhd': <10}= {(self.hblkhd/2**30):15.5f} # Space allocated in mmapped regions (GB)\n"
-        s += f"{'usmblks': <10}= {(self.usmblks): >15} # Unused\n"
-        s += f"{'fsmblks': <10}= {(self.fsmblks/2**30):15.5f} # Space in freed fastbin blocks (GB)\n"
-        s += (
-            f"{'uordblks': <10}= {(self.uordblks/2**30):15.5f} # Space used by in-use allocations (non-mmapped)"
-            " (GB)\n"
-        )
-        s += f"{'fordblks': <10}= {(self.fordblks/2**30):15.5f} # Space in free blocks (non-mmapped) (GB)\n"
-        s += f"{'keepcost': <10}= {(self.keepcost/2**30):15.5f} # Top-most, releasable space (GB)\n"
-        return s
-
-
-class LibcUtil:
-    """A utility class for interacting with the C Standard Library (`libc`) via ctypes.
-
-    Note that this class will raise on __init__() if 'libc.so.6' can't be found. Take care to handle environments where
-    this shared library is not available.
-
-    TODO: Improve cross-OS compatibility of this class.
-    """
-
-    def __init__(self):
-        self._libc = ctypes.cdll.LoadLibrary("libc.so.6")
-
-    def mallinfo2(self) -> Struct_mallinfo2:
-        """Calls `libc` `mallinfo2`.
-
-        Docs: https://man7.org/linux/man-pages/man3/mallinfo.3.html
-        """
-        mallinfo2 = self._libc.mallinfo2
-        mallinfo2.restype = Struct_mallinfo2
-        return mallinfo2()
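The relocated `libc_util.py` above depends on the `ctypes.Structure` field layout matching glibc's C struct exactly. A quick, platform-independent way to sanity-check such a struct definition is to compare its size against the expected field layout; the snippet below is a standalone sketch (it redeclares the struct rather than importing it), and the optional call into `libc.so.6` is guarded because `mallinfo2` only exists on glibc 2.33 and later.

```python
import ctypes


class Struct_mallinfo2(ctypes.Structure):
    """Mirror of glibc's `struct mallinfo2`: ten size_t fields."""

    _fields_ = [
        ("arena", ctypes.c_size_t),
        ("ordblks", ctypes.c_size_t),
        ("smblks", ctypes.c_size_t),
        ("hblks", ctypes.c_size_t),
        ("hblkhd", ctypes.c_size_t),
        ("usmblks", ctypes.c_size_t),
        ("fsmblks", ctypes.c_size_t),
        ("uordblks", ctypes.c_size_t),
        ("fordblks", ctypes.c_size_t),
        ("keepcost", ctypes.c_size_t),
    ]


# All fields share one size, so the struct packs with no padding.
assert ctypes.sizeof(Struct_mallinfo2) == 10 * ctypes.sizeof(ctypes.c_size_t)

# Actually calling mallinfo2() requires glibc >= 2.33 on Linux; guard accordingly.
try:
    libc = ctypes.cdll.LoadLibrary("libc.so.6")
    libc.mallinfo2.restype = Struct_mallinfo2
    info = libc.mallinfo2()
    print(f"total allocated (uordblks): {info.uordblks} bytes")
except (OSError, AttributeError):
    print("mallinfo2 via libc.so.6 not available on this platform")
```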
@@ -6,10 +6,10 @@ Init file for the model loader.
 from importlib import import_module
 from pathlib import Path
 
-from .load_base import LoadedModel, LoadedModelWithoutConfig, ModelLoaderBase
-from .load_default import ModelLoader
-from .model_cache.model_cache_default import ModelCache
-from .model_loader_registry import ModelLoaderRegistry, ModelLoaderRegistryBase
+from invokeai.backend.model_manager.load.load_base import LoadedModel, LoadedModelWithoutConfig, ModelLoaderBase
+from invokeai.backend.model_manager.load.load_default import ModelLoader
+from invokeai.backend.model_manager.load.model_cache.model_cache_default import ModelCache
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry, ModelLoaderRegistryBase
 
 # This registers the subclasses that implement loaders of specific model types
 loaders = [x.stem for x in Path(Path(__file__).parent, "model_loaders").glob("*.py") if x.stem != "__init__"]
@@ -5,7 +5,7 @@ import psutil
 import torch
 from typing_extensions import Self
 
-from ..util.libc_util import LibcUtil, Struct_mallinfo2
+from invokeai.backend.model_manager.util.libc_util import LibcUtil, Struct_mallinfo2
 
 GB = 2**30  # 1 GB
 
@@ -29,13 +29,17 @@ import torch
 
 from invokeai.backend.model_manager import AnyModel, SubModelType
 from invokeai.backend.model_manager.load.memory_snapshot import MemorySnapshot, get_pretty_snapshot_diff
+from invokeai.backend.model_manager.load.model_cache.model_cache_base import (
+    CacheRecord,
+    CacheStats,
+    ModelCacheBase,
+    ModelLockerBase,
+)
+from invokeai.backend.model_manager.load.model_cache.model_locker import ModelLocker
 from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data
 from invokeai.backend.util.devices import TorchDevice
 from invokeai.backend.util.logging import InvokeAILogger
 
-from .model_cache_base import CacheRecord, CacheStats, ModelCacheBase, ModelLockerBase
-from .model_locker import ModelLocker
-
 # Maximum size of the cache, in gigs
 # Default is roughly enough to hold three fp16 diffusers models in RAM simultaneously
 DEFAULT_MAX_CACHE_SIZE = 6.0
@@ -7,8 +7,11 @@ from typing import Dict, Optional
 import torch
 
 from invokeai.backend.model_manager import AnyModel
-
-from .model_cache_base import CacheRecord, ModelCacheBase, ModelLockerBase
+from invokeai.backend.model_manager.load.model_cache.model_cache_base import (
+    CacheRecord,
+    ModelCacheBase,
+    ModelLockerBase,
+)
 
 
 class ModelLocker(ModelLockerBase):
@@ -18,7 +18,7 @@ Use like this:
 from abc import ABC, abstractmethod
 from typing import Callable, Dict, Optional, Tuple, Type, TypeVar
 
-from ..config import (
+from invokeai.backend.model_manager.config import (
     AnyModelConfig,
     BaseModelType,
     ModelConfigBase,

@@ -26,7 +26,7 @@ from ..config import (
     ModelType,
     SubModelType,
 )
-from . import ModelLoaderBase
+from invokeai.backend.model_manager.load import ModelLoaderBase
 
 
 class ModelLoaderRegistryBase(ABC):
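The `ModelLoaderRegistry` that the loader modules below import implements a decorator-based registry: each loader class registers itself under a (base, type, format) key at import time, which is why the package `__init__` imports every module in `model_loaders/`. A minimal sketch of that pattern, with hypothetical names (not the actual InvokeAI signatures):

```python
from typing import Callable, Dict, Tuple, Type


class LoaderRegistry:
    """Maps a (base, model_type, fmt) key to the loader class that handles it."""

    _registry: Dict[Tuple[str, str, str], Type] = {}

    @classmethod
    def register(cls, base: str, type: str, format: str) -> Callable[[Type], Type]:
        def decorator(loader_cls: Type) -> Type:
            # Record the class under its key, then return it unchanged so the
            # decorated class remains usable as a normal class.
            cls._registry[(base, type, format)] = loader_cls
            return loader_cls

        return decorator

    @classmethod
    def get(cls, base: str, type: str, format: str) -> Type:
        return cls._registry[(base, type, format)]


@LoaderRegistry.register(base="any", type="vae", format="diffusers")
class VAELoader:
    pass


assert LoaderRegistry.get("any", "vae", "diffusers") is VAELoader
```

Because registration is a side effect of the decorator, simply importing a loader module makes its loaders discoverable; no central list of loader classes has to be maintained.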
@@ -13,9 +13,8 @@ from invokeai.backend.model_manager import (
     ModelType,
 )
 from invokeai.backend.model_manager.config import ControlNetCheckpointConfig, SubModelType
-
-from .. import ModelLoaderRegistry
-from .generic_diffusers import GenericDiffusersLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
 
 
 @ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.ControlNet, format=ModelFormat.Diffusers)
@@ -18,8 +18,8 @@ from invokeai.backend.model_manager import (
     SubModelType,
 )
 from invokeai.backend.model_manager.config import DiffusersConfigBase
-
-from .. import ModelLoader, ModelLoaderRegistry
+from invokeai.backend.model_manager.load.load_default import ModelLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
 
 
 @ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.CLIPVision, format=ModelFormat.Diffusers)
@@ -15,9 +15,9 @@ from invokeai.backend.model_manager import (
     ModelType,
     SubModelType,
 )
+from invokeai.backend.model_manager.load.load_default import ModelLoader
 from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase
-
-from .. import ModelLoader, ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
 
 
 @ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.LoRA, format=ModelFormat.Diffusers)
@@ -13,9 +13,8 @@ from invokeai.backend.model_manager import (
     ModelType,
     SubModelType,
 )
-
-from .. import ModelLoaderRegistry
-from .generic_diffusers import GenericDiffusersLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
 
 
 @ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.ONNX, format=ModelFormat.ONNX)
@@ -25,11 +25,10 @@ from invokeai.backend.model_manager.config import (
     DiffusersConfigBase,
     MainCheckpointConfig,
 )
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
 from invokeai.backend.util.silence_warnings import SilenceWarnings
-
-from .. import ModelLoaderRegistry
-from .generic_diffusers import GenericDiffusersLoader
 
 VARIANT_TO_IN_CHANNEL_MAP = {
     ModelVariantType.Normal: 4,
     ModelVariantType.Depth: 5,
@@ -12,10 +12,10 @@ from invokeai.backend.model_manager import (
     ModelType,
     SubModelType,
 )
+from invokeai.backend.model_manager.load.load_default import ModelLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
 from invokeai.backend.textual_inversion import TextualInversionModelRaw
-
-from .. import ModelLoader, ModelLoaderRegistry
 
 
 @ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.TextualInversion, format=ModelFormat.EmbeddingFile)
 @ModelLoaderRegistry.register(
@@ -12,9 +12,8 @@ from invokeai.backend.model_manager import (
     ModelType,
 )
 from invokeai.backend.model_manager.config import AnyModel, SubModelType, VAECheckpointConfig
-
-from .. import ModelLoaderRegistry
-from .generic_diffusers import GenericDiffusersLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
 
 
 @ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.VAE, format=ModelFormat.Diffusers)
@@ -39,7 +39,7 @@ def calc_model_size_by_data(logger: logging.Logger, model: AnyModel) -> int:
     else:
         # TODO(ryand): Promote this from a log to an exception once we are confident that we are handling all of the
         # supported model types.
-        logger.error(
+        logger.warning(
            f"Failed to calculate model size for unexpected model type: {type(model)}. The model will be treated as "
            "having size 0."
        )
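The fallback path touched by this hunk can be sketched as follows. This is a simplified, hypothetical stand-in for `calc_model_size_by_data` (the real function inspects torch modules and sums tensor sizes; the `known_sizes` lookup here is a placeholder), shown only to illustrate the downgraded log level and the size-0 fallback:

```python
import logging

logger = logging.getLogger("model_size")

def calc_model_size_by_data(model) -> int:
    """Simplified sketch, not the real implementation: known model types
    would be measured; unknown types log a warning and report size 0."""
    known_sizes = {}  # placeholder: the real code measures torch modules
    size = known_sizes.get(type(model))
    if size is not None:
        return size
    # Downgraded from logger.error to logger.warning in the hunk above.
    logger.warning(
        f"Failed to calculate model size for unexpected model type: {type(model)}. "
        "The model will be treated as having size 0."
    )
    return 0

print(calc_model_size_by_data(object()))  # → 0
```

Keeping this a warning rather than an error matches the TODO in the hunk: the log is a stopgap until all supported model types are handled.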
@@ -17,16 +17,10 @@ from diffusers.utils import logging as dlogging
 
 from invokeai.app.services.model_install import ModelInstallServiceBase
 from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
+from invokeai.backend.model_manager import AnyModelConfig, BaseModelType, ModelType, ModelVariantType
+from invokeai.backend.model_manager.config import MainDiffusersConfig
 from invokeai.backend.util.devices import TorchDevice
 
-from . import (
-    AnyModelConfig,
-    BaseModelType,
-    ModelType,
-    ModelVariantType,
-)
-from .config import MainDiffusersConfig
-
 
 class MergeInterpolationMethod(str, Enum):
     WeightedSum = "weighted_sum"
@@ -16,8 +16,8 @@ data = HuggingFaceMetadataFetch().from_id("<REPO_ID>")
 assert isinstance(data, HuggingFaceMetadata)
 """
 
-from .fetch import HuggingFaceMetadataFetch, ModelMetadataFetchBase
-from .metadata_base import (
+from invokeai.backend.model_manager.metadata.fetch import HuggingFaceMetadataFetch, ModelMetadataFetchBase
+from invokeai.backend.model_manager.metadata.metadata_base import (
     AnyModelRepoMetadata,
     AnyModelRepoMetadataValidator,
     BaseMetadata,
@@ -10,7 +10,7 @@ data = HuggingFaceMetadataFetch().from_id("<repo_id>")
 assert isinstance(data, HuggingFaceMetadata)
 """
 
-from .fetch_base import ModelMetadataFetchBase
-from .huggingface import HuggingFaceMetadataFetch
+from invokeai.backend.model_manager.metadata.fetch.fetch_base import ModelMetadataFetchBase
+from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
 
 __all__ = ["ModelMetadataFetchBase", "HuggingFaceMetadataFetch"]
@@ -18,8 +18,11 @@ from pydantic.networks import AnyHttpUrl
 from requests.sessions import Session
 
 from invokeai.backend.model_manager import ModelRepoVariant
-
-from ..metadata_base import AnyModelRepoMetadata, AnyModelRepoMetadataValidator, BaseMetadata
+from invokeai.backend.model_manager.metadata.metadata_base import (
+    AnyModelRepoMetadata,
+    AnyModelRepoMetadataValidator,
+    BaseMetadata,
+)
 
 
 class ModelMetadataFetchBase(ABC):
@@ -25,14 +25,13 @@ from pydantic.networks import AnyHttpUrl
 from requests.sessions import Session
 
 from invokeai.backend.model_manager.config import ModelRepoVariant
-
-from ..metadata_base import (
+from invokeai.backend.model_manager.metadata.fetch.fetch_base import ModelMetadataFetchBase
+from invokeai.backend.model_manager.metadata.metadata_base import (
     AnyModelRepoMetadata,
     HuggingFaceMetadata,
     RemoteModelFile,
     UnknownMetadataException,
 )
-from .fetch_base import ModelMetadataFetchBase
 
 HF_MODEL_RE = r"https?://huggingface.co/([\w\-.]+/[\w\-.]+)"
 
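The `HF_MODEL_RE` pattern, carried over unchanged in the hunk above, extracts the `owner/name` repo id from a Hugging Face URL. A quick sketch of how it behaves (the URL below is an arbitrary example, not taken from the source):

```python
import re

HF_MODEL_RE = r"https?://huggingface.co/([\w\-.]+/[\w\-.]+)"

match = re.match(HF_MODEL_RE, "https://huggingface.co/stabilityai/sdxl-turbo")
print(match.group(1))  # → stabilityai/sdxl-turbo
```

Note that `re.match` anchors at the start of the string only, and the character class `[\w\-.]` permits word characters, hyphens, and dots in both the owner and the model name.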
Some files were not shown because too many files have changed in this diff.