Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Merge branch 'main' into feat_compel_and

This commit is contained in commit e7d9e552a7.
@@ -16,7 +16,7 @@ If you don't feel ready to make a code contribution yet, no problem! You can als

There are two paths to making a development contribution:

1. Choosing an open issue to address. Open issues can be found in the [Issues](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen) section of the InvokeAI repository. These are tagged by the issue type (bug, enhancement, etc.) along with the “good first issues” tag denoting if they are suitable for first time contributors.
-1. Additional items can be found on our roadmap <link to roadmap>. The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item you’d like to help with, reach out to the contributor assigned to the item to see how you can help.
+1. Additional items can be found on our [roadmap](https://github.com/orgs/invoke-ai/projects/7). The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item you’d like to help with, reach out to the contributor assigned to the item to see how you can help.
2. Opening a new issue or feature to add. **Please make sure you have searched through existing issues before creating new ones.**

*Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors time and effort and want to ensure that no one’s time is being misspent.*
@@ -4,6 +4,9 @@ title: Overview

Here you can find the documentation for InvokeAI's various features.

## The [Getting Started Guide](../help/gettingStartedWithAI)

A getting started guide for those new to AI image generation.

## The Basics

### * The [Web User Interface](WEB.md)

Guide to the Web interface. Also see the [WebUI Hotkeys Reference Guide](WEBUIHOTKEYS.md)

@@ -46,7 +49,7 @@ Personalize models by adding your own style or subjects.

## Other Features

-### * [The NSFW Checker](NSFW.md)
+### * [The NSFW Checker](WATERMARK+NSFW.md)

Prevent InvokeAI from displaying unwanted racy images.

### * [Controlling Logging](LOGGING.md)
docs/help/gettingStartedWithAI.md (new file, 95 lines)

@@ -0,0 +1,95 @@
# Getting Started with AI Image Generation

New to image generation with AI? You’re in the right place!

This is a high level walkthrough of some of the concepts and terms you’ll see as you start using InvokeAI. Please note, this is not an exhaustive guide and may be out of date due to the rapidly changing nature of the space.

## Using InvokeAI

### **Prompt Crafting**

- Prompts are the basis of using InvokeAI, providing the model with directions on what to generate. As a general rule of thumb, the more detailed your prompt is, the better your result will be.

*To get started, here’s an easy template to use for structuring your prompts:*

- Subject, Style, Quality, Aesthetic
  - **Subject:** What your image will be about. E.g. “a futuristic city with trains”, “penguins floating on icebergs”, “friends sharing beers”
  - **Style:** The style or medium of your image. E.g. “photograph”, “pencil sketch”, “oil paints”, or “pop art”, “cubism”, “abstract”
  - **Quality:** A particular aspect or trait that you would like to see emphasized in your image. E.g. "award-winning", "featured in {relevant set of high quality works}", "professionally acclaimed". Many people often use "masterpiece".
  - **Aesthetics:** The visual impact and design of the artwork. This can be colors, mood, lighting, setting, etc.
- There are two prompt boxes: *Positive Prompt* & *Negative Prompt*.
  - A **Positive** Prompt includes words you want the model to reference when creating an image.
  - A **Negative** Prompt is for anything you want the model to eliminate when creating an image. It doesn’t always interpret things exactly the way you would, but it helps control the generation process. Always try to include a few terms - you can typically use lower quality image terms like “blurry” or “distorted” with good success.
- Some example prompts you can try on your own:
  - A detailed oil painting of a tranquil forest at sunset with vibrant+ colors and soft, golden light filtering through the trees
  - friends sharing beers in a busy city, realistic colored pencil sketch, twilight, masterpiece, bright, lively
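
As a quick illustration of how the two boxes work together, here is a made-up pair (not from the official docs) built from the template above:

```
Positive prompt: penguins floating on icebergs, photograph, award-winning, cool blue tones, soft morning light
Negative prompt: blurry, distorted, low quality, oversaturated
```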

### Generation Workflows

- Invoke offers a number of different workflows for interacting with models to produce images. Each is extremely powerful on its own, but together they provide an unparalleled way of producing high quality creative outputs that align with your vision.
  - **Text to Image:** The text to image tab focuses on the key workflow of using a prompt to generate a new image. It includes other features that help control the generation process as well.
  - **Image to Image:** With image to image, you provide an image as a reference (called the “initial image”), which gives the AI more guidance around color and structure as it generates a new image. This is provided alongside the same features as Text to Image.
  - **Unified Canvas:** The Unified Canvas is an advanced AI-first image editing tool that is easy to use, but hard to master. Drag an image onto the canvas from your gallery in order to regenerate certain elements, edit content or colors (known as inpainting), or extend the image with an exceptional degree of consistency and clarity (called outpainting).

### Improving Image Quality

- Fine-tuning your prompt - the more specific you are, the closer the image will turn out to what is in your head! Adding more details in the Positive Prompt or Negative Prompt can help add / remove pieces of your image to improve it. You can also use advanced techniques like upweighting and downweighting to control the influence of certain words, as shown in the sketch after this list. [Learn more here](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#prompt-syntax-features).
- **Tip: If you’re seeing poor results, adding the things you don’t like about the image to your negative prompt may help. E.g. distorted, low quality, unrealistic, etc.**
- Explore different models - Other models can produce different results due to the data they’ve been trained on. Each model has specific language and settings it works best with; a model’s documentation is your friend here. Play around with some and see what works best for you!
- Increasing Steps - The number of steps controls how much time the model is given to produce an image, and depends on the “Scheduler” used. The scheduler controls how each step is processed by the model. More steps tends to mean better results, but will take longer - We recommend at least 30 steps for most generations.
- Tweak and Iterate - Remember, it’s best to change one thing at a time so you know what is working and what isn't. Sometimes you just need to try a new image, and other times using a new prompt might be the ticket. For testing, consider turning off the “random” Seed - Using the same seed with the same settings will produce the same image, which makes it the perfect way to learn exactly what your changes are doing.
- Explore Advanced Settings - InvokeAI has a full suite of tools available to allow you complete control over your image creation process - Check out our [docs if you want to learn more](https://invoke-ai.github.io/InvokeAI/features/).
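
A minimal sketch of that weighting syntax (the example prompts here are made up; see the prompt syntax docs linked above for the full feature set):

```
a tranquil forest at sunset, vibrant+ colors        # "+" nudges a term's influence up
a tranquil forest at sunset, (golden light)1.3      # an explicit numeric weight pushes harder
a tranquil forest at sunset, fog-                   # "-" nudges a term's influence down
```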

## Terms & Concepts

If you're interested in learning more, check out [this presentation](https://docs.google.com/presentation/d/1IO78i8oEXFTZ5peuHHYkVF-Y3e2M6iM5tCnc-YBfcCM/edit?usp=sharing) from one of our maintainers (@lstein).

### Stable Diffusion

Stable Diffusion is a deep learning, text-to-image model that is the foundation of the capabilities found in InvokeAI. Since the release of Stable Diffusion, there have been many subsequent models created based on Stable Diffusion that are designed to generate specific types of images.

### Prompts

Prompts provide the model with directions on what to generate. As a general rule of thumb, the more detailed your prompt is, the better your result will be.

### Models

Models are the magic that powers InvokeAI. These files represent the output of training a machine to understand massive amounts of images - giving it the capability to generate new images using just a text description of what you’d like to see. (Like Stable Diffusion!)

Invoke offers a simple way to download several different models upon installation, but many more can be discovered online, including at ****. Each model can produce a unique style of output, based on the images it was trained on - Try out different models to see which best fits your creative vision!

- *Models that contain “inpainting” in the name are designed for use with the inpainting feature of the Unified Canvas.*

### Scheduler

Schedulers guide the process of removing noise (de-noising) from data. They determine:

1. The number of steps to take to remove the noise.
2. Whether the steps are random (stochastic) or predictable (deterministic).
3. The specific method (algorithm) used for de-noising.

Experimenting with different schedulers is recommended as each will produce different outputs!
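
In the UI this is just a dropdown, but under the hood InvokeAI maps scheduler names onto `diffusers` scheduler classes (see the `SCHEDULER_MAP` import in the code later in this commit). A minimal standalone sketch of the same idea using the `diffusers` library directly; this is illustrative, not InvokeAI's internal code:

```
# Swapping schedulers on a diffusers pipeline (assumes `diffusers` and `torch` are installed).
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Replace the default scheduler; each scheduler de-noises differently, so the
# same prompt, seed, and step count will produce a noticeably different image.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("a tranquil forest at sunset", num_inference_steps=30).images[0]
image.save("forest.png")
```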

### Steps

The number of de-noising steps each generation runs through.

Schedulers can be intricate and there's often a balance to strike between how quickly they can de-noise data and how well they can do it. It's typically advised to experiment with different schedulers to see which one gives the best results. There has been a lot written on the internet about different schedulers, as well as about the right number of steps for each. You can save generation time by reducing the number of steps used, but you'll want to make sure that you are satisfied with the quality of images produced!

### Low-Rank Adaptations / LoRAs

Low-Rank Adaptations (LoRAs) are like a smaller, more focused version of a model, trained to give the model a better understanding of how a specific character, style, or concept looks.

### Textual Inversion Embeddings

Textual Inversion Embeddings, like LoRAs, assist with more easily prompting for certain characters, styles, or concepts. However, embeddings are trained to update the relationship between a specific word (known as the “trigger”) and the intended output.
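
In InvokeAI prompts, triggers are written in angle brackets; the ONNX invocation code later in this commit scans prompts for the pattern `<[a-zA-Z0-9., _-]+>`. A hypothetical example (the embedding name here is made up):

```
a portrait of a lighthouse keeper in the style of <my-embedding>, oil paints, masterpiece
```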

### ControlNet

ControlNets are neural network models that are able to extract key features from an existing image and use these features to guide the output of the image generation model.

### VAE

A variational autoencoder (VAE) is an encode/decode model that translates the "latents" image produced during the image generation process into the large pixel images that we see.
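
A minimal sketch of the decode half of this using the `diffusers` VAE class; the 0.18215 scale factor below also appears in the ONNX decode code later in this commit. Illustrative only, not InvokeAI's internal pipeline:

```
# Decoding Stable Diffusion latents to pixels (assumes `diffusers` and `torch` are installed).
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

latents = torch.randn(1, 4, 64, 64)      # a 64x64 latent decodes to a 512x512 image
latents = latents / 0.18215              # undo SD's latent scaling factor
with torch.no_grad():
    image = vae.decode(latents).sample   # tensor of shape (1, 3, 512, 512) in [-1, 1]
image = (image / 2 + 0.5).clamp(0, 1)    # rescale to [0, 1] for viewing
```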
@@ -11,6 +11,33 @@ title: Home

```
-->

+<!-- CSS styling -->
+<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@fortawesome/fontawesome-free@6.2.1/css/fontawesome.min.css">
+<style>
+.button {
+  width: 300px;
+  height: 50px;
+  background-color: #448AFF;
+  color: #fff;
+  font-size: 16px;
+  border: none;
+  cursor: pointer;
+  border-radius: 0.2rem;
+}
+
+.button-container {
+  display: grid;
+  grid-template-columns: repeat(3, 300px);
+  gap: 20px;
+}
+
+.button:hover {
+  background-color: #526CFE;
+}
+</style>

<div align="center" markdown>
@@ -70,63 +97,23 @@ image-to-image generator. It provides a streamlined process with various new
features and options to aid the image generation process. It runs on Windows,
Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.

-**Quick links**: [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>]
-[<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a
-href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a
-href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas &
-Q&A</a>]
-
-<div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>
-
-!!! note
+!!! Note

-    This software is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates. They will help aid diagnose issues faster.
+    This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates, as this will help improve response time.

-## :octicons-package-dependencies-24: Installation
+## :octicons-link-24: Quick Links

-This software is supported across Linux, Windows and Macintosh. Linux users can use
-either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
-driver).
-
-### [Installation Getting Started Guide](installation)
-#### **[Automated Installer](installation/010_INSTALL_AUTOMATED.md)**
-✅ This is the recommended installation method for first-time users.
-#### [Manual Installation](installation/020_INSTALL_MANUAL.md)
-This method is recommended for experienced users and developers
-#### [Docker Installation](installation/040_INSTALL_DOCKER.md)
-This method is recommended for those familiar with running Docker containers
-#### [Installation Troubleshooting](installation/010_INSTALL_AUTOMATED.md#troubleshooting)
-Installation troubleshooting guide.
-### Other Installation Guides
-- [PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md)
-- [XFormers](installation/070_INSTALL_XFORMERS.md)
-- [CUDA and ROCm Drivers](installation/030_INSTALL_CUDA_AND_ROCM.md)
-- [Installing New Models](installation/050_INSTALLING_MODELS.md)
-
-## :fontawesome-solid-computer: Hardware Requirements
-
-### :octicons-cpu-24: System
-
-You wil need one of the following:
-
-- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory.
-- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux only)
-- :fontawesome-brands-apple: An Apple computer with an M1 chip.
-
-We do **not recommend** the following video cards due to issues with their
-running in half-precision mode and having insufficient VRAM to render 512x512
-images in full-precision mode:
-
-- NVIDIA 10xx series cards such as the 1080ti
-- GTX 1650 series cards
-- GTX 1660 series cards
-
-### :fontawesome-solid-memory: Memory and Disk
-
-- At least 12 GB Main Memory RAM.
-- At least 18 GB of free disk space for the machine learning model, Python, and
-  all its dependencies.
+<div class="button-container">
+    <a href="installation/INSTALLATION"> <button class="button">Installation</button> </a>
+    <a href="features/"> <button class="button">Features</button> </a>
+    <a href="help/gettingStartedWithAI/"> <button class="button">Getting Started</button> </a>
+    <a href="contributing/CONTRIBUTING/"> <button class="button">Contributing</button> </a>
+    <a href="https://github.com/invoke-ai/InvokeAI/"> <button class="button">Code and Downloads</button> </a>
+    <a href="https://github.com/invoke-ai/InvokeAI/issues"> <button class="button">Bug Reports </button> </a>
+    <a href="https://discord.gg/ZmtBAhwWhy"> <button class="button"> Join the Discord Server!</button> </a>
+</div>

## :octicons-gift-24: InvokeAI Features
@@ -394,7 +394,7 @@ rm .\.venv -r -force
python -mvenv .venv
.\.venv\Scripts\activate
pip install invokeai
-invokeai-configure --root .
+invokeai-configure --yes --root .
```

If you see anything marked as an error during this process please stop
@@ -1,6 +1,4 @@
----
-title: Overview
----
+# Overview

We offer several ways to install InvokeAI, each one suited to your
experience and preferences. We suggest that everyone start by
@@ -15,6 +13,56 @@ See the [troubleshooting
section](010_INSTALL_AUTOMATED.md#troubleshooting) of the automated
install guide for frequently-encountered installation issues.

+This fork is supported across Linux, Windows and Macintosh. Linux users can use
+either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
+driver).
+
+### [Installation Getting Started Guide](installation)
+#### **[Automated Installer](010_INSTALL_AUTOMATED.md)**
+✅ This is the recommended installation method for first-time users.
+#### [Manual Installation](020_INSTALL_MANUAL.md)
+This method is recommended for experienced users and developers
+#### [Docker Installation](040_INSTALL_DOCKER.md)
+This method is recommended for those familiar with running Docker containers
+### Other Installation Guides
+- [PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md)
+- [XFormers](installation/070_INSTALL_XFORMERS.md)
+- [CUDA and ROCm Drivers](installation/030_INSTALL_CUDA_AND_ROCM.md)
+- [Installing New Models](installation/050_INSTALLING_MODELS.md)
+
+## :fontawesome-solid-computer: Hardware Requirements
+
+### :octicons-cpu-24: System
+
+You will need one of the following:
+
+- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory.
+- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux only)
+- :fontawesome-brands-apple: An Apple computer with an M1 chip.
+
+**SDXL 1.0 Requirements**
+To use SDXL, users must have one of the following:
+- :simple-nvidia: An NVIDIA-based graphics card with 8 GB or more VRAM memory.
+- :simple-amd: An AMD-based graphics card with 16 GB or more VRAM memory (Linux only)
+- :fontawesome-brands-apple: An Apple computer with an M1 chip.
+
+### :fontawesome-solid-memory: Memory and Disk
+
+- At least 12 GB Main Memory RAM.
+- At least 18 GB of free disk space for the machine learning model, Python, and
+  all its dependencies.
+
+We do **not recommend** the following video cards due to issues with their
+running in half-precision mode and having insufficient VRAM to render 512x512
+images in full-precision mode:
+
+- NVIDIA 10xx series cards such as the 1080ti
+- GTX 1650 series cards
+- GTX 1660 series cards

## Installation options

1. [Automated Installer](010_INSTALL_AUTOMATED.md)
@@ -14,23 +14,28 @@ The nodes linked below have been developed and contributed by members of the Inv

## List of Nodes

-### Face Mask
+### FaceTools

-**Description:** This node autodetects a face in the image using MediaPipe and masks it by making it transparent. Via outpainting you can swap faces with other faces, or invert the mask and swap things around the face with other things. Additionally, you can supply X and Y offset values to scale/change the shape of the mask for finer control. The node also outputs an all-white mask in the same dimensions as the input image. This is needed by the inpaint node (and unified canvas) for outpainting.
+**Description:** FaceTools is a collection of nodes created to manipulate faces as you would in Unified Canvas. It includes FaceMask, FaceOff, and FacePlace. FaceMask autodetects a face in the image using MediaPipe and creates a mask from it. FaceOff similarly detects a face, then takes the face off of the image by adding a square bounding box around it and cropping/scaling it. FacePlace puts the bounded face image from FaceOff back onto the original image. Using these nodes with other inpainting node(s), you can put new faces on existing things, put new things around existing faces, and work closer with a face as a bounded image. Additionally, you can supply X and Y offset values to scale/change the shape of the mask for finer control on FaceMask and FaceOff. See the GitHub repository below for usage examples.

-**Node Link:** https://github.com/ymgenesis/InvokeAI/blob/facemaskmediapipe/invokeai/app/invocations/facemask.py
+**Node Link:** https://github.com/ymgenesis/FaceTools/

-**Example Node Graph:** https://www.mediafire.com/file/gohn5sb1bfp8use/21-July_2023-FaceMask.json/file
+**FaceMask Output Examples**

-**Output Examples**
-![5cc8abce-53b0-487a-b891-3bf94dcc8960](https://github.com/invoke-ai/InvokeAI/assets/25252829/43f36d24-1429-4ab1-bd06-a4bedfe0955e)
-![b920b710-1882-49a0-8d02-82dff2cca907](https://github.com/invoke-ai/InvokeAI/assets/25252829/7660c1ed-bf7d-4d0a-947f-1fc1679557ba)
-![71a91805-fda5-481c-b380-264665703133](https://github.com/invoke-ai/InvokeAI/assets/25252829/f8f6a2ee-2b68-4482-87da-b90221d5c3e2)
+![2e3168cb-af6a-475d-bfac-c7b7fd67b4c2](https://github.com/ymgenesis/InvokeAI/assets/25252829/a5ad7d44-2ada-4b3c-a56e-a21f8244a1ac)
+![2_annotated](https://github.com/ymgenesis/InvokeAI/assets/25252829/53416c8a-a23b-4d76-bb6d-3cfd776e0096)
+![2fe2150c-fd08-4e26-8c36-f0610bf441bb](https://github.com/ymgenesis/InvokeAI/assets/25252829/b0f7ecfe-f093-4147-a904-b9f131b41dc9)
+![831b6b98-4f0f-4360-93c8-69a9c1338cbe](https://github.com/ymgenesis/InvokeAI/assets/25252829/fc7b0622-e361-4155-8a76-082894d084f0)
+<hr>
+
+### Ideal Size
+
+**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
+
+**Node Link:** https://github.com/JPPhoto/ideal-size-node

--------------------------------
-### Super Cool Node Template
+### Example Node Template

**Description:** This node allows you to do super cool things with InvokeAI.

@@ -40,13 +45,9 @@ The nodes linked below have been developed and contributed by members of the Inv

**Output Examples**

-![Invoke AI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png)
-
-### Ideal Size
-
-**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
-
-**Node Link:** https://github.com/JPPhoto/ideal-size-node
+![Example Image](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png){: style="height:115px;width:240px"}

## Help
If you run into any issues with a node, please post in the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).
flake.lock (new file, 25 lines)

@@ -0,0 +1,25 @@
{
  "nodes": {
    "nixpkgs": {
      "locked": {
        "lastModified": 1690630721,
        "narHash": "sha256-Y04onHyBQT4Erfr2fc82dbJTfXGYrf4V0ysLUYnPOP8=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "d2b52322f35597c62abf56de91b0236746b2a03d",
        "type": "github"
      },
      "original": {
        "id": "nixpkgs",
        "type": "indirect"
      }
    },
    "root": {
      "inputs": {
        "nixpkgs": "nixpkgs"
      }
    }
  },
  "root": "root",
  "version": 7
}
flake.nix (new file, 81 lines)

@@ -0,0 +1,81 @@
# Important note: this flake does not attempt to create a fully isolated, 'pure'
# Python environment for InvokeAI. Instead, it depends on local invocations of
# virtualenv/pip to install the required (binary) packages, most importantly the
# prebuilt binary pytorch packages with CUDA support.
# ML Python packages with CUDA support, like pytorch, are notoriously expensive
# to compile, so that is purposefully not what this flake does.

{
  description = "An (impure) flake to develop on InvokeAI.";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs {
        inherit system;
        config.allowUnfree = true;
      };

      python = pkgs.python310;

      mkShell = { dir, install }:
        let
          setupScript = pkgs.writeScript "setup-invokai" ''
            # This must be sourced using 'source', not executed.
            ${python}/bin/python -m venv ${dir}
            ${dir}/bin/python -m pip install ${install}
            # ${dir}/bin/python -c 'import torch; assert(torch.cuda.is_available())'
            source ${dir}/bin/activate
          '';
        in
        pkgs.mkShell rec {
          buildInputs = with pkgs; [
            # Backend: graphics, CUDA.
            cudaPackages.cudnn
            cudaPackages.cuda_nvrtc
            cudatoolkit
            freeglut
            glib
            gperf
            procps
            libGL
            libGLU
            linuxPackages.nvidia_x11
            python
            stdenv.cc
            stdenv.cc.cc.lib
            xorg.libX11
            xorg.libXext
            xorg.libXi
            xorg.libXmu
            xorg.libXrandr
            xorg.libXv
            zlib

            # Pre-commit hooks.
            black

            # Frontend.
            yarn
            nodejs
          ];
          LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath buildInputs;
          CUDA_PATH = pkgs.cudatoolkit;
          EXTRA_LDFLAGS = "-L${pkgs.linuxPackages.nvidia_x11}/lib";
          shellHook = ''
            if [[ -f "${dir}/bin/activate" ]]; then
              source "${dir}/bin/activate"
              echo "Using Python: $(which python)"
            else
              echo "Use 'source ${setupScript}' to set up the environment."
            fi
          '';
        };
    in
    {
      devShells.${system} = rec {
        develop = mkShell { dir = "venv"; install = "-e '.[xformers]' --extra-index-url https://download.pytorch.org/whl/cu118"; };
        default = develop;
      };
    };
}
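
The commit itself doesn't document how to consume the flake; presumably (an assumption based on the `devShells` output above, not stated anywhere in this diff) the standard flakes workflow applies:

```
# Enter the dev shell defined by this flake (requires Nix with flakes enabled).
nix develop
# On first entry the shellHook prints a 'source <setup-script>' hint; sourcing
# that script creates the venv and pip-installs InvokeAI with CUDA wheels.
```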
@@ -13,7 +13,7 @@ from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Union

-SUPPORTED_PYTHON = ">=3.9.0,<3.11"
+SUPPORTED_PYTHON = ">=3.9.0,<=3.11.100"
INSTALLER_REQS = ["rich", "semver", "requests", "plumbum", "prompt-toolkit"]
BOOTSTRAP_VENV_PREFIX = "invokeai-installer-tmp"

@@ -149,7 +149,7 @@ class Installer:
        return venv_dir

    def install(
-        self, root: str = "~/invokeai-3", version: str = "latest", yes_to_all=False, find_links: Path = None
+        self, root: str = "~/invokeai", version: str = "latest", yes_to_all=False, find_links: Path = None
    ) -> None:
        """
        Install the InvokeAI application into the given runtime path

@@ -168,7 +168,8 @@ class Installer:

        messages.welcome()

-        self.dest = Path(root).expanduser().resolve() if yes_to_all else messages.dest_path(root)
+        default_path = os.environ.get("INVOKEAI_ROOT") or Path(root).expanduser().resolve()
+        self.dest = default_path if yes_to_all else messages.dest_path(root)

        # create the venv for the app
        self.venv = self.app_venv()

@@ -248,6 +249,9 @@ class InvokeAiInstance:
        pip[
            "install",
            "--require-virtualenv",
+            "numpy~=1.24.0",  # choose versions that won't be uninstalled during phase 2
+            "urllib3~=1.26.0",
+            "requests~=2.28.0",
            "torch~=2.0.0",
            "torchmetrics==0.11.4",
            "torchvision>=0.14.1",

@@ -451,7 +455,7 @@ def get_torch_source() -> (Union[str, None], str):
    device = graphical_accelerator()

    url = None
-    optional_modules = None
+    optional_modules = "[onnx]"
    if OS == "Linux":
        if device == "rocm":
            url = "https://download.pytorch.org/whl/rocm5.4.2"

@@ -460,7 +464,10 @@ def get_torch_source() -> (Union[str, None], str):

    if device == "cuda":
        url = "https://download.pytorch.org/whl/cu117"
-        optional_modules = "[xformers]"
+        optional_modules = "[xformers,onnx-cuda]"
+    if device == "cuda_and_dml":
+        url = "https://download.pytorch.org/whl/cu117"
+        optional_modules = "[xformers,onnx-directml]"

    # in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13
@@ -3,6 +3,7 @@ InvokeAI Installer
"""

import argparse
+import os
from pathlib import Path
from installer import Installer

@@ -15,7 +16,7 @@ if __name__ == "__main__":
        dest="root",
        type=str,
        help="Destination path for installation",
-        default="~/invokeai",
+        default=os.environ.get("INVOKEAI_ROOT") or "~/invokeai",
    )
    parser.add_argument(
        "-y",
@@ -167,6 +167,10 @@ def graphical_accelerator():
        "an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
        "cuda",
    )
+    nvidia_with_dml = (
+        "an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA",
+        "cuda_and_dml",
+    )
    amd = (
        "an [gold1 b]AMD[/] GPU (using ROCm™)",
        "rocm",

@@ -181,7 +185,7 @@ def graphical_accelerator():
    )

    if OS == "Windows":
-        options = [nvidia, cpu]
+        options = [nvidia, nvidia_with_dml, cpu]
    if OS == "Linux":
        options = [nvidia, amd, cpu]
    elif OS == "Darwin":
@@ -41,7 +41,7 @@ IF /I "%choice%" == "1" (
    python .venv\Scripts\invokeai-configure.exe --skip-sd-weight --skip-support-models
) ELSE IF /I "%choice%" == "7" (
    echo Running invokeai-configure...
-    python .venv\Scripts\invokeai-configure.exe --yes --default_only
+    python .venv\Scripts\invokeai-configure.exe --yes --skip-sd-weight
) ELSE IF /I "%choice%" == "8" (
    echo Developer Console
    echo Python command is:
@@ -82,7 +82,7 @@ do_choice() {
        7)
            clear
            printf "Re-run the configure script to fix a broken install or to complete a major upgrade\n"
-            invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only
+            invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only --skip-sd-weights
            ;;
        8)
            clear
@@ -1,5 +1,6 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)

+from typing import Optional
from logging import Logger
import os
from invokeai.app.services.board_image_record_storage import (

@@ -54,7 +55,7 @@ logger = InvokeAILogger.getLogger()
class ApiDependencies:
    """Contains and initializes all dependencies for the API"""

-    invoker: Invoker = None
+    invoker: Optional[Invoker] = None

    @staticmethod
    def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger):
@@ -1,6 +1,14 @@
from typing import Literal, Optional, Union, List, Annotated
from pydantic import BaseModel, Field
import re

from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationContext, InvocationConfig
from .model import ClipField

from ...backend.util.devices import torch_dtype
from ...backend.stable_diffusion.diffusion import InvokeAIDiffuserComponent
from ...backend.model_management import BaseModelType, ModelType, SubModelType, ModelPatcher

import torch
from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
@@ -24,6 +24,7 @@ from ...backend.stable_diffusion.diffusers_pipeline import (
)
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
+from ...backend.model_management import ModelPatcher
from ...backend.util.devices import choose_torch_device, torch_dtype, choose_precision
from ..models.image import ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationConfig, InvocationContext
@@ -53,6 +53,7 @@ class MainModelField(BaseModel):

    model_name: str = Field(description="Name of the model")
    base_model: BaseModelType = Field(description="Base model")
+    model_type: ModelType = Field(description="Model Type")


class LoRAModelField(BaseModel):
invokeai/app/invocations/onnx.py (new file, 578 lines)

@@ -0,0 +1,578 @@
# Copyright (c) 2023 Borisov Sergey (https://github.com/StAlKeR7779)

from contextlib import ExitStack
from typing import List, Literal, Optional, Union

import re
import inspect

from pydantic import BaseModel, Field, validator
import torch
import numpy as np
from diffusers import ControlNetModel, DPMSolverMultistepScheduler
from diffusers.image_processor import VaeImageProcessor
from diffusers.schedulers import SchedulerMixin as Scheduler

from ..models.image import ImageCategory, ImageField, ResourceOrigin
from ...backend.model_management import ONNXModelPatcher
from ...backend.util import choose_torch_device
from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationConfig, InvocationContext
from .compel import ConditioningField
from .controlnet_image_processors import ControlField
from .image import ImageOutput
from .model import ModelInfo, UNetField, VaeField

from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.backend import BaseModelType, ModelType, SubModelType
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from ...backend.stable_diffusion import PipelineIntermediateState

from tqdm import tqdm
from .model import ClipField
from .latent import LatentsField, LatentsOutput, build_latents_output, get_scheduler, SAMPLER_NAME_VALUES
from .compel import CompelOutput


ORT_TO_NP_TYPE = {
    "tensor(bool)": np.bool_,
    "tensor(int8)": np.int8,
    "tensor(uint8)": np.uint8,
    "tensor(int16)": np.int16,
    "tensor(uint16)": np.uint16,
    "tensor(int32)": np.int32,
    "tensor(uint32)": np.uint32,
    "tensor(int64)": np.int64,
    "tensor(uint64)": np.uint64,
    "tensor(float16)": np.float16,
    "tensor(float)": np.float32,
    "tensor(double)": np.float64,
}

PRECISION_VALUES = Literal[tuple(list(ORT_TO_NP_TYPE.keys()))]


class ONNXPromptInvocation(BaseInvocation):
    type: Literal["prompt_onnx"] = "prompt_onnx"

    prompt: str = Field(default="", description="Prompt")
    clip: ClipField = Field(None, description="Clip to use")

    def invoke(self, context: InvocationContext) -> CompelOutput:
        tokenizer_info = context.services.model_manager.get_model(
            **self.clip.tokenizer.dict(),
        )
        text_encoder_info = context.services.model_manager.get_model(
            **self.clip.text_encoder.dict(),
        )
        with tokenizer_info as orig_tokenizer, text_encoder_info as text_encoder, ExitStack() as stack:
            # loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.clip.loras]
            loras = [
                (context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight)
                for lora in self.clip.loras
            ]

            ti_list = []
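            # Textual inversion triggers are embedding names written into the prompt in
            # angle brackets, e.g. "<my-embedding>"; each match is looked up in the model
            # manager below and skipped with a warning if it isn't installed.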
            for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
                name = trigger[1:-1]
                try:
                    ti_list.append(
                        # stack.enter_context(
                        #     context.services.model_manager.get_model(
                        #         model_name=name,
                        #         base_model=self.clip.text_encoder.base_model,
                        #         model_type=ModelType.TextualInversion,
                        #     )
                        # )
                        context.services.model_manager.get_model(
                            model_name=name,
                            base_model=self.clip.text_encoder.base_model,
                            model_type=ModelType.TextualInversion,
                        ).context.model
                    )
                except Exception:
                    # print(e)
                    # import traceback
                    # print(traceback.format_exc())
                    print(f'Warn: trigger: "{trigger}" not found')
            if loras or ti_list:
                text_encoder.release_session()
            with ONNXModelPatcher.apply_lora_text_encoder(text_encoder, loras), ONNXModelPatcher.apply_ti(
                orig_tokenizer, text_encoder, ti_list
            ) as (tokenizer, ti_manager):
                text_encoder.create_session()

                # copy from
                # https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L153
                text_inputs = tokenizer(
                    self.prompt,
                    padding="max_length",
                    max_length=tokenizer.model_max_length,
                    truncation=True,
                    return_tensors="np",
                )
                text_input_ids = text_inputs.input_ids
                """
                untruncated_ids = tokenizer(prompt, padding="max_length", return_tensors="np").input_ids

                if not np.array_equal(text_input_ids, untruncated_ids):
                    removed_text = self.tokenizer.batch_decode(
                        untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
                    )
                    logger.warning(
                        "The following part of your input was truncated because CLIP can only handle sequences up to"
                        f" {self.tokenizer.model_max_length} tokens: {removed_text}"
                    )
                """

                prompt_embeds = text_encoder(input_ids=text_input_ids.astype(np.int32))[0]

        conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"

        # TODO: hacky but works ;D maybe rename latents somehow?
        context.services.latents.save(conditioning_name, (prompt_embeds, None))

        return CompelOutput(
            conditioning=ConditioningField(
                conditioning_name=conditioning_name,
            ),
        )


# Text to image
class ONNXTextToLatentsInvocation(BaseInvocation):
    """Generates latents from conditionings."""

    type: Literal["t2l_onnx"] = "t2l_onnx"

    # Inputs
    # fmt: off
    positive_conditioning: Optional[ConditioningField] = Field(description="Positive conditioning for generation")
    negative_conditioning: Optional[ConditioningField] = Field(description="Negative conditioning for generation")
    noise: Optional[LatentsField] = Field(description="The noise to use")
    steps: int = Field(default=10, gt=0, description="The number of steps to use to generate the image")
    cfg_scale: Union[float, List[float]] = Field(default=7.5, ge=1, description="The Classifier-Free Guidance, higher values may result in a result closer to the prompt", )
    scheduler: SAMPLER_NAME_VALUES = Field(default="euler", description="The scheduler to use" )
    precision: PRECISION_VALUES = Field(default = "tensor(float16)", description="The precision to use when generating latents")
    unet: UNetField = Field(default=None, description="UNet submodel")
    control: Union[ControlField, list[ControlField]] = Field(default=None, description="The control to use")
    # seamless: bool = Field(default=False, description="Whether or not to generate an image that can tile without seams", )
    # seamless_axes: str = Field(default="", description="The axes to tile the image on, 'x' and/or 'y'")
    # fmt: on

    @validator("cfg_scale")
    def ge_one(cls, v):
        """validate that all cfg_scale values are >= 1"""
        if isinstance(v, list):
            for i in v:
                if i < 1:
                    raise ValueError("cfg_scale must be greater than 1")
        else:
            if v < 1:
                raise ValueError("cfg_scale must be greater than 1")
        return v

    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["latents"],
                "type_hints": {
                    "model": "model",
                    "control": "control",
                    # "cfg_scale": "float",
                    "cfg_scale": "number",
                },
            },
        }

    # based on
    # https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L375
    def invoke(self, context: InvocationContext) -> LatentsOutput:
        c, _ = context.services.latents.get(self.positive_conditioning.conditioning_name)
        uc, _ = context.services.latents.get(self.negative_conditioning.conditioning_name)
        graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
        source_node_id = graph_execution_state.prepared_source_mapping[self.id]
        if isinstance(c, torch.Tensor):
            c = c.cpu().numpy()
        if isinstance(uc, torch.Tensor):
            uc = uc.cpu().numpy()
        device = torch.device(choose_torch_device())
        prompt_embeds = np.concatenate([uc, c])

        latents = context.services.latents.get(self.noise.latents_name)
        if isinstance(latents, torch.Tensor):
            latents = latents.cpu().numpy()

        # TODO: better execution device handling
        latents = latents.astype(ORT_TO_NP_TYPE[self.precision])

        # get the initial random noise unless the user supplied it
        do_classifier_free_guidance = True
        # latents_dtype = prompt_embeds.dtype
        # latents_shape = (batch_size * num_images_per_prompt, 4, height // 8, width // 8)
        # if latents.shape != latents_shape:
        #     raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")

        scheduler = get_scheduler(
            context=context,
            scheduler_info=self.unet.scheduler,
            scheduler_name=self.scheduler,
        )

        def torch2numpy(latent: torch.Tensor):
            return latent.cpu().numpy()

        def numpy2torch(latent, device):
            return torch.from_numpy(latent).to(device)

        def dispatch_progress(
            self, context: InvocationContext, source_node_id: str, intermediate_state: PipelineIntermediateState
        ) -> None:
            stable_diffusion_step_callback(
                context=context,
                intermediate_state=intermediate_state,
                node=self.dict(),
                source_node_id=source_node_id,
            )

        scheduler.set_timesteps(self.steps)
        latents = latents * np.float64(scheduler.init_noise_sigma)

        extra_step_kwargs = dict()
        if "eta" in set(inspect.signature(scheduler.step).parameters.keys()):
            extra_step_kwargs.update(
                eta=0.0,
            )

        unet_info = context.services.model_manager.get_model(**self.unet.unet.dict())

        with unet_info as unet, ExitStack() as stack:
            # loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
            loras = [
                (context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight)
                for lora in self.unet.loras
            ]

            if loras:
                unet.release_session()
            with ONNXModelPatcher.apply_lora_unet(unet, loras):
                # TODO:
                _, _, h, w = latents.shape
                unet.create_session(h, w)

                timestep_dtype = next(
                    (input.type for input in unet.session.get_inputs() if input.name == "timestep"), "tensor(float16)"
                )
                timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype]
                for i in tqdm(range(len(scheduler.timesteps))):
                    t = scheduler.timesteps[i]
                    # expand the latents if we are doing classifier free guidance
                    latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
                    latent_model_input = scheduler.scale_model_input(numpy2torch(latent_model_input, device), t)
                    latent_model_input = latent_model_input.cpu().numpy()

                    # predict the noise residual
                    timestep = np.array([t], dtype=timestep_dtype)
                    noise_pred = unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds)
                    noise_pred = noise_pred[0]

                    # perform guidance
                    if do_classifier_free_guidance:
                        noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
                        noise_pred = noise_pred_uncond + self.cfg_scale * (noise_pred_text - noise_pred_uncond)
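                        # Standard classifier-free guidance:
                        #   eps = eps_uncond + cfg_scale * (eps_text - eps_uncond)
                        # Higher cfg_scale pulls the sample toward the text conditioning.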

                    # compute the previous noisy sample x_t -> x_t-1
                    scheduler_output = scheduler.step(
                        numpy2torch(noise_pred, device), t, numpy2torch(latents, device), **extra_step_kwargs
                    )
                    latents = torch2numpy(scheduler_output.prev_sample)

                    state = PipelineIntermediateState(
                        run_id="test", step=i, timestep=timestep, latents=scheduler_output.prev_sample
                    )
                    dispatch_progress(self, context=context, source_node_id=source_node_id, intermediate_state=state)

                    # call the callback, if provided
                    # if callback is not None and i % callback_steps == 0:
                    #     callback(i, t, latents)

        torch.cuda.empty_cache()

        name = f"{context.graph_execution_state_id}__{self.id}"
        context.services.latents.save(name, latents)
        return build_latents_output(latents_name=name, latents=torch.from_numpy(latents))


# Latent to image
class ONNXLatentsToImageInvocation(BaseInvocation):
    """Generates an image from latents."""

    type: Literal["l2i_onnx"] = "l2i_onnx"

    # Inputs
    latents: Optional[LatentsField] = Field(description="The latents to generate an image from")
    vae: VaeField = Field(default=None, description="Vae submodel")
    metadata: Optional[CoreMetadata] = Field(
        default=None, description="Optional core metadata to be written to the image"
    )
    # tiled: bool = Field(default=False, description="Decode latents by overlapping tiles (less memory consumption)")

    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["latents", "image"],
            },
        }

    def invoke(self, context: InvocationContext) -> ImageOutput:
        latents = context.services.latents.get(self.latents.latents_name)

        if self.vae.vae.submodel != SubModelType.VaeDecoder:
            raise Exception(f"Expected vae_decoder, found: {self.vae.vae.model_type}")

        vae_info = context.services.model_manager.get_model(
            **self.vae.vae.dict(),
        )

        # clear memory as vae decode can request a lot
        torch.cuda.empty_cache()

        with vae_info as vae:
            vae.create_session()

            # copied from
            # https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L427
            latents = 1 / 0.18215 * latents
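            # 0.18215 is Stable Diffusion's VAE latent scaling factor; multiplying by
            # 1 / 0.18215 restores the latents to the scale the VAE decoder expects.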
            # image = self.vae_decoder(latent_sample=latents)[0]
            # it seems like there is a strange result when using the half-precision vae decoder if batchsize>1
            image = np.concatenate([vae(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])])

            image = np.clip(image / 2 + 0.5, 0, 1)
            image = image.transpose((0, 2, 3, 1))
            image = VaeImageProcessor.numpy_to_pil(image)[0]

        torch.cuda.empty_cache()

        image_dto = context.services.images.create(
            image=image,
            image_origin=ResourceOrigin.INTERNAL,
            image_category=ImageCategory.GENERAL,
            node_id=self.id,
            session_id=context.graph_execution_state_id,
            is_intermediate=self.is_intermediate,
            metadata=self.metadata.dict() if self.metadata else None,
        )

        return ImageOutput(
            image=ImageField(image_name=image_dto.image_name),
            width=image_dto.width,
            height=image_dto.height,
        )


class ONNXModelLoaderOutput(BaseInvocationOutput):
    """Model loader output"""

    # fmt: off
    type: Literal["model_loader_output_onnx"] = "model_loader_output_onnx"

    unet: UNetField = Field(default=None, description="UNet submodel")
    clip: ClipField = Field(default=None, description="Tokenizer and text_encoder submodels")
    vae_decoder: VaeField = Field(default=None, description="Vae submodel")
    vae_encoder: VaeField = Field(default=None, description="Vae submodel")
    # fmt: on


class ONNXSD1ModelLoaderInvocation(BaseInvocation):
    """Loading submodels of selected model."""

    type: Literal["sd1_model_loader_onnx"] = "sd1_model_loader_onnx"

    model_name: str = Field(default="", description="Model to load")
    # TODO: precision?

    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {"tags": ["model", "loader"], "type_hints": {"model_name": "model"}},  # TODO: rename to model_name?
        }

    def invoke(self, context: InvocationContext) -> ONNXModelLoaderOutput:
        model_name = "stable-diffusion-v1-5"
        base_model = BaseModelType.StableDiffusion1

        # TODO: not found exceptions
        if not context.services.model_manager.model_exists(
            model_name=model_name,
            base_model=BaseModelType.StableDiffusion1,
            model_type=ModelType.ONNX,
        ):
            raise Exception(f"Unknown model name: {model_name}!")

        return ONNXModelLoaderOutput(
            unet=UNetField(
                unet=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.UNet,
                ),
                scheduler=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.Scheduler,
                ),
                loras=[],
            ),
            clip=ClipField(
                tokenizer=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.Tokenizer,
                ),
                text_encoder=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.TextEncoder,
                ),
                loras=[],
            ),
            vae_decoder=VaeField(
                vae=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.VaeDecoder,
                ),
            ),
            vae_encoder=VaeField(
                vae=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=ModelType.ONNX,
                    submodel=SubModelType.VaeEncoder,
                ),
            ),
        )


class OnnxModelField(BaseModel):
    """Onnx model field"""

    model_name: str = Field(description="Name of the model")
    base_model: BaseModelType = Field(description="Base model")
    model_type: ModelType = Field(description="Model Type")


class OnnxModelLoaderInvocation(BaseInvocation):
    """Loads a main model, outputting its submodels."""

    type: Literal["onnx_model_loader"] = "onnx_model_loader"

    model: OnnxModelField = Field(description="The model to load")

    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "title": "Onnx Model Loader",
                "tags": ["model", "loader"],
                "type_hints": {"model": "model"},
            },
        }

    def invoke(self, context: InvocationContext) -> ONNXModelLoaderOutput:
        base_model = self.model.base_model
        model_name = self.model.model_name
        model_type = ModelType.ONNX

        # TODO: not found exceptions
        if not context.services.model_manager.model_exists(
            model_name=model_name,
            base_model=base_model,
            model_type=model_type,
        ):
            raise Exception(f"Unknown {base_model} {model_type} model: {model_name}")

        """
        if not context.services.model_manager.model_exists(
            model_name=self.model_name,
            model_type=SDModelType.Diffusers,
            submodel=SDModelType.Tokenizer,
        ):
            raise Exception(
                f"Failed to find tokenizer submodel in {self.model_name}! Check if model corrupted"
            )

        if not context.services.model_manager.model_exists(
            model_name=self.model_name,
            model_type=SDModelType.Diffusers,
            submodel=SDModelType.TextEncoder,
        ):
            raise Exception(
                f"Failed to find text_encoder submodel in {self.model_name}! Check if model corrupted"
            )

        if not context.services.model_manager.model_exists(
            model_name=self.model_name,
            model_type=SDModelType.Diffusers,
            submodel=SDModelType.UNet,
        ):
            raise Exception(
                f"Failed to find unet submodel from {self.model_name}! Check if model corrupted"
            )
        """

        return ONNXModelLoaderOutput(
            unet=UNetField(
                unet=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=model_type,
                    submodel=SubModelType.UNet,
                ),
                scheduler=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=model_type,
                    submodel=SubModelType.Scheduler,
                ),
                loras=[],
            ),
            clip=ClipField(
                tokenizer=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=model_type,
                    submodel=SubModelType.Tokenizer,
                ),
                text_encoder=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=model_type,
                    submodel=SubModelType.TextEncoder,
                ),
                loras=[],
                skipped_layers=0,
            ),
            vae_decoder=VaeField(
                vae=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=model_type,
                    submodel=SubModelType.VaeDecoder,
                ),
            ),
            vae_encoder=VaeField(
                vae=ModelInfo(
                    model_name=model_name,
                    base_model=base_model,
                    model_type=model_type,
                    submodel=SubModelType.VaeEncoder,
                ),
            ),
        )
@@ -4,6 +4,8 @@ from typing import Literal

from pydantic import Field

+from invokeai.app.invocations.prompt import PromptOutput
+
from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationConfig, InvocationContext
from .math import FloatOutput, IntOutput
@ -64,3 +66,18 @@ class ParamStringInvocation(BaseInvocation):
|
||||
|
||||
def invoke(self, context: InvocationContext) -> StringOutput:
|
||||
return StringOutput(text=self.text)
|
||||
|
||||
|
||||
class ParamPromptInvocation(BaseInvocation):
|
||||
"""A prompt input parameter"""
|
||||
|
||||
type: Literal["param_prompt"] = "param_prompt"
|
||||
prompt: str = Field(default="", description="The prompt value")
|
||||
|
||||
class Config(InvocationConfig):
|
||||
schema_extra = {
|
||||
"ui": {"tags": ["param", "prompt"], "title": "Prompt"},
|
||||
}
|
||||
|
||||
def invoke(self, context: InvocationContext) -> PromptOutput:
|
||||
return PromptOutput(prompt=self.prompt)
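The new node follows the standard invocation shape: a `Literal` type tag, pydantic `Field` declarations, and an `invoke()` that returns a typed output. A stripped-down sketch of the same pattern using pydantic directly (not InvokeAI's `BaseInvocation`):

```python
# Sketch of the invocation pattern; class names are illustrative.
from typing import Literal

from pydantic import BaseModel, Field


class PromptOut(BaseModel):
    type: Literal["prompt_output"] = "prompt_output"
    prompt: str


class ParamPrompt(BaseModel):
    type: Literal["param_prompt"] = "param_prompt"
    prompt: str = Field(default="", description="The prompt value")

    def invoke(self) -> PromptOut:
        return PromptOut(prompt=self.prompt)


print(ParamPrompt(prompt="a sunlit meadow").invoke().prompt)
```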
@ -171,7 +171,6 @@ from pydantic import BaseSettings, Field, parse_obj_as
from typing import ClassVar, Dict, List, Set, Literal, Union, get_origin, get_type_hints, get_args

INIT_FILE = Path("invokeai.yaml")
MODEL_CORE = Path("models/core")
DB_FILE = Path("invokeai.db")
LEGACY_INIT_FILE = Path("invokeai.init")

@ -275,7 +274,7 @@ class InvokeAISettings(BaseSettings):
    @classmethod
    def _excluded(self) -> List[str]:
        # internal fields that shouldn't be exposed as command line options
        return ["type", "initconf"]
        return ["type", "initconf", "cached_root"]

    @classmethod
    def _excluded_from_yaml(self) -> List[str]:
@ -291,6 +290,7 @@ class InvokeAISettings(BaseSettings):
            "restore",
            "root",
            "nsfw_checker",
            "cached_root",
        ]

    class Config:
@ -357,7 +357,7 @@ def _find_root() -> Path:
    venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
    if os.environ.get("INVOKEAI_ROOT"):
        root = Path(os.environ.get("INVOKEAI_ROOT")).resolve()
    elif any([(venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE, MODEL_CORE]]):
    elif any([(venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]]):
        root = (venv.parent).resolve()
    else:
        root = Path("~/invokeai").expanduser().resolve()
@ -424,6 +424,7 @@ class InvokeAIAppConfig(InvokeAISettings):
    log_level : Literal[tuple(["debug","info","warning","error","critical"])] = Field(default="info", description="Emit logging messages at this level or higher", category="Logging")

    version : bool = Field(default=False, description="Show InvokeAI version and exit", category="Other")
    cached_root : Path = Field(default=None, description="internal use only", category="DEPRECATED")
    # fmt: on

    def parse_args(self, argv: List[str] = None, conf: DictConfig = None, clobber=False):
@ -471,10 +472,15 @@ class InvokeAIAppConfig(InvokeAISettings):
        """
        Path to the runtime root directory
        """
        if self.root:
            return Path(self.root).expanduser().absolute()
        # we cache value of root to protect against it being '.' and the cwd changing
        if self.cached_root:
            root = self.cached_root
        elif self.root:
            root = Path(self.root).expanduser().absolute()
        else:
            return self.find_root()
            root = self.find_root()
        self.cached_root = root
        return self.cached_root

    @property
    def root_dir(self) -> Path:
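The `cached_root` change guards against a relative root such as `.`: a relative path resolves against the *current* working directory, so resolving it again after a `chdir` yields a different location. A stdlib-only demonstration of the hazard:

```python
# Resolving a relative root twice around os.chdir() gives two answers;
# caching the first resolution pins the root for the life of the process.
import os
import tempfile
from pathlib import Path

root_setting = "."  # a relative --root value
start = Path.cwd()
first = Path(root_setting).expanduser().absolute()

with tempfile.TemporaryDirectory() as tmp:
    os.chdir(tmp)
    second = Path(root_setting).expanduser().absolute()
    os.chdir(start)

print(first == second)  # False - hence caching the first resolution
```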
@ -181,7 +181,7 @@ def download_with_progress_bar(model_url: str, model_dest: str, label: str = "th


def download_conversion_models():
    target_dir = config.root_path / "models/core/convert"
    target_dir = config.models_path / "core/convert"
    kwargs = dict()  # for future use
    try:
        logger.info("Downloading core tokenizers and text encoders")
@ -7,11 +7,12 @@ import warnings
from dataclasses import dataclass, field
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import List, Dict, Callable, Union, Set
from typing import Optional, List, Dict, Callable, Union, Set

import requests
from diffusers import DiffusionPipeline
from diffusers import logging as dlogging
import onnx
from huggingface_hub import hf_hub_url, HfFolder, HfApi
from omegaconf import OmegaConf
from tqdm import tqdm
@ -86,8 +87,8 @@ class ModelLoadInfo:
    name: str
    model_type: ModelType
    base_type: BaseModelType
    path: Path = None
    repo_id: str = None
    path: Optional[Path] = None
    repo_id: Optional[str] = None
    description: str = ""
    installed: bool = False
    recommended: bool = False
@ -128,7 +129,9 @@ class ModelInstall(object):
            model_dict[key] = ModelLoadInfo(**value)

        # supplement with entries in models.yaml
        installed_models = self.mgr.list_models()
        installed_models = [x for x in self.mgr.list_models()]
        # suppresses autoloaded models
        # installed_models = [x for x in self.mgr.list_models() if not self._is_autoloaded(x)]

        for md in installed_models:
            base = md["base_model"]
@ -147,6 +150,17 @@ class ModelInstall(object):
        )
        return {x: model_dict[x] for x in sorted(model_dict.keys(), key=lambda y: model_dict[y].name.lower())}

    def _is_autoloaded(self, model_info: dict) -> bool:
        path = model_info.get("path")
        if not path:
            return False
        for autodir in ["autoimport_dir", "lora_dir", "embedding_dir", "controlnet_dir"]:
            if autodir_path := getattr(self.config, autodir):
                autodir_path = self.config.root_path / autodir_path
                if Path(path).is_relative_to(autodir_path):
                    return True
        return False

    def list_models(self, model_type):
        installed = self.mgr.list_models(model_type=model_type)
        print(f"Installed models of type `{model_type}`:")
@ -273,6 +287,7 @@ class ModelInstall(object):
            logger.error(f"Unable to download {url}. Skipping.")
        info = ModelProbe().heuristic_probe(location)
        dest = self.config.models_path / info.base_type.value / info.model_type.value / location.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        models_path = shutil.move(location, dest)

        # staged version will be garbage-collected at this time
@ -288,8 +303,10 @@ class ModelInstall(object):

        with TemporaryDirectory(dir=self.config.models_path) as staging:
            staging = Path(staging)
            if "model_index.json" in files:
            if "model_index.json" in files and "unet/model.onnx" not in files:
                location = self._download_hf_pipeline(repo_id, staging)  # pipeline
            elif "unet/model.onnx" in files:
                location = self._download_hf_model(repo_id, files, staging)
            else:
                for suffix in ["safetensors", "bin"]:
                    if f"pytorch_lora_weights.{suffix}" in files:
@ -346,7 +363,7 @@ class ModelInstall(object):
        if key in self.datasets:
            description = self.datasets[key].get("description") or description

        rel_path = self.relative_to_root(path)
        rel_path = self.relative_to_root(path, self.config.models_path)

        attributes = dict(
            path=str(rel_path),
@ -354,7 +371,7 @@ class ModelInstall(object):
            model_format=info.format,
        )
        legacy_conf = None
        if info.model_type == ModelType.Main:
        if info.model_type == ModelType.Main or info.model_type == ModelType.ONNX:
            attributes.update(
                dict(
                    variant=info.variant_type,
@ -386,8 +403,8 @@ class ModelInstall(object):
            attributes.update(dict(config=str(legacy_conf)))
        return attributes

    def relative_to_root(self, path: Path) -> Path:
        root = self.config.root_path
    def relative_to_root(self, path: Path, root: Optional[Path] = None) -> Path:
        root = root or self.config.root_path
        if path.is_relative_to(root):
            return path.relative_to(root)
        else:
@ -419,8 +436,13 @@ class ModelInstall(object):
        location = staging / name
        paths = list()
        for filename in files:
            filePath = Path(filename)
            p = hf_download_with_resume(
                repo_id, model_dir=location, model_name=filename, access_token=self.access_token
                repo_id,
                model_dir=location / filePath.parent,
                model_name=filePath.name,
                access_token=self.access_token,
                subfolder=filePath.parent,
            )
            if p:
                paths.append(p)
@ -468,11 +490,12 @@ def hf_download_with_resume(
    model_name: str,
    model_dest: Path = None,
    access_token: str = None,
    subfolder: str = None,
) -> Path:
    model_dest = model_dest or Path(os.path.join(model_dir, model_name))
    os.makedirs(model_dir, exist_ok=True)

    url = hf_hub_url(repo_id, model_name)
    url = hf_hub_url(repo_id, model_name, subfolder=subfolder)

    header = {"Authorization": f"Bearer {access_token}"} if access_token else {}
    open_mode = "wb"
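The `subfolder` plumbing lets nested repo files such as `unet/model.onnx` keep their directory structure: the path is split into parent and name, and `hf_hub_url` (the same helper used above) builds the subfolder-aware URL. A small sketch, where the repo id is hypothetical and no network access is needed just to build the URL:

```python
# Splitting a nested repo path the way _download_hf_model does above.
from pathlib import Path

from huggingface_hub import hf_hub_url

filename = "unet/model.onnx"  # as listed in the repo's file index
file_path = Path(filename)

url = hf_hub_url(
    "some-org/some-onnx-model",  # hypothetical repo_id
    file_path.name,
    subfolder=str(file_path.parent),
)
print(url)
```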
@ -3,6 +3,7 @@ Initialization file for invokeai.backend.model_management
"""
from .model_manager import ModelManager, ModelInfo, AddModelResult, SchedulerPredictionType
from .model_cache import ModelCache
from .lora import ModelPatcher, ONNXModelPatcher
from .models import (
    BaseModelType,
    ModelType,
@ -63,7 +63,7 @@ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionS
from diffusers.pipelines.stable_diffusion.stable_unclip_image_normalizer import StableUnCLIPImageNormalizer

from invokeai.backend.util.logging import InvokeAILogger
from invokeai.app.services.config import InvokeAIAppConfig, MODEL_CORE
from invokeai.app.services.config import InvokeAIAppConfig

from picklescan.scanner import scan_file_path
from .models import BaseModelType, ModelVariantType
@ -81,7 +81,7 @@ if is_accelerate_available():
    from accelerate.utils import set_module_tensor_to_device

logger = InvokeAILogger.getLogger(__name__)
CONVERT_MODEL_ROOT = InvokeAIAppConfig.get_config().root_path / MODEL_CORE / "convert"
CONVERT_MODEL_ROOT = InvokeAIAppConfig.get_config().models_path / "core/convert"


def shave_segments(path, n_shave_prefix_segments=1):
@ -1070,7 +1070,7 @@ def convert_controlnet_checkpoint(
    extract_ema,
    use_linear_projection=None,
    cross_attention_dim=None,
    precision: torch.dtype = torch.float32,
    precision: Optional[torch.dtype] = None,
):
    ctrlnet_config = create_unet_diffusers_config(original_config, image_size=image_size, controlnet=True)
    ctrlnet_config["upcast_attention"] = upcast_attention
@ -1111,7 +1111,6 @@ def convert_controlnet_checkpoint(
    return controlnet.to(precision)


# TO DO - PASS PRECISION
def download_from_original_stable_diffusion_ckpt(
    checkpoint_path: str,
    model_version: BaseModelType,
@ -1121,7 +1120,7 @@ def download_from_original_stable_diffusion_ckpt(
    prediction_type: str = None,
    model_type: str = None,
    extract_ema: bool = False,
    precision: torch.dtype = torch.float32,
    precision: Optional[torch.dtype] = None,
    scheduler_type: str = "pndm",
    num_in_channels: Optional[int] = None,
    upcast_attention: Optional[bool] = None,
@ -1194,6 +1193,8 @@ def download_from_original_stable_diffusion_ckpt(
            [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer)
            to use. If this parameter is `None`, the function will load a new instance of [CLIPTokenizer] by itself, if
            needed.
        precision (`torch.dtype`, *optional*, defaults to `None`):
            If not provided the precision will be set to the precision of the original file.
    return: A StableDiffusionPipeline object representing the passed-in `.ckpt`/`.safetensors` file.
    """

@ -1252,6 +1253,10 @@ def download_from_original_stable_diffusion_ckpt(

    logger.debug(f"model_type = {model_type}; original_config_file = {original_config_file}")

    precision_probing_key = "model.diffusion_model.input_blocks.0.0.bias"
    logger.debug(f"original checkpoint precision == {checkpoint[precision_probing_key].dtype}")
    precision = precision or checkpoint[precision_probing_key].dtype

    if original_config_file is None:
        key_name_v2_1 = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"
        key_name_sd_xl_base = "conditioner.embedders.1.model.transformer.resblocks.9.mlp.c_proj.bias"
@ -1279,9 +1284,12 @@ def download_from_original_stable_diffusion_ckpt(
        original_config_file = BytesIO(requests.get(config_url).content)

    original_config = OmegaConf.load(original_config_file)
    if original_config["model"]["params"].get("use_ema") is not None:
        extract_ema = original_config["model"]["params"]["use_ema"]

    if (
        model_version == BaseModelType.StableDiffusion2
        and original_config["model"]["params"]["parameterization"] == "v"
        and original_config["model"]["params"].get("parameterization") == "v"
    ):
        prediction_type = "v_prediction"
        upcast_attention = True
@ -1447,7 +1455,7 @@ def download_from_original_stable_diffusion_ckpt(
        if controlnet:
            pipe = pipeline_class(
                vae=vae.to(precision),
                text_encoder=text_model,
                text_encoder=text_model.to(precision),
                tokenizer=tokenizer,
                unet=unet.to(precision),
                scheduler=scheduler,
@ -1459,7 +1467,7 @@ def download_from_original_stable_diffusion_ckpt(
        else:
            pipe = pipeline_class(
                vae=vae.to(precision),
                text_encoder=text_model,
                text_encoder=text_model.to(precision),
                tokenizer=tokenizer,
                unet=unet.to(precision),
                scheduler=scheduler,
@ -1484,8 +1492,8 @@ def download_from_original_stable_diffusion_ckpt(
                image_noising_scheduler=image_noising_scheduler,
                # regular denoising components
                tokenizer=tokenizer,
                text_encoder=text_model,
                unet=unet,
                text_encoder=text_model.to(precision),
                unet=unet.to(precision),
                scheduler=scheduler,
                # vae
                vae=vae,
@ -1560,7 +1568,7 @@ def download_from_original_stable_diffusion_ckpt(
        if controlnet:
            pipe = pipeline_class(
                vae=vae.to(precision),
                text_encoder=text_model,
                text_encoder=text_model.to(precision),
                tokenizer=tokenizer,
                unet=unet.to(precision),
                controlnet=controlnet,
@ -1571,7 +1579,7 @@ def download_from_original_stable_diffusion_ckpt(
        else:
            pipe = pipeline_class(
                vae=vae.to(precision),
                text_encoder=text_model,
                text_encoder=text_model.to(precision),
                tokenizer=tokenizer,
                unet=unet.to(precision),
                scheduler=scheduler,
@ -1594,9 +1602,9 @@ def download_from_original_stable_diffusion_ckpt(

        pipe = StableDiffusionXLPipeline(
            vae=vae.to(precision),
            text_encoder=text_encoder,
            text_encoder=text_encoder.to(precision),
            tokenizer=tokenizer,
            text_encoder_2=text_encoder_2,
            text_encoder_2=text_encoder_2.to(precision),
            tokenizer_2=tokenizer_2,
            unet=unet.to(precision),
            scheduler=scheduler,
@ -1639,7 +1647,7 @@ def download_controlnet_from_original_ckpt(
    original_config_file: str,
    image_size: int = 512,
    extract_ema: bool = False,
    precision: torch.dtype = torch.float32,
    precision: Optional[torch.dtype] = None,
    num_in_channels: Optional[int] = None,
    upcast_attention: Optional[bool] = None,
    device: str = None,
@ -1680,6 +1688,12 @@ def download_controlnet_from_original_ckpt(
    while "state_dict" in checkpoint:
        checkpoint = checkpoint["state_dict"]

    # use original precision
    precision_probing_key = "input_blocks.0.0.bias"
    ckpt_precision = checkpoint[precision_probing_key].dtype
    logger.debug(f"original controlnet precision = {ckpt_precision}")
    precision = precision or ckpt_precision

    original_config = OmegaConf.load(original_config_file)

    if num_in_channels is not None:
@ -1699,7 +1713,7 @@ def download_controlnet_from_original_ckpt(
        cross_attention_dim=cross_attention_dim,
    )

    return controlnet
    return controlnet.to(precision)


def convert_ldm_vae_to_diffusers(checkpoint, vae_config: DictConfig, image_size: int) -> AutoencoderKL:
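The precision probe reads the dtype of one known parameter tensor and falls back to it whenever the caller did not request an explicit precision, so a checkpoint saved in fp16 stays fp16. A self-contained sketch using a toy state dict in place of a real checkpoint:

```python
# Probe a known bias tensor's dtype and use it as the default precision.
import torch

checkpoint = {
    "model.diffusion_model.input_blocks.0.0.bias": torch.zeros(4, dtype=torch.float16)
}

requested_precision = None  # e.g. a --precision flag that was not given
probing_key = "model.diffusion_model.input_blocks.0.0.bias"
precision = requested_precision or checkpoint[probing_key].dtype
print(precision)  # torch.float16 - the file's native precision wins
```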
@ -6,11 +6,22 @@ from typing import Optional, Dict, Tuple, Any, Union, List
from pathlib import Path

import torch
from safetensors.torch import load_file
from torch.utils.hooks import RemovableHandle

from diffusers.models import UNet2DConditionModel
from transformers import CLIPTextModel
from onnx import numpy_helper
from onnxruntime import OrtValue
import numpy as np

from compel.embeddings_provider import BaseTextualInversionManager
from diffusers.models import UNet2DConditionModel
from safetensors.torch import load_file
from transformers import CLIPTextModel, CLIPTokenizer

# TODO: rename and split this file


class LoRALayerBase:
    # rank: Optional[int]
@ -698,3 +709,186 @@ class TextualInversionManager(BaseTextualInversionManager):
            new_token_ids.extend(self.pad_tokens[token_id])

        return new_token_ids


class ONNXModelPatcher:
    from .models.base import IAIOnnxRuntimeModel, OnnxRuntimeModel

    @classmethod
    @contextmanager
    def apply_lora_unet(
        cls,
        unet: OnnxRuntimeModel,
        loras: List[Tuple[LoRAModel, float]],
    ):
        with cls.apply_lora(unet, loras, "lora_unet_"):
            yield

    @classmethod
    @contextmanager
    def apply_lora_text_encoder(
        cls,
        text_encoder: OnnxRuntimeModel,
        loras: List[Tuple[LoRAModel, float]],
    ):
        with cls.apply_lora(text_encoder, loras, "lora_te_"):
            yield

    # based on
    # https://github.com/ssube/onnx-web/blob/ca2e436f0623e18b4cfe8a0363fcfcf10508acf7/api/onnx_web/convert/diffusion/lora.py#L323
    @classmethod
    @contextmanager
    def apply_lora(
        cls,
        model: IAIOnnxRuntimeModel,
        loras: List[Tuple[LoRAModel, float]],
        prefix: str,
    ):
        from .models.base import IAIOnnxRuntimeModel

        if not isinstance(model, IAIOnnxRuntimeModel):
            raise Exception("Only IAIOnnxRuntimeModel models supported")

        orig_weights = dict()

        try:
            blended_loras = dict()

            for lora, lora_weight in loras:
                for layer_key, layer in lora.layers.items():
                    if not layer_key.startswith(prefix):
                        continue

                    layer.to(dtype=torch.float32)
                    layer_key = layer_key.replace(prefix, "")
                    layer_weight = layer.get_weight().detach().cpu().numpy() * lora_weight
                    # accumulate deltas for keys contributed by multiple LoRAs
                    if layer_key in blended_loras:
                        blended_loras[layer_key] += layer_weight
                    else:
                        blended_loras[layer_key] = layer_weight

            node_names = dict()
            for node in model.nodes.values():
                node_names[node.name.replace("/", "_").replace(".", "_").lstrip("_")] = node.name

            for layer_key, lora_weight in blended_loras.items():
                conv_key = layer_key + "_Conv"
                gemm_key = layer_key + "_Gemm"
                matmul_key = layer_key + "_MatMul"

                if conv_key in node_names or gemm_key in node_names:
                    if conv_key in node_names:
                        conv_node = model.nodes[node_names[conv_key]]
                    else:
                        conv_node = model.nodes[node_names[gemm_key]]

                    weight_name = [n for n in conv_node.input if ".weight" in n][0]
                    orig_weight = model.tensors[weight_name]

                    if orig_weight.shape[-2:] == (1, 1):
                        if lora_weight.shape[-2:] == (1, 1):
                            new_weight = orig_weight.squeeze((3, 2)) + lora_weight.squeeze((3, 2))
                        else:
                            new_weight = orig_weight.squeeze((3, 2)) + lora_weight

                        new_weight = np.expand_dims(new_weight, (2, 3))
                    else:
                        if orig_weight.shape != lora_weight.shape:
                            new_weight = orig_weight + lora_weight.reshape(orig_weight.shape)
                        else:
                            new_weight = orig_weight + lora_weight

                    orig_weights[weight_name] = orig_weight
                    model.tensors[weight_name] = new_weight.astype(orig_weight.dtype)

                elif matmul_key in node_names:
                    weight_node = model.nodes[node_names[matmul_key]]
                    matmul_name = [n for n in weight_node.input if "MatMul" in n][0]

                    orig_weight = model.tensors[matmul_name]
                    new_weight = orig_weight + lora_weight.transpose()

                    orig_weights[matmul_name] = orig_weight
                    model.tensors[matmul_name] = new_weight.astype(orig_weight.dtype)

                else:
                    # warn? err?
                    pass

            yield

        finally:
            # restore original weights
            for name, orig_weight in orig_weights.items():
                model.tensors[name] = orig_weight

    @classmethod
    @contextmanager
    def apply_ti(
        cls,
        tokenizer: CLIPTokenizer,
        text_encoder: IAIOnnxRuntimeModel,
        ti_list: List[Any],
    ) -> Tuple[CLIPTokenizer, TextualInversionManager]:
        from .models.base import IAIOnnxRuntimeModel

        if not isinstance(text_encoder, IAIOnnxRuntimeModel):
            raise Exception("Only IAIOnnxRuntimeModel models supported")

        orig_embeddings = None

        try:
            ti_tokenizer = copy.deepcopy(tokenizer)
            ti_manager = TextualInversionManager(ti_tokenizer)

            def _get_trigger(ti, index):
                trigger = ti.name
                if index > 0:
                    trigger += f"-!pad-{index}"
                return f"<{trigger}>"

            # modify tokenizer
            new_tokens_added = 0
            for ti in ti_list:
                for i in range(ti.embedding.shape[0]):
                    new_tokens_added += ti_tokenizer.add_tokens(_get_trigger(ti, i))

            # modify text_encoder
            orig_embeddings = text_encoder.tensors["text_model.embeddings.token_embedding.weight"]

            embeddings = np.concatenate(
                (np.copy(orig_embeddings), np.zeros((new_tokens_added, orig_embeddings.shape[1]))),
                axis=0,
            )

            for ti in ti_list:
                ti_tokens = []
                for i in range(ti.embedding.shape[0]):
                    embedding = ti.embedding[i].detach().numpy()
                    trigger = _get_trigger(ti, i)

                    token_id = ti_tokenizer.convert_tokens_to_ids(trigger)
                    if token_id == ti_tokenizer.unk_token_id:
                        raise RuntimeError(f"Unable to find token id for token '{trigger}'")

                    if embeddings[token_id].shape != embedding.shape:
                        raise ValueError(
                            f"Cannot load embedding for {trigger}. It was trained on a model with token dimension {embedding.shape[0]}, but the current model has token dimension {embeddings[token_id].shape[0]}."
                        )

                    embeddings[token_id] = embedding
                    ti_tokens.append(token_id)

                if len(ti_tokens) > 1:
                    ti_manager.pad_tokens[ti_tokens[0]] = ti_tokens[1:]

            text_encoder.tensors["text_model.embeddings.token_embedding.weight"] = embeddings.astype(
                orig_embeddings.dtype
            )

            yield ti_tokenizer, ti_manager

        finally:
            # restore
            if orig_embeddings is not None:
                text_encoder.tensors["text_model.embeddings.token_embedding.weight"] = orig_embeddings
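The arithmetic behind the `MatMul` branch above: each LoRA layer contributes `scale * (up @ down)`, and because a MatMul initializer is stored transposed relative to the torch weight layout, the delta is added transposed and then subtracted again on restore. A pure-numpy sketch of that patch/restore cycle:

```python
# Patch a MatMul weight with a scaled low-rank delta, then undo it,
# mirroring apply_lora's try/finally structure.
import numpy as np

rng = np.random.default_rng(0)
out_dim, in_dim, rank = 8, 16, 4

orig_weight = rng.standard_normal((in_dim, out_dim), dtype=np.float32)
up = rng.standard_normal((out_dim, rank), dtype=np.float32)
down = rng.standard_normal((rank, in_dim), dtype=np.float32)
lora_scale = 0.75

delta = (up @ down) * lora_scale           # (out_dim, in_dim), torch layout
patched = orig_weight + delta.transpose()  # matches orig_weight + lora_weight.transpose()

restored = patched - delta.transpose()     # the finally: block restores the original
print(np.allclose(restored, orig_weight))  # True
```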
@ -187,7 +187,9 @@ class ModelCache(object):
        # TODO: lock for no copies on simultaneous calls?
        cache_entry = self._cached_models.get(key, None)
        if cache_entry is None:
            self.logger.info(f"Loading model {model_path}, type {base_model}:{model_type}:{submodel}")
            self.logger.info(
                f"Loading model {model_path}, type {base_model.value}:{model_type.value}:{submodel.value if submodel else ''}"
            )

            # this will remove older cached models until
            # there is sufficient room to load the requested model
@ -358,7 +360,8 @@ class ModelCache(object):
            # 2 refs:
            # 1 from cache_entry
            # 1 from getrefcount function
            if not cache_entry.locked and refs <= 2:
            # 1 from onnx runtime object
            if not cache_entry.locked and refs <= (3 if "onnx" in model_key else 2):
                self.logger.debug(
                    f"Unloading model {model_key} to free {(model_size/GIG):.2f} GB (-{(cache_entry.size/GIG):.2f} GB)"
                )
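The eviction test leans on `sys.getrefcount` semantics: the function always reports one extra reference (its own argument), so a model held only by the cache shows a count of 2, and any live user pushes it higher. A stdlib-only demonstration:

```python
# Why the unload threshold is refcount-based.
import sys

cache_entry = object()
print(sys.getrefcount(cache_entry))  # 2: cache_entry + getrefcount's own argument

alias = cache_entry                  # an in-flight user of the model
print(sys.getrefcount(cache_entry))  # 3: eviction should be skipped
```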
@ -276,7 +276,7 @@ class ModelInfo:
    hash: str
    location: Union[Path, str]
    precision: torch.dtype
    _cache: ModelCache = None
    _cache: Optional[ModelCache] = None

    def __enter__(self):
        return self.context.__enter__()
@ -423,7 +423,7 @@ class ModelManager(object):
        return (model_name, base_model, model_type)

    def _get_model_cache_path(self, model_path):
        return self.app_config.models_path / ".cache" / hashlib.md5(str(model_path).encode()).hexdigest()
        return self.resolve_model_path(Path(".cache") / hashlib.md5(str(model_path).encode()).hexdigest())

    @classmethod
    def initialize_model_config(cls, config_path: Path):
@ -456,7 +456,7 @@ class ModelManager(object):
            raise ModelNotFoundException(f"Model not found - {model_key}")

        model_config = self.models[model_key]
        model_path = self.app_config.root_path / model_config.path
        model_path = self.resolve_model_path(model_config.path)

        if not model_path.exists():
            if model_class.save_to_config:
@ -586,7 +586,7 @@ class ModelManager(object):

            # expose paths as absolute to help web UI
            if path := model_dict.get("path"):
                model_dict["path"] = str(self.app_config.root_path / path)
                model_dict["path"] = str(self.resolve_model_path(path))
            models.append(model_dict)

        return models
@ -623,7 +623,7 @@ class ModelManager(object):
            self.cache.uncache_model(cache_id)

        # if model inside invoke models folder - delete files
        model_path = self.app_config.root_path / model_cfg.path
        model_path = self.resolve_model_path(model_cfg.path)
        cache_path = self._get_model_cache_path(model_path)
        if cache_path.exists():
            rmtree(str(cache_path))
@ -654,10 +654,9 @@ class ModelManager(object):
        The returned dict has the same format as the dict returned by
        model_info().
        """
        # relativize paths as they go in - this makes it easier to move the root directory around
        # relativize paths as they go in - this makes it easier to move the models directory around
        if path := model_attributes.get("path"):
            if Path(path).is_relative_to(self.app_config.root_path):
                model_attributes["path"] = str(Path(path).relative_to(self.app_config.root_path))
            model_attributes["path"] = str(self.relative_model_path(Path(path)))

        model_class = MODEL_CLASSES[base_model][model_type]
        model_config = model_class.create_config(**model_attributes)
@ -715,7 +714,7 @@ class ModelManager(object):
        if not model_cfg:
            raise ModelNotFoundException(f"Unknown model: {model_key}")

        old_path = self.app_config.root_path / model_cfg.path
        old_path = self.resolve_model_path(model_cfg.path)
        new_name = new_name or model_name
        new_base = new_base or base_model
        new_key = self.create_key(new_name, new_base, model_type)
@ -724,15 +723,15 @@ class ModelManager(object):

        # if this is a model file/directory that we manage ourselves, we need to move it
        if old_path.is_relative_to(self.app_config.models_path):
            new_path = (
                self.app_config.root_path
                / "models"
                / BaseModelType(new_base).value
                / ModelType(model_type).value
                / new_name
            new_path = self.resolve_model_path(
                Path(
                    BaseModelType(new_base).value,
                    ModelType(model_type).value,
                    new_name,
                )
            )
            move(old_path, new_path)
            model_cfg.path = str(new_path.relative_to(self.app_config.root_path))
            model_cfg.path = str(new_path.relative_to(self.app_config.models_path))

        # clean up caches
        old_model_cache = self._get_model_cache_path(old_path)
@ -782,7 +781,7 @@ class ModelManager(object):
            **submodel,
        )
        checkpoint_path = self.app_config.root_path / info["path"]
        old_diffusers_path = self.app_config.models_path / model.location
        old_diffusers_path = self.resolve_model_path(model.location)
        new_diffusers_path = (
            dest_directory or self.app_config.models_path / base_model.value / model_type.value
        ) / model_name
@ -795,7 +794,7 @@ class ModelManager(object):
        info["path"] = (
            str(new_diffusers_path)
            if dest_directory
            else str(new_diffusers_path.relative_to(self.app_config.root_path))
            else str(new_diffusers_path.relative_to(self.app_config.models_path))
        )
        info.pop("config")

@ -810,6 +809,15 @@ class ModelManager(object):

        return result

    def resolve_model_path(self, path: Union[Path, str]) -> Path:
        """Return a fully qualified path based on the configured models_path."""
        return self.app_config.models_path / path

    def relative_model_path(self, model_path: Path) -> Path:
        if model_path.is_relative_to(self.app_config.models_path):
            model_path = model_path.relative_to(self.app_config.models_path)
        return model_path

    def search_models(self, search_folder):
        self.logger.info(f"Finding Models In: {search_folder}")
        models_folder_ckpt = Path(search_folder).glob("**/*.ckpt")
@ -883,10 +891,17 @@ class ModelManager(object):
        new_models_found = False

        self.logger.info(f"Scanning {self.app_config.models_path} for new models")
        with Chdir(self.app_config.root_path):
        with Chdir(self.app_config.models_path):
            for model_key, model_config in list(self.models.items()):
                model_name, cur_base_model, cur_model_type = self.parse_key(model_key)
                model_path = self.app_config.root_path.absolute() / model_config.path

                # Patch for relative path bug in older models.yaml - paths should not
                # be starting with a hard-coded 'models'. This will also fix up
                # models.yaml when committed.
                if model_config.path.startswith("models"):
                    model_config.path = str(Path(*Path(model_config.path).parts[1:]))

                model_path = self.resolve_model_path(model_config.path).absolute()
                if not model_path.exists():
                    model_class = MODEL_CLASSES[cur_base_model][cur_model_type]
                    if model_class.save_to_config:
@ -905,7 +920,7 @@ class ModelManager(object):
                    if model_type is not None and cur_model_type != model_type:
                        continue
                    model_class = MODEL_CLASSES[cur_base_model][cur_model_type]
                    models_dir = self.app_config.models_path / cur_base_model.value / cur_model_type.value
                    models_dir = self.resolve_model_path(Path(cur_base_model.value, cur_model_type.value))

                    if not models_dir.exists():
                        continue  # TODO: or create all folders?
@ -919,9 +934,7 @@ class ModelManager(object):
                    if model_key in self.models:
                        raise DuplicateModelException(f"Model with key {model_key} added twice")

                    if model_path.is_relative_to(self.app_config.root_path):
                        model_path = model_path.relative_to(self.app_config.root_path)

                    model_path = self.relative_model_path(model_path)
                    model_config: ModelConfigBase = model_class.probe_config(str(model_path))
                    self.models[model_key] = model_config
                    new_models_found = True
@ -932,12 +945,11 @@ class ModelManager(object):
        except NotImplementedError as e:
            self.logger.warning(e)

        imported_models = self.autoimport()

        imported_models = self.scan_autoimport_directory()
        if (new_models_found or imported_models) and self.config_path:
            self.commit()

    def autoimport(self) -> Dict[str, AddModelResult]:
    def scan_autoimport_directory(self) -> Dict[str, AddModelResult]:
        """
        Scan the autoimport directory (if defined) and import new models, delete defunct models.
        """
@ -971,7 +983,7 @@ class ModelManager(object):
        # LS: hacky
        # Patch in the SD VAE from core so that it is available for use by the UI
        try:
            self.heuristic_import({self.resolve_model_path("core/convert/sd-vae-ft-mse")})
        except:
            pass
@ -27,7 +27,7 @@ class ModelProbeInfo(object):
    variant_type: ModelVariantType
    prediction_type: SchedulerPredictionType
    upcast_attention: bool
    format: Literal["diffusers", "checkpoint", "lycoris"]
    format: Literal["diffusers", "checkpoint", "lycoris", "olive", "onnx"]
    image_size: int


@ -41,6 +41,7 @@ class ModelProbe(object):
    PROBES = {
        "diffusers": {},
        "checkpoint": {},
        "onnx": {},
    }

    CLASS2TYPE = {
@ -53,7 +54,9 @@ class ModelProbe(object):
    }

    @classmethod
    def register_probe(cls, format: Literal["diffusers", "checkpoint"], model_type: ModelType, probe_class: ProbeBase):
    def register_probe(
        cls, format: Literal["diffusers", "checkpoint", "onnx"], model_type: ModelType, probe_class: ProbeBase
    ):
        cls.PROBES[format][model_type] = probe_class

    @classmethod
@ -95,6 +98,7 @@ class ModelProbe(object):
            if format_type == "diffusers"
            else cls.get_model_type_from_checkpoint(model_path, model)
        )
        format_type = "onnx" if model_type == ModelType.ONNX else format_type
        probe_class = cls.PROBES[format_type].get(model_type)
        if not probe_class:
            return None
@ -168,6 +172,8 @@ class ModelProbe(object):
        if model:
            class_name = model.__class__.__name__
        else:
            if (folder_path / "unet/model.onnx").exists():
                return ModelType.ONNX
            if (folder_path / "learned_embeds.bin").exists():
                return ModelType.TextualInversion

@ -460,6 +466,17 @@ class TextualInversionFolderProbe(FolderProbeBase):
        return TextualInversionCheckpointProbe(None, checkpoint=checkpoint).get_base_type()


class ONNXFolderProbe(FolderProbeBase):
    def get_format(self) -> str:
        return "onnx"

    def get_base_type(self) -> BaseModelType:
        return BaseModelType.StableDiffusion1

    def get_variant_type(self) -> ModelVariantType:
        return ModelVariantType.Normal


class ControlNetFolderProbe(FolderProbeBase):
    def get_base_type(self) -> BaseModelType:
        config_file = self.folder_path / "config.json"
@ -497,3 +514,4 @@ ModelProbe.register_probe("checkpoint", ModelType.Vae, VaeCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.Lora, LoRACheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.TextualInversion, TextualInversionCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.ControlNet, ControlNetCheckpointProbe)
ModelProbe.register_probe("onnx", ModelType.ONNX, ONNXFolderProbe)
@ -23,8 +23,11 @@ from .lora import LoRAModel
from .controlnet import ControlNetModel  # TODO:
from .textual_inversion import TextualInversionModel

from .stable_diffusion_onnx import ONNXStableDiffusion1Model, ONNXStableDiffusion2Model

MODEL_CLASSES = {
    BaseModelType.StableDiffusion1: {
        ModelType.ONNX: ONNXStableDiffusion1Model,
        ModelType.Main: StableDiffusion1Model,
        ModelType.Vae: VaeModel,
        ModelType.Lora: LoRAModel,
@ -32,6 +35,7 @@ MODEL_CLASSES = {
        ModelType.TextualInversion: TextualInversionModel,
    },
    BaseModelType.StableDiffusion2: {
        ModelType.ONNX: ONNXStableDiffusion2Model,
        ModelType.Main: StableDiffusion2Model,
        ModelType.Vae: VaeModel,
        ModelType.Lora: LoRAModel,
@ -45,6 +49,7 @@ MODEL_CLASSES = {
        ModelType.Lora: LoRAModel,
        ModelType.ControlNet: ControlNetModel,
        ModelType.TextualInversion: TextualInversionModel,
        ModelType.ONNX: ONNXStableDiffusion2Model,
    },
    BaseModelType.StableDiffusionXLRefiner: {
        ModelType.Main: StableDiffusionXLModel,
@ -53,6 +58,7 @@ MODEL_CLASSES = {
        ModelType.Lora: LoRAModel,
        ModelType.ControlNet: ControlNetModel,
        ModelType.TextualInversion: TextualInversionModel,
        ModelType.ONNX: ONNXStableDiffusion2Model,
    },
    # BaseModelType.Kandinsky2_1: {
    #     ModelType.Main: Kandinsky2_1Model,
@ -8,13 +8,23 @@ from abc import ABCMeta, abstractmethod
from pathlib import Path
from picklescan.scanner import scan_file_path
import torch
import numpy as np
import safetensors.torch
from diffusers import DiffusionPipeline, ConfigMixin
from pathlib import Path
from diffusers import DiffusionPipeline, ConfigMixin, OnnxRuntimeModel

from contextlib import suppress
from pydantic import BaseModel, Field
from typing import List, Dict, Optional, Type, Literal, TypeVar, Generic, Callable, Any, Union

import onnx
from onnx import numpy_helper
from onnxruntime import (
    InferenceSession,
    SessionOptions,
    get_available_providers,
)


class DuplicateModelException(Exception):
    pass
@ -37,6 +47,7 @@ class BaseModelType(str, Enum):


class ModelType(str, Enum):
    ONNX = "onnx"
    Main = "main"
    Vae = "vae"
    Lora = "lora"
@ -51,6 +62,8 @@ class SubModelType(str, Enum):
    Tokenizer = "tokenizer"
    Tokenizer2 = "tokenizer_2"
    Vae = "vae"
    VaeDecoder = "vae_decoder"
    VaeEncoder = "vae_encoder"
    Scheduler = "scheduler"
    SafetyChecker = "safety_checker"
    # MoVQ = "movq"
@ -362,6 +375,8 @@ def calc_model_size_by_data(model) -> int:
        return _calc_pipeline_by_data(model)
    elif isinstance(model, torch.nn.Module):
        return _calc_model_by_data(model)
    elif isinstance(model, IAIOnnxRuntimeModel):
        return _calc_onnx_model_by_data(model)
    else:
        return 0

@ -382,6 +397,12 @@ def _calc_model_by_data(model) -> int:
    return mem


def _calc_onnx_model_by_data(model) -> int:
    tensor_size = model.tensors.size() * 2  # The session doubles this
    mem = tensor_size  # in bytes
    return mem


def _fast_safetensors_reader(path: str):
    checkpoint = dict()
    device = torch.device("meta")
@ -449,3 +470,208 @@ class SilenceWarnings(object):
        transformers_logging.set_verbosity(self.transformers_verbosity)
        diffusers_logging.set_verbosity(self.diffusers_verbosity)
        warnings.simplefilter("default")


ONNX_WEIGHTS_NAME = "model.onnx"


class IAIOnnxRuntimeModel:
    class _tensor_access:
        def __init__(self, model):
            self.model = model
            self.indexes = dict()
            for idx, obj in enumerate(self.model.proto.graph.initializer):
                self.indexes[obj.name] = idx

        def __getitem__(self, key: str):
            value = self.model.proto.graph.initializer[self.indexes[key]]
            return numpy_helper.to_array(value)

        def __setitem__(self, key: str, value: np.ndarray):
            new_node = numpy_helper.from_array(value)
            # set_external_data(new_node, location="in-memory-location")
            new_node.name = key
            # new_node.ClearField("raw_data")
            del self.model.proto.graph.initializer[self.indexes[key]]
            self.model.proto.graph.initializer.insert(self.indexes[key], new_node)
            # self.model.data[key] = OrtValue.ortvalue_from_numpy(value)

        # __delitem__

        def __contains__(self, key: str):
            return key in self.indexes

        def items(self):
            raise NotImplementedError("tensor.items")
            # return [(obj.name, obj) for obj in self.raw_proto]

        def keys(self):
            return self.indexes.keys()

        def values(self):
            raise NotImplementedError("tensor.values")
            # return [obj for obj in self.raw_proto]

        def size(self):
            bytesSum = 0
            for node in self.model.proto.graph.initializer:
                bytesSum += sys.getsizeof(node.raw_data)
            return bytesSum

    class _access_helper:
        def __init__(self, raw_proto):
            self.indexes = dict()
            self.raw_proto = raw_proto
            for idx, obj in enumerate(raw_proto):
                self.indexes[obj.name] = idx

        def __getitem__(self, key: str):
            return self.raw_proto[self.indexes[key]]

        def __setitem__(self, key: str, value):
            index = self.indexes[key]
            del self.raw_proto[index]
            self.raw_proto.insert(index, value)

        # __delitem__

        def __contains__(self, key: str):
            return key in self.indexes

        def items(self):
            return [(obj.name, obj) for obj in self.raw_proto]

        def keys(self):
            return self.indexes.keys()

        def values(self):
            return [obj for obj in self.raw_proto]

    def __init__(self, model_path: str, provider: Optional[str]):
        self.path = model_path
        self.session = None
        self.provider = provider
        """
        self.data_path = self.path + "_data"
        if not os.path.exists(self.data_path):
            print(f"Moving model tensors to separate file: {self.data_path}")
            tmp_proto = onnx.load(model_path, load_external_data=True)
            onnx.save_model(tmp_proto, self.path, save_as_external_data=True, all_tensors_to_one_file=True, location=os.path.basename(self.data_path), size_threshold=1024, convert_attribute=False)
            del tmp_proto
            gc.collect()

        self.proto = onnx.load(model_path, load_external_data=False)
        """

        self.proto = onnx.load(model_path, load_external_data=True)
        # self.data = dict()
        # for tensor in self.proto.graph.initializer:
        #     name = tensor.name

        #     if tensor.HasField("raw_data"):
        #         npt = numpy_helper.to_array(tensor)
        #         orv = OrtValue.ortvalue_from_numpy(npt)
        #         # self.data[name] = orv
        #         # set_external_data(tensor, location="in-memory-location")
        #         tensor.name = name
        #         # tensor.ClearField("raw_data")

        self.nodes = self._access_helper(self.proto.graph.node)
        # self.initializers = self._access_helper(self.proto.graph.initializer)
        # print(self.proto.graph.input)
        # print(self.proto.graph.initializer)

        self.tensors = self._tensor_access(self)

    # TODO: integrate with model manager/cache
    def create_session(self, height=None, width=None):
        if self.session is None or self.session_width != width or self.session_height != height:
            # onnx.save(self.proto, "tmp.onnx")
            # onnx.save_model(self.proto, "tmp.onnx", save_as_external_data=True, all_tensors_to_one_file=True, location="tmp.onnx_data", size_threshold=1024, convert_attribute=False)
            # TODO: something to be able to get weight when they already moved outside of model proto
            # (trimmed_model, external_data) = buffer_external_data_tensors(self.proto)
            sess = SessionOptions()
            # self._external_data.update(**external_data)
            # sess.add_external_initializers(list(self.data.keys()), list(self.data.values()))
            # sess.enable_profiling = True

            # sess.intra_op_num_threads = 1
            # sess.inter_op_num_threads = 1
            # sess.execution_mode = ExecutionMode.ORT_SEQUENTIAL
            # sess.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
            # sess.enable_cpu_mem_arena = True
            # sess.enable_mem_pattern = True
            # sess.add_session_config_entry("session.intra_op.use_xnnpack_threadpool", "1") ########### It's the key code
            self.session_height = height
            self.session_width = width
            if height and width:
                sess.add_free_dimension_override_by_name("unet_sample_batch", 2)
                sess.add_free_dimension_override_by_name("unet_sample_channels", 4)
                sess.add_free_dimension_override_by_name("unet_hidden_batch", 2)
                sess.add_free_dimension_override_by_name("unet_hidden_sequence", 77)
                sess.add_free_dimension_override_by_name("unet_sample_height", self.session_height)
                sess.add_free_dimension_override_by_name("unet_sample_width", self.session_width)
                sess.add_free_dimension_override_by_name("unet_time_batch", 1)
            providers = []
            if self.provider:
                providers.append(self.provider)
            else:
                providers = get_available_providers()
            if "TensorrtExecutionProvider" in providers:
                providers.remove("TensorrtExecutionProvider")
            try:
                self.session = InferenceSession(self.proto.SerializeToString(), providers=providers, sess_options=sess)
            except Exception as e:
                raise e
            # self.session = InferenceSession("tmp.onnx", providers=[self.provider], sess_options=self.sess_options)
            # self.io_binding = self.session.io_binding()

    def release_session(self):
        self.session = None
        import gc

        gc.collect()
        return

    def __call__(self, **kwargs):
        if self.session is None:
            raise Exception("You should call create_session before running model")

        inputs = {k: np.array(v) for k, v in kwargs.items()}
        output_names = self.session.get_outputs()
        # for k in inputs:
        #     self.io_binding.bind_cpu_input(k, inputs[k])
        # for name in output_names:
        #     self.io_binding.bind_output(name.name)
        # self.session.run_with_iobinding(self.io_binding, None)
        # return self.io_binding.copy_outputs_to_cpu()
        return self.session.run(None, inputs)

    # compatibility with diffusers load code
    @classmethod
    def from_pretrained(
        cls,
        model_id: Union[str, Path],
        subfolder: Union[str, Path] = None,
        file_name: Optional[str] = None,
        provider: Optional[str] = None,
        sess_options: Optional["SessionOptions"] = None,
        **kwargs,
    ):
        file_name = file_name or ONNX_WEIGHTS_NAME

        if os.path.isdir(model_id):
            model_path = model_id
            if subfolder is not None:
                model_path = os.path.join(model_path, subfolder)
            model_path = os.path.join(model_path, file_name)

        else:
            model_path = model_id

        # load model from local directory
        if not os.path.isfile(model_path):
            raise Exception(f"Model not found: {model_path}")

        # TODO: session options
        return cls(model_path, provider=provider)
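The `_tensor_access` wrapper performs its in-place initializer edits with a `numpy_helper` round trip: convert to numpy, modify, convert back, then splice the new `TensorProto` in at the same index. The same mechanics on a hand-built graph, requiring only the `onnx` package:

```python
# Edit an ONNX initializer in place, as _tensor_access.__setitem__ does above.
import numpy as np
from onnx import helper, numpy_helper

weight = numpy_helper.from_array(np.ones((2, 2), dtype=np.float32), name="w")
graph = helper.make_graph(nodes=[], name="g", inputs=[], outputs=[], initializer=[weight])

indexes = {t.name: i for i, t in enumerate(graph.initializer)}
idx = indexes["w"]
patched = numpy_helper.to_array(graph.initializer[idx]) * 2.0

new_tensor = numpy_helper.from_array(patched.astype(np.float32))
new_tensor.name = "w"
del graph.initializer[idx]
graph.initializer.insert(idx, new_tensor)

print(numpy_helper.to_array(graph.initializer[idx]))  # [[2. 2.] [2. 2.]]
```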
@ -17,6 +17,7 @@ from .base import (
    ModelNotFoundException,
)
from invokeai.app.services.config import InvokeAIAppConfig
import invokeai.backend.util.logging as logger


class ControlNetModelFormat(str, Enum):
@ -66,7 +67,7 @@ class ControlNetModel(ModelBase):
        child_type: Optional[SubModelType] = None,
    ):
        if child_type is not None:
            raise Exception("There is no child models in controlnet model")
            raise Exception("There are no child models in controlnet model")

        model = None
        for variant in ["fp16", None]:
@ -124,9 +125,7 @@ class ControlNetModel(ModelBase):
    return model_path


@classmethod
def _convert_controlnet_ckpt_and_cache(
    cls,
    model_path: str,
    output_path: str,
    base_model: BaseModelType,
@ -141,6 +140,7 @@ def _convert_controlnet_ckpt_and_cache(
    weights = app_config.root_path / model_path
    output_path = Path(output_path)

    logger.info(f"Converting {weights} to diffusers format")
    # return cached version if it exists
    if output_path.exists():
        return output_path
@ -123,6 +123,7 @@ class StableDiffusion1Model(DiffusersModel):
            return _convert_ckpt_and_cache(
                version=BaseModelType.StableDiffusion1,
                model_config=config,
                load_safety_checker=False,
                output_path=output_path,
            )
        else:
@ -259,7 +260,7 @@ def _convert_ckpt_and_cache(
    """
    app_config = InvokeAIAppConfig.get_config()

    weights = app_config.root_path / model_config.path
    weights = app_config.models_path / model_config.path
    config_file = app_config.root_path / model_config.config
    output_path = Path(output_path)
@ -0,0 +1,157 @@
import os
import json
from enum import Enum
from pydantic import Field
from pathlib import Path
from typing import Literal, Optional, Union
from .base import (
    ModelBase,
    ModelConfigBase,
    BaseModelType,
    ModelType,
    SubModelType,
    ModelVariantType,
    DiffusersModel,
    SchedulerPredictionType,
    SilenceWarnings,
    read_checkpoint_meta,
    classproperty,
    OnnxRuntimeModel,
    IAIOnnxRuntimeModel,
)
from invokeai.app.services.config import InvokeAIAppConfig


class StableDiffusionOnnxModelFormat(str, Enum):
    Olive = "olive"
    Onnx = "onnx"


class ONNXStableDiffusion1Model(DiffusersModel):
    class Config(ModelConfigBase):
        model_format: Literal[StableDiffusionOnnxModelFormat.Onnx]
        variant: ModelVariantType

    def __init__(self, model_path: str, base_model: BaseModelType, model_type: ModelType):
        assert base_model == BaseModelType.StableDiffusion1
        assert model_type == ModelType.ONNX
        super().__init__(
            model_path=model_path,
            base_model=BaseModelType.StableDiffusion1,
            model_type=ModelType.ONNX,
        )

        for child_name, child_type in self.child_types.items():
            if child_type is OnnxRuntimeModel:
                self.child_types[child_name] = IAIOnnxRuntimeModel

        # TODO: check that no optimum models provided

    @classmethod
    def probe_config(cls, path: str, **kwargs):
        model_format = cls.detect_format(path)
        in_channels = 4  # TODO:

        if in_channels == 9:
            variant = ModelVariantType.Inpaint
        elif in_channels == 4:
            variant = ModelVariantType.Normal
        else:
            raise Exception("Unknown stable diffusion 1.* model format")

        return cls.create_config(
            path=path,
            model_format=model_format,
            variant=variant,
        )

    @classproperty
    def save_to_config(cls) -> bool:
        return True

    @classmethod
    def detect_format(cls, model_path: str):
        # TODO: Detect onnx vs olive
        return StableDiffusionOnnxModelFormat.Onnx

    @classmethod
    def convert_if_required(
        cls,
        model_path: str,
        output_path: str,
        config: ModelConfigBase,
        base_model: BaseModelType,
    ) -> str:
        return model_path


class ONNXStableDiffusion2Model(DiffusersModel):
    # TODO: check that configs are overwritten properly
    class Config(ModelConfigBase):
        model_format: Literal[StableDiffusionOnnxModelFormat.Onnx]
        variant: ModelVariantType
        prediction_type: SchedulerPredictionType
        upcast_attention: bool

    def __init__(self, model_path: str, base_model: BaseModelType, model_type: ModelType):
        assert base_model == BaseModelType.StableDiffusion2
        assert model_type == ModelType.ONNX
        super().__init__(
            model_path=model_path,
            base_model=BaseModelType.StableDiffusion2,
            model_type=ModelType.ONNX,
        )

        for child_name, child_type in self.child_types.items():
            if child_type is OnnxRuntimeModel:
                self.child_types[child_name] = IAIOnnxRuntimeModel
        # TODO: check that no optimum models provided

    @classmethod
    def probe_config(cls, path: str, **kwargs):
        model_format = cls.detect_format(path)
        in_channels = 4  # TODO:

        if in_channels == 9:
            variant = ModelVariantType.Inpaint
        elif in_channels == 5:
            variant = ModelVariantType.Depth
        elif in_channels == 4:
            variant = ModelVariantType.Normal
        else:
            raise Exception("Unknown stable diffusion 2.* model format")

        if variant == ModelVariantType.Normal:
            prediction_type = SchedulerPredictionType.VPrediction
            upcast_attention = True

        else:
            prediction_type = SchedulerPredictionType.Epsilon
            upcast_attention = False

        return cls.create_config(
            path=path,
            model_format=model_format,
            variant=variant,
            prediction_type=prediction_type,
            upcast_attention=upcast_attention,
        )

    @classproperty
    def save_to_config(cls) -> bool:
        return True

    @classmethod
    def detect_format(cls, model_path: str):
        # TODO: Detect onnx vs olive
        return StableDiffusionOnnxModelFormat.Onnx

    @classmethod
    def convert_if_required(
        cls,
        model_path: str,
        output_path: str,
        config: ModelConfigBase,
        base_model: BaseModelType,
    ) -> str:
        return model_path
@ -112,7 +112,7 @@ def main():

    extras = get_extras()

    print(f":crossed_fingers: Upgrading to [yellow]{tag if tag else release}[/yellow]")
    print(f":crossed_fingers: Upgrading to [yellow]{tag or release or branch}[/yellow]")
    if release:
        cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade'
    elif tag:
||||
|
@@ -58,6 +58,9 @@ logger = InvokeAILogger.getLogger()

 # from https://stackoverflow.com/questions/92438/stripping-non-printable-characters-from-a-string-in-python
 NOPRINT_TRANS_TABLE = {i: None for i in range(0, sys.maxunicode + 1) if not chr(i).isprintable()}

+# maximum number of installed models we can display before overflowing vertically
+MAX_OTHER_MODELS = 72
+

 def make_printable(s: str) -> str:
     """Replace non-printable characters in a string"""
@@ -102,7 +105,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
             SingleSelectColumns,
             values=[
                 "STARTER MODELS",
-                "MORE MODELS",
+                "MAIN MODELS",
                 "CONTROLNETS",
                 "LORA/LYCORIS",
                 "TEXTUAL INVERSION",
@@ -153,7 +156,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
             BufferBox,
             name="Log Messages",
             editable=False,
-            max_height=8,
+            max_height=15,
         )

         self.nextrely += 1
@@ -253,6 +256,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
         model_labels = [self.model_labels[x] for x in model_list]

         show_recommended = len(self.installed_models) == 0
+        truncated = False
         if len(model_list) > 0:
             max_width = max([len(x) for x in model_labels])
             columns = window_width // (max_width + 8)  # 8 characters for "[x] " and padding
@@ -271,6 +275,10 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
                 )
             )

+        if len(model_labels) > MAX_OTHER_MODELS:
+            model_labels = model_labels[0:MAX_OTHER_MODELS]
+            truncated = True
+
         widgets.update(
             models_selected=self.add_widget_intelligent(
                 MultiSelectColumns,
@@ -289,6 +297,16 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
                 models=model_list,
             )

+        if truncated:
+            widgets.update(
+                warning_message=self.add_widget_intelligent(
+                    npyscreen.FixedText,
+                    value=f"Too many models to display (max={MAX_OTHER_MODELS}). Some are not displayed.",
+                    editable=False,
+                    color="CAUTION",
+                )
+            )
+
         self.nextrely += 1
         widgets.update(
             download_ids=self.add_widget_intelligent(
@@ -313,7 +331,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
         widgets = self.add_model_widgets(
             model_type=model_type,
             window_width=window_width,
-            install_prompt=f"Additional {model_type.value.title()} models already installed.",
+            install_prompt=f"Installed {model_type.value.title()} models. Unchecked models in the InvokeAI root directory will be deleted. Enter URLs, paths or repo_ids to import.",
             **kwargs,
         )

@@ -399,7 +417,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
         self.ok_button.hidden = True
         self.display()

-        # for communication with the subprocess
+        # TO DO: Spawn a worker thread, not a subprocess
         parent_conn, child_conn = Pipe()
         p = Process(
             target=process_and_execute,
@@ -414,7 +432,6 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
         self.subprocess_connection = parent_conn
         self.subprocess = p
         app.install_selections = InstallSelections()
-        # process_and_execute(app.opt, app.install_selections)

     def on_back(self):
         self.parentApp.switchFormPrevious()
@@ -489,8 +506,6 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):

         # rebuild the form, saving and restoring some of the fields that need to be preserved.
         saved_messages = self.monitor.entry_widget.values
-        # autoload_dir = str(config.root_path / self.pipeline_models['autoload_directory'].value)
-        # autoscan = self.pipeline_models['autoscan_on_startup'].value

         app.main_form = app.addForm(
             "MAIN",
@@ -544,12 +559,6 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
         if downloads := section.get("download_ids"):
             selections.install_models.extend(downloads.value.split())

-        # load directory and whether to scan on startup
-        # if self.parentApp.autoload_pending:
-        #     selections.scan_directory = str(config.root_path / self.pipeline_models['autoload_directory'].value)
-        #     self.parentApp.autoload_pending = False
-        # selections.autoscan_on_startup = self.pipeline_models['autoscan_on_startup'].value


 class AddModelApplication(npyscreen.NPSAppManaged):
     def __init__(self, opt):
@@ -639,6 +648,11 @@ def process_and_execute(
     selections: InstallSelections,
     conn_out: Connection = None,
 ):
+    # need to reinitialize config in subprocess
+    config = InvokeAIAppConfig.get_config()
+    args = ["--root", opt.root] if opt.root else []
+    config.parse_args(args)
+
     # set up so that stderr is sent to conn_out
     if conn_out:
         translator = StderrToMessage(conn_out)
@@ -656,38 +670,11 @@ def process_and_execute(
         conn_out.close()


-def do_listings(opt) -> bool:
-    """List installed models of various sorts, and return
-    True if any were requested."""
-    model_manager = ModelManager(config.model_conf_path)
-    if opt.list_models == "diffusers":
-        print("Diffuser models:")
-        model_manager.print_models()
-    elif opt.list_models == "controlnets":
-        print("Installed Controlnet Models:")
-        cnm = model_manager.list_controlnet_models()
-        print(textwrap.indent("\n".join([x for x in cnm if cnm[x]]), prefix=" "))
-    elif opt.list_models == "loras":
-        print("Installed LoRA/LyCORIS Models:")
-        cnm = model_manager.list_lora_models()
-        print(textwrap.indent("\n".join([x for x in cnm if cnm[x]]), prefix=" "))
-    elif opt.list_models == "tis":
-        print("Installed Textual Inversion Embeddings:")
-        cnm = model_manager.list_ti_models()
-        print(textwrap.indent("\n".join([x for x in cnm if cnm[x]]), prefix=" "))
-    else:
-        return False
-    return True
-
-
 # --------------------------------------------------------
 def select_and_download_models(opt: Namespace):
     precision = "float32" if opt.full_precision else choose_precision(torch.device(choose_torch_device()))
     config.precision = precision
     helper = lambda x: ask_user_for_prediction_type(x)
-    # if do_listings(opt):
-    #     pass

     installer = ModelInstall(config, prediction_type_helper=helper)
+    if opt.list_models:
+        installer.list_models(opt.list_models)
@@ -706,8 +693,6 @@ def select_and_download_models(opt: Namespace):
     # needed to support the probe() method running under a subprocess
     torch.multiprocessing.set_start_method("spawn")

-    # the third argument is needed in the Windows 11 environment in
-    # order to launch and resize a console window running this program
     set_min_terminal_size(MIN_COLS, MIN_LINES)
     installApp = AddModelApplication(opt)
     try:
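
The installer TUI above caps the model checklist at `MAX_OTHER_MODELS` (72) and raises a warning widget when entries are dropped. The cap-and-flag pattern it uses, sketched here in TypeScript for brevity (names are illustrative, not the TUI's own):

```ts
// Cap a list for display and remember whether anything was dropped.
function capForDisplay<T>(items: T[], max = 72): { shown: T[]; truncated: boolean } {
  return items.length > max
    ? { shown: items.slice(0, max), truncated: true }
    : { shown: items, truncated: false };
}

const { shown, truncated } = capForDisplay(['a', 'b', 'c'], 2);
console.log(shown, truncated); // ['a', 'b'] true
```
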
@@ -320,7 +320,7 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):

     def get_model_names(self, base_model: BaseModelType = None) -> List[str]:
         model_names = [
-            info["name"]
+            info["model_name"]
             for info in self.model_manager.list_models(model_type=ModelType.Main, base_model=base_model)
             if info["model_format"] == "diffusers"
         ]
@@ -1,4 +1,6 @@
 #!/usr/bin/env sh
 . "$(dirname -- "$0")/_/husky.sh"

+python -m black . --check
+
 cd invokeai/frontend/web/ && npm run lint-staged
@@ -21,7 +21,7 @@ export const packageConfig: UserConfig = {
     fileName: (format) => `invoke-ai-ui.${format}.js`,
   },
   rollupOptions: {
-    external: ['react', 'react-dom', '@emotion/react'],
+    external: ['react', 'react-dom', '@emotion/react', '@chakra-ui/react'],
     output: {
       globals: {
         react: 'React',
 169  invokeai/frontend/web/dist/assets/App-44cdaaf3.js (vendored, new file): diff suppressed because one or more lines are too long
 169  invokeai/frontend/web/dist/assets/App-d6f88f50.js (vendored): diff suppressed because one or more lines are too long
 125  invokeai/frontend/web/dist/assets/index-18f2f740.js (vendored, new file): diff suppressed because one or more lines are too long
 125  invokeai/frontend/web/dist/assets/index-bad7ff83.js (vendored): diff suppressed because one or more lines are too long
   2  invokeai/frontend/web/dist/index.html (vendored)
@@ -12,7 +12,7 @@
       margin: 0;
     }
   </style>
-  <script type="module" crossorigin src="./assets/index-bad7ff83.js"></script>
+  <script type="module" crossorigin src="./assets/index-18f2f740.js"></script>
 </head>

 <body dir="ltr">
   2  invokeai/frontend/web/dist/locales/en.json (vendored)
@@ -342,6 +342,8 @@
   "diffusersModels": "Diffusers",
   "loraModels": "LoRAs",
   "safetensorModels": "SafeTensors",
+  "onnxModels": "Onnx",
+  "oliveModels": "Olives",
   "modelAdded": "Model Added",
   "modelUpdated": "Model Updated",
   "modelUpdateFailed": "Model Update Failed",
@@ -116,6 +116,7 @@
   },
   "peerDependencies": {
     "@chakra-ui/cli": "^2.4.0",
+    "@chakra-ui/react": "^2.8.0",
     "react": "^18.2.0",
     "react-dom": "^18.2.0",
     "ts-toolbelt": "^9.6.0"
@@ -342,6 +342,8 @@
   "diffusersModels": "Diffusers",
   "loraModels": "LoRAs",
   "safetensorModels": "SafeTensors",
+  "onnxModels": "Onnx",
+  "oliveModels": "Olives",
   "modelAdded": "Model Added",
   "modelUpdated": "Model Updated",
   "modelUpdateFailed": "Model Update Failed",
@@ -1,3 +1,4 @@
+import { store } from 'app/store/store';
 import React, {
   lazy,
   memo,
@@ -6,18 +7,17 @@ import React, {
   useEffect,
 } from 'react';
 import { Provider } from 'react-redux';
-import { store } from 'app/store/store';

-import Loading from '../../common/components/Loading/Loading';
-import { addMiddleware, resetMiddlewares } from 'redux-dynamic-middlewares';
 import { PartialAppConfig } from 'app/types/invokeai';
+import { addMiddleware, resetMiddlewares } from 'redux-dynamic-middlewares';
+import Loading from '../../common/components/Loading/Loading';

-import '../../i18n';
-import { socketMiddleware } from 'services/events/middleware';
 import { Middleware } from '@reduxjs/toolkit';
-import ImageDndContext from './ImageDnd/ImageDndContext';
-import { AddImageToBoardContextProvider } from '../contexts/AddImageToBoardContext';
 import { $authToken, $baseUrl } from 'services/api/client';
+import { socketMiddleware } from 'services/events/middleware';
+import '../../i18n';
+import { AddImageToBoardContextProvider } from '../contexts/AddImageToBoardContext';
+import ImageDndContext from './ImageDnd/ImageDndContext';

 const App = lazy(() => import('./App'));
 const ThemeLocaleProvider = lazy(() => import('./ThemeLocaleProvider'));
@@ -28,6 +28,7 @@ interface Props extends PropsWithChildren {
   config?: PartialAppConfig;
   headerComponent?: ReactNode;
   middleware?: Middleware[];
+  projectId?: string;
 }

 const InvokeAIUI = ({
@@ -36,7 +36,8 @@ export const addModelsLoadedListener = () => {
         action.payload.entities,
         (m) =>
           m?.model_name === currentModel?.model_name &&
-          m?.base_model === currentModel?.base_model
+          m?.base_model === currentModel?.base_model &&
+          m?.model_type === currentModel?.model_type
       );

       if (isCurrentModelAvailable) {
@@ -83,7 +84,8 @@ export const addModelsLoadedListener = () => {
         action.payload.entities,
         (m) =>
           m?.model_name === currentModel?.model_name &&
-          m?.base_model === currentModel?.base_model
+          m?.base_model === currentModel?.base_model &&
+          m?.model_type === currentModel?.model_type
       );

       if (isCurrentModelAvailable) {
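
The two hunks above tighten the "is the current model still installed" check: without comparing `model_type`, a main model and an ONNX model sharing a name and base model would wrongly match each other. A minimal sketch of the predicate, with a pared-down stand-in for the app's model entity type:

```ts
// Sketch only: ModelIdentifier is a stand-in for the app's real entity type.
type ModelIdentifier = {
  model_name: string;
  base_model: 'sd-1' | 'sd-2' | 'sdxl' | 'sdxl-refiner';
  model_type: 'main' | 'onnx';
};

// All three fields must match for two entries to denote the same model.
const isSameModel = (a: ModelIdentifier, b: ModelIdentifier): boolean =>
  a.model_name === b.model_name &&
  a.base_model === b.base_model &&
  a.model_type === b.model_type;
```
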
@@ -47,9 +47,9 @@ export const addTabChangedListener = () => {
       }

       // only store the model name and base model in redux
-      const { base_model, model_name } = firstValidCanvasModel;
+      const { base_model, model_name, model_type } = firstValidCanvasModel;

-      dispatch(modelChanged({ base_model, model_name }));
+      dispatch(modelChanged({ base_model, model_name, model_type }));
     }
   },
 });
@@ -139,8 +139,19 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
   useHotkeys('s', handleUseSeed, [imageDTO]);

   const handleUsePrompt = useCallback(() => {
-    recallBothPrompts(metadata?.positive_prompt, metadata?.negative_prompt);
-  }, [metadata?.negative_prompt, metadata?.positive_prompt, recallBothPrompts]);
+    recallBothPrompts(
+      metadata?.positive_prompt,
+      metadata?.negative_prompt,
+      metadata?.positive_style_prompt,
+      metadata?.negative_style_prompt
+    );
+  }, [
+    metadata?.negative_prompt,
+    metadata?.positive_prompt,
+    metadata?.positive_style_prompt,
+    metadata?.negative_style_prompt,
+    recallBothPrompts,
+  ]);

   useHotkeys('p', handleUsePrompt, [imageDTO]);

@@ -102,8 +102,19 @@ const SingleSelectionMenuItems = (props: SingleSelectionMenuItemsProps) => {

   // Recall parameters handlers
   const handleRecallPrompt = useCallback(() => {
-    recallBothPrompts(metadata?.positive_prompt, metadata?.negative_prompt);
-  }, [metadata?.negative_prompt, metadata?.positive_prompt, recallBothPrompts]);
+    recallBothPrompts(
+      metadata?.positive_prompt,
+      metadata?.negative_prompt,
+      metadata?.positive_style_prompt,
+      metadata?.negative_style_prompt
+    );
+  }, [
+    metadata?.negative_prompt,
+    metadata?.positive_prompt,
+    metadata?.positive_style_prompt,
+    metadata?.negative_style_prompt,
+    recallBothPrompts,
+  ]);

   const handleRecallSeed = useCallback(() => {
     recallSeed(metadata?.seed);
@@ -14,8 +14,11 @@ import SyncModelsButton from 'features/ui/components/tabs/ModelManager/subpanels
 import { forEach } from 'lodash-es';
 import { memo, useCallback, useMemo } from 'react';
 import { useTranslation } from 'react-i18next';
+import {
+  useGetMainModelsQuery,
+  useGetOnnxModelsQuery,
+} from 'services/api/endpoints/models';
 import { NON_REFINER_BASE_MODELS } from 'services/api/constants';
-import { useGetMainModelsQuery } from 'services/api/endpoints/models';
 import { FieldComponentProps } from './types';
 import { useFeatureStatus } from '../../../system/hooks/useFeatureStatus';

@@ -28,6 +31,7 @@ const ModelInputFieldComponent = (
   const { t } = useTranslation();
   const isSyncModelEnabled = useFeatureStatus('syncModels').isFeatureEnabled;

+  const { data: onnxModels } = useGetOnnxModelsQuery(NON_REFINER_BASE_MODELS);
   const { data: mainModels, isLoading } = useGetMainModelsQuery(
     NON_REFINER_BASE_MODELS
   );
@@ -51,17 +55,39 @@ const ModelInputFieldComponent = (
       });
     });

+    if (onnxModels) {
+      forEach(onnxModels.entities, (model, id) => {
+        if (!model) {
+          return;
+        }
+
+        data.push({
+          value: id,
+          label: model.model_name,
+          group: MODEL_TYPE_MAP[model.base_model],
+        });
+      });
+    }
     return data;
-  }, [mainModels]);
+  }, [mainModels, onnxModels]);

   // grab the full model entity from the RTK Query cache
   // TODO: maybe we should just store the full model entity in state?
   const selectedModel = useMemo(
     () =>
-      mainModels?.entities[
+      (mainModels?.entities[
         `${field.value?.base_model}/main/${field.value?.model_name}`
-      ] ?? null,
-    [field.value?.base_model, field.value?.model_name, mainModels?.entities]
+      ] ||
+        onnxModels?.entities[
+          `${field.value?.base_model}/onnx/${field.value?.model_name}`
+        ]) ??
+      null,
+    [
+      field.value?.base_model,
+      field.value?.model_name,
+      mainModels?.entities,
+      onnxModels?.entities,
+    ]
   );

   const handleChangeModel = useCallback(
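
The field component above now folds ONNX entities into the same option list and falls back to the ONNX cache key when the main key misses. A sketch of the two-step lookup over plain records, mirroring the `${base}/main/${name}` and `${base}/onnx/${name}` key patterns visible in the hunk (types are stand-ins for the RTK Query entity maps):

```ts
type Entities<T> = Record<string, T | undefined>;

// Look up a selected model first in the main-model cache, then the ONNX cache.
function findSelectedModel<T>(
  mainEntities: Entities<T>,
  onnxEntities: Entities<T>,
  baseModel: string,
  modelName: string
): T | null {
  return (
    mainEntities[`${baseModel}/main/${modelName}`] ??
    onnxEntities[`${baseModel}/onnx/${modelName}`] ??
    null
  );
}
```
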
@@ -1,4 +1,4 @@
-import { Input } from '@chakra-ui/react';
+import { Input, Textarea } from '@chakra-ui/react';
 import { useAppDispatch } from 'app/store/storeHooks';
 import { fieldValueChanged } from 'features/nodes/store/nodesSlice';
 import {
@@ -12,10 +12,11 @@ const StringInputFieldComponent = (
   props: FieldComponentProps<StringInputFieldValue, StringInputFieldTemplate>
 ) => {
   const { nodeId, field } = props;

   const dispatch = useAppDispatch();

-  const handleValueChanged = (e: ChangeEvent<HTMLInputElement>) => {
+  const handleValueChanged = (
+    e: ChangeEvent<HTMLInputElement | HTMLTextAreaElement>
+  ) => {
     dispatch(
       fieldValueChanged({
         nodeId,
@@ -25,7 +26,11 @@ const StringInputFieldComponent = (
     );
   };

-  return <Input onChange={handleValueChanged} value={field.value}></Input>;
+  return ['prompt', 'style'].includes(field.name.toLowerCase()) ? (
+    <Textarea onChange={handleValueChanged} value={field.value} rows={2} />
+  ) : (
+    <Input onChange={handleValueChanged} value={field.value} />
+  );
 };

 export default memo(StringInputFieldComponent);
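
The string field now renders a two-row Textarea for fields named "prompt" or "style" and a single-line Input otherwise; the dispatch path is unchanged, only the widget differs. The selection rule, as a tiny standalone sketch:

```ts
// Decide which widget a string field should use. Field names are
// case-insensitive; anything other than prompt/style stays a one-line input.
const widgetForField = (fieldName: string): 'textarea' | 'input' =>
  ['prompt', 'style'].includes(fieldName.toLowerCase()) ? 'textarea' : 'input';

console.log(widgetForField('Prompt')); // 'textarea'
console.log(widgetForField('seed'));   // 'input'
```
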
@@ -9,6 +9,7 @@ import {
   CLIP_SKIP,
   LORA_LOADER,
   MAIN_MODEL_LOADER,
+  ONNX_MODEL_LOADER,
   METADATA_ACCUMULATOR,
   NEGATIVE_CONDITIONING,
   POSITIVE_CONDITIONING,
@@ -17,7 +18,8 @@ import {
 export const addLoRAsToGraph = (
   state: RootState,
   graph: NonNullableGraph,
-  baseNodeId: string
+  baseNodeId: string,
+  modelLoader: string = MAIN_MODEL_LOADER
 ): void => {
   /**
    * LoRA nodes get the UNet and CLIP models from the main model loader and apply the LoRA to them.
@@ -40,6 +42,10 @@ export const addLoRAsToGraph = (
       !(
         e.source.node_id === MAIN_MODEL_LOADER &&
         ['unet'].includes(e.source.field)
       ) &&
+      !(
+        e.source.node_id === ONNX_MODEL_LOADER &&
+        ['unet'].includes(e.source.field)
+      )
   );
   // Remove CLIP_SKIP connections to conditionings to feed it through LoRAs
@@ -75,12 +81,11 @@ export const addLoRAsToGraph = (

     // add to graph
     graph.nodes[currentLoraNodeId] = loraLoaderNode;
-
     if (currentLoraIndex === 0) {
       // first lora = start the lora chain, attach directly to model loader
       graph.edges.push({
         source: {
-          node_id: MAIN_MODEL_LOADER,
+          node_id: modelLoader,
           field: 'unet',
         },
         destination: {
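
`addLoRAsToGraph` now takes the loader node id as a trailing parameter that defaults to `MAIN_MODEL_LOADER`, so existing SD call sites keep working while ONNX graphs pass `ONNX_MODEL_LOADER`. A sketch of the call shape only, with simplified stand-ins for the graph type and without the `state` argument:

```ts
const MAIN_MODEL_LOADER = 'main_model_loader';
const ONNX_MODEL_LOADER = 'onnx_model_loader';
type Graph = { nodes: Record<string, unknown>; edges: unknown[] };

function addLoRAsToGraph(
  graph: Graph,
  baseNodeId: string,
  modelLoader: string = MAIN_MODEL_LOADER // default keeps old call sites valid
): void {
  // The LoRA chain is attached to `modelLoader` instead of a hard-coded node id.
  graph.edges.push({ source: modelLoader, destination: baseNodeId });
}

const graph: Graph = { nodes: {}, edges: [] };
addLoRAsToGraph(graph, 't2l');                    // SD path, implicit default
addLoRAsToGraph(graph, 't2l', ONNX_MODEL_LOADER); // ONNX path
```
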
@@ -9,13 +9,15 @@ import {
   LATENTS_TO_IMAGE,
   MAIN_MODEL_LOADER,
   METADATA_ACCUMULATOR,
+  ONNX_MODEL_LOADER,
   TEXT_TO_IMAGE_GRAPH,
   VAE_LOADER,
 } from './constants';

 export const addVAEToGraph = (
   state: RootState,
-  graph: NonNullableGraph
+  graph: NonNullableGraph,
+  modelLoader: string = MAIN_MODEL_LOADER
 ): void => {
   const { vae } = state.generation;

@@ -32,12 +34,12 @@ export const addVAEToGraph = (
       vae_model: vae,
     };
   }
-
+  const isOnnxModel = modelLoader == ONNX_MODEL_LOADER;
   if (graph.id === TEXT_TO_IMAGE_GRAPH || graph.id === IMAGE_TO_IMAGE_GRAPH) {
     graph.edges.push({
       source: {
-        node_id: isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
-        field: 'vae',
+        node_id: isAutoVae ? modelLoader : VAE_LOADER,
+        field: isAutoVae && isOnnxModel ? 'vae_decoder' : 'vae',
       },
       destination: {
         node_id: LATENTS_TO_IMAGE,
@@ -49,8 +51,8 @@ export const addVAEToGraph = (
   if (graph.id === IMAGE_TO_IMAGE_GRAPH) {
     graph.edges.push({
       source: {
-        node_id: isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
-        field: 'vae',
+        node_id: isAutoVae ? modelLoader : VAE_LOADER,
+        field: isAutoVae && isOnnxModel ? 'vae_decoder' : 'vae',
       },
       destination: {
         node_id: IMAGE_TO_LATENTS,
@@ -62,8 +64,8 @@ export const addVAEToGraph = (
   if (graph.id === INPAINT_GRAPH) {
     graph.edges.push({
       source: {
-        node_id: isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
-        field: 'vae',
+        node_id: isAutoVae ? modelLoader : VAE_LOADER,
+        field: isAutoVae && isOnnxModel ? 'vae_decoder' : 'vae',
       },
       destination: {
         node_id: INPAINT,
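
The VAE wiring above now picks the source field per loader: when the auto-VAE comes from the ONNX model loader the edge is drawn from `vae_decoder`, while a standalone VAE loader or a non-ONNX loader stays on `vae`. The field choice, isolated as a sketch:

```ts
// Pick the VAE source field for a graph edge, as the hunks above do.
const vaeSourceField = (isAutoVae: boolean, isOnnxModel: boolean): string =>
  isAutoVae && isOnnxModel ? 'vae_decoder' : 'vae';

console.log(vaeSourceField(true, true));  // 'vae_decoder'
console.log(vaeSourceField(true, false)); // 'vae'
```
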
@@ -12,6 +12,7 @@ import {
   CLIP_SKIP,
   LATENTS_TO_IMAGE,
   MAIN_MODEL_LOADER,
+  ONNX_MODEL_LOADER,
   METADATA_ACCUMULATOR,
   NEGATIVE_CONDITIONING,
   NOISE,
@@ -52,7 +53,8 @@ export const buildCanvasTextToImageGraph = (
   const use_cpu = shouldUseNoiseSettings
     ? shouldUseCpuNoise
     : initialGenerationState.shouldUseCpuNoise;
-
+  const onnx_model_type = model.model_type.includes('onnx');
+  const model_loader = onnx_model_type ? ONNX_MODEL_LOADER : MAIN_MODEL_LOADER;
   /**
    * The easiest way to build linear graphs is to do it in the node editor, then copy and paste the
    * full graph here as a template. Then use the parameters from app state and set friendlier node
@@ -63,17 +65,18 @@ export const buildCanvasTextToImageGraph = (
    */

   // copy-pasted graph from node editor, filled in with state values & friendly node ids
+  // TODO: Actually create the graph correctly for ONNX
   const graph: NonNullableGraph = {
     id: TEXT_TO_IMAGE_GRAPH,
     nodes: {
       [POSITIVE_CONDITIONING]: {
-        type: 'compel',
+        type: onnx_model_type ? 'prompt_onnx' : 'compel',
         id: POSITIVE_CONDITIONING,
         is_intermediate: true,
         prompt: positivePrompt,
       },
       [NEGATIVE_CONDITIONING]: {
-        type: 'compel',
+        type: onnx_model_type ? 'prompt_onnx' : 'compel',
         id: NEGATIVE_CONDITIONING,
         is_intermediate: true,
         prompt: negativePrompt,
@@ -87,16 +90,16 @@ export const buildCanvasTextToImageGraph = (
         use_cpu,
       },
       [TEXT_TO_LATENTS]: {
-        type: 't2l',
+        type: onnx_model_type ? 't2l_onnx' : 't2l',
         id: TEXT_TO_LATENTS,
         is_intermediate: true,
         cfg_scale,
         scheduler,
         steps,
       },
-      [MAIN_MODEL_LOADER]: {
-        type: 'main_model_loader',
-        id: MAIN_MODEL_LOADER,
+      [model_loader]: {
+        type: model_loader,
+        id: model_loader,
         is_intermediate: true,
         model,
       },
@@ -107,7 +110,7 @@ export const buildCanvasTextToImageGraph = (
         skipped_layers: clipSkip,
       },
       [LATENTS_TO_IMAGE]: {
-        type: 'l2i',
+        type: onnx_model_type ? 'l2i_onnx' : 'l2i',
         id: LATENTS_TO_IMAGE,
         is_intermediate: !shouldAutoSave,
       },
@@ -135,7 +138,7 @@ export const buildCanvasTextToImageGraph = (
       },
       {
         source: {
-          node_id: MAIN_MODEL_LOADER,
+          node_id: model_loader,
           field: 'clip',
         },
         destination: {
@@ -165,7 +168,7 @@ export const buildCanvasTextToImageGraph = (
       },
       {
         source: {
-          node_id: MAIN_MODEL_LOADER,
+          node_id: model_loader,
           field: 'unet',
         },
         destination: {
@@ -229,10 +232,10 @@ export const buildCanvasTextToImageGraph = (
   });

   // add LoRA support
-  addLoRAsToGraph(state, graph, TEXT_TO_LATENTS);
+  addLoRAsToGraph(state, graph, TEXT_TO_LATENTS, model_loader);

   // optionally add custom VAE
-  addVAEToGraph(state, graph);
+  addVAEToGraph(state, graph, model_loader);

   // add dynamic prompts - also sets up core iteration and seed
   addDynamicPromptsToGraph(state, graph);
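
The canvas builder chooses node types from the selected model: `prompt_onnx`/`t2l_onnx`/`l2i_onnx` when `model.model_type` contains "onnx", otherwise the usual `compel`/`t2l`/`l2i` nodes. The graph shape stays identical; only node types and the loader id change. The mapping, isolated:

```ts
// Map the SD node types used above to their ONNX counterparts.
const nodeType = (sdType: 'compel' | 't2l' | 'l2i', isOnnx: boolean): string =>
  isOnnx
    ? { compel: 'prompt_onnx', t2l: 't2l_onnx', l2i: 'l2i_onnx' }[sdType]
    : sdType;

console.log(nodeType('t2l', true));  // 't2l_onnx'
console.log(nodeType('l2i', false)); // 'l2i'
```
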
@@ -12,6 +12,7 @@ import {
   CLIP_SKIP,
   LATENTS_TO_IMAGE,
   MAIN_MODEL_LOADER,
+  ONNX_MODEL_LOADER,
   METADATA_ACCUMULATOR,
   NEGATIVE_CONDITIONING,
   NOISE,
@@ -48,6 +49,8 @@ export const buildLinearTextToImageGraph = (
     throw new Error('No model found in state');
   }

+  const onnx_model_type = model.model_type.includes('onnx');
+  const model_loader = onnx_model_type ? ONNX_MODEL_LOADER : MAIN_MODEL_LOADER;
   /**
    * The easiest way to build linear graphs is to do it in the node editor, then copy and paste the
    * full graph here as a template. Then use the parameters from app state and set friendlier node
@@ -58,12 +61,14 @@ export const buildLinearTextToImageGraph = (
    */

   // copy-pasted graph from node editor, filled in with state values & friendly node ids
+
+  // TODO: Actually create the graph correctly for ONNX
   const graph: NonNullableGraph = {
     id: TEXT_TO_IMAGE_GRAPH,
     nodes: {
-      [MAIN_MODEL_LOADER]: {
-        type: 'main_model_loader',
-        id: MAIN_MODEL_LOADER,
+      [model_loader]: {
+        type: model_loader,
+        id: model_loader,
         model,
       },
       [CLIP_SKIP]: {
@@ -72,12 +77,12 @@ export const buildLinearTextToImageGraph = (
         skipped_layers: clipSkip,
       },
       [POSITIVE_CONDITIONING]: {
-        type: 'compel',
+        type: onnx_model_type ? 'prompt_onnx' : 'compel',
         id: POSITIVE_CONDITIONING,
         prompt: positivePrompt,
       },
       [NEGATIVE_CONDITIONING]: {
-        type: 'compel',
+        type: onnx_model_type ? 'prompt_onnx' : 'compel',
         id: NEGATIVE_CONDITIONING,
         prompt: negativePrompt,
       },
@@ -89,14 +94,14 @@ export const buildLinearTextToImageGraph = (
         use_cpu,
       },
       [TEXT_TO_LATENTS]: {
-        type: 't2l',
+        type: onnx_model_type ? 't2l_onnx' : 't2l',
         id: TEXT_TO_LATENTS,
         cfg_scale,
         scheduler,
         steps,
       },
       [LATENTS_TO_IMAGE]: {
-        type: 'l2i',
+        type: onnx_model_type ? 'l2i_onnx' : 'l2i',
         id: LATENTS_TO_IMAGE,
         fp32: vaePrecision === 'fp32' ? true : false,
       },
@@ -104,7 +109,7 @@ export const buildLinearTextToImageGraph = (
   edges: [
     {
       source: {
-        node_id: MAIN_MODEL_LOADER,
+        node_id: model_loader,
         field: 'clip',
       },
       destination: {
@@ -114,7 +119,7 @@ export const buildLinearTextToImageGraph = (
     },
     {
       source: {
-        node_id: MAIN_MODEL_LOADER,
+        node_id: model_loader,
         field: 'unet',
       },
       destination: {
@@ -218,10 +223,10 @@ export const buildLinearTextToImageGraph = (
   });

   // add LoRA support
-  addLoRAsToGraph(state, graph, TEXT_TO_LATENTS);
+  addLoRAsToGraph(state, graph, TEXT_TO_LATENTS, model_loader);

   // optionally add custom VAE
-  addVAEToGraph(state, graph);
+  addVAEToGraph(state, graph, model_loader);

   // add dynamic prompts - also sets up core iteration and seed
   addDynamicPromptsToGraph(state, graph);
@@ -10,6 +10,7 @@ export const RANDOM_INT = 'rand_int';
 export const RANGE_OF_SIZE = 'range_of_size';
 export const ITERATE = 'iterate';
 export const MAIN_MODEL_LOADER = 'main_model_loader';
+export const ONNX_MODEL_LOADER = 'onnx_model_loader';
 export const VAE_LOADER = 'vae_loader';
 export const LORA_LOADER = 'lora_loader';
 export const CLIP_SKIP = 'clip_skip';
@@ -15,8 +15,11 @@ import { modelIdToMainModelParam } from 'features/parameters/util/modelIdToMainModelParam';
 import SyncModelsButton from 'features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModelsButton';
 import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
 import { forEach } from 'lodash-es';
+import {
+  useGetMainModelsQuery,
+  useGetOnnxModelsQuery,
+} from 'services/api/endpoints/models';
 import { NON_REFINER_BASE_MODELS } from 'services/api/constants';
-import { useGetMainModelsQuery } from 'services/api/endpoints/models';
 import { useFeatureStatus } from '../../../../system/hooks/useFeatureStatus';

 const selector = createSelector(
@@ -35,6 +38,9 @@ const ParamMainModelSelect = () => {
   const { data: mainModels, isLoading } = useGetMainModelsQuery(
     NON_REFINER_BASE_MODELS
   );
+  const { data: onnxModels, isLoading: onnxLoading } = useGetOnnxModelsQuery(
+    NON_REFINER_BASE_MODELS
+  );

   const activeTabName = useAppSelector(activeTabNameSelector);

@@ -59,17 +65,35 @@ const ParamMainModelSelect = () => {
         group: MODEL_TYPE_MAP[model.base_model],
       });
     });
+    forEach(onnxModels?.entities, (model, id) => {
+      if (
+        !model ||
+        activeTabName === 'unifiedCanvas' ||
+        activeTabName === 'img2img'
+      ) {
+        return;
+      }
+
+      data.push({
+        value: id,
+        label: model.model_name,
+        group: MODEL_TYPE_MAP[model.base_model],
+      });
+    });

     return data;
-  }, [mainModels, activeTabName]);
+  }, [mainModels, onnxModels, activeTabName]);

   // grab the full model entity from the RTK Query cache
   // TODO: maybe we should just store the full model entity in state?
   const selectedModel = useMemo(
     () =>
-      mainModels?.entities[`${model?.base_model}/main/${model?.model_name}`] ??
+      (mainModels?.entities[`${model?.base_model}/main/${model?.model_name}`] ||
+        onnxModels?.entities[
+          `${model?.base_model}/onnx/${model?.model_name}`
+        ]) ??
       null,
-    [mainModels?.entities, model]
+    [mainModels?.entities, model, onnxModels?.entities]
   );

   const handleChangeModel = useCallback(
@@ -89,7 +113,7 @@ const ParamMainModelSelect = () => {
     [dispatch]
   );

-  return isLoading ? (
+  return isLoading || onnxLoading ? (
     <IAIMantineSearchableSelect
       label={t('modelManager.model')}
       placeholder="Loading..."
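
Note that ONNX entries are merged into the main-model select but skipped on the unifiedCanvas and img2img tabs, presumably because those paths are not wired for ONNX yet. The gating rule, as a one-liner sketch:

```ts
// ONNX entries are hidden on tabs whose graphs don't accept ONNX models.
const shouldListOnnxModel = (activeTabName: string): boolean =>
  activeTabName !== 'unifiedCanvas' && activeTabName !== 'img2img';
```
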
@@ -1,5 +1,15 @@
 import { useAppToaster } from 'app/components/Toaster';
 import { useAppDispatch } from 'app/store/storeHooks';
+import {
+  refinerModelChanged,
+  setNegativeStylePromptSDXL,
+  setPositiveStylePromptSDXL,
+  setRefinerAestheticScore,
+  setRefinerCFGScale,
+  setRefinerScheduler,
+  setRefinerStart,
+  setRefinerSteps,
+} from 'features/sdxl/store/sdxlSlice';
 import { useCallback } from 'react';
 import { useTranslation } from 'react-i18next';
 import { UnsafeImageMetadata } from 'services/api/endpoints/images';
@@ -22,6 +32,10 @@ import {
   isValidMainModel,
   isValidNegativePrompt,
   isValidPositivePrompt,
+  isValidSDXLNegativeStylePrompt,
+  isValidSDXLPositiveStylePrompt,
+  isValidSDXLRefinerAestheticScore,
+  isValidSDXLRefinerStart,
   isValidScheduler,
   isValidSeed,
   isValidSteps,
@@ -74,17 +88,34 @@ export const useRecallParameters = () => {
    * Recall both prompts with toast
    */
   const recallBothPrompts = useCallback(
-    (positivePrompt: unknown, negativePrompt: unknown) => {
+    (
+      positivePrompt: unknown,
+      negativePrompt: unknown,
+      positiveStylePrompt: unknown,
+      negativeStylePrompt: unknown
+    ) => {
       if (
         isValidPositivePrompt(positivePrompt) ||
-        isValidNegativePrompt(negativePrompt)
+        isValidNegativePrompt(negativePrompt) ||
+        isValidSDXLPositiveStylePrompt(positiveStylePrompt) ||
+        isValidSDXLNegativeStylePrompt(negativeStylePrompt)
       ) {
         if (isValidPositivePrompt(positivePrompt)) {
           dispatch(setPositivePrompt(positivePrompt));
         }

         if (isValidNegativePrompt(negativePrompt)) {
           dispatch(setNegativePrompt(negativePrompt));
         }

+        if (isValidSDXLPositiveStylePrompt(positiveStylePrompt)) {
+          dispatch(setPositiveStylePromptSDXL(positiveStylePrompt));
+        }
+
+        if (isValidSDXLPositiveStylePrompt(negativeStylePrompt)) {
+          dispatch(setNegativeStylePromptSDXL(negativeStylePrompt));
+        }
+
         parameterSetToast();
         return;
       }
@@ -123,6 +154,36 @@ export const useRecallParameters = () => {
     [dispatch, parameterSetToast, parameterNotSetToast]
   );

+  /**
+   * Recall SDXL Positive Style Prompt with toast
+   */
+  const recallSDXLPositiveStylePrompt = useCallback(
+    (positiveStylePrompt: unknown) => {
+      if (!isValidSDXLPositiveStylePrompt(positiveStylePrompt)) {
+        parameterNotSetToast();
+        return;
+      }
+      dispatch(setPositiveStylePromptSDXL(positiveStylePrompt));
+      parameterSetToast();
+    },
+    [dispatch, parameterSetToast, parameterNotSetToast]
+  );
+
+  /**
+   * Recall SDXL Negative Style Prompt with toast
+   */
+  const recallSDXLNegativeStylePrompt = useCallback(
+    (negativeStylePrompt: unknown) => {
+      if (!isValidSDXLNegativeStylePrompt(negativeStylePrompt)) {
+        parameterNotSetToast();
+        return;
+      }
+      dispatch(setNegativeStylePromptSDXL(negativeStylePrompt));
+      parameterSetToast();
+    },
+    [dispatch, parameterSetToast, parameterNotSetToast]
+  );
+
   /**
    * Recall seed with toast
    */
@@ -271,6 +332,14 @@ export const useRecallParameters = () => {
       steps,
       width,
       strength,
+      positive_style_prompt,
+      negative_style_prompt,
+      refiner_model,
+      refiner_cfg_scale,
+      refiner_steps,
+      refiner_scheduler,
+      refiner_aesthetic_store,
+      refiner_start,
     } = metadata;

     if (isValidCfgScale(cfg_scale)) {
@@ -304,6 +373,38 @@ export const useRecallParameters = () => {
       dispatch(setImg2imgStrength(strength));
     }

+    if (isValidSDXLPositiveStylePrompt(positive_style_prompt)) {
+      dispatch(setPositiveStylePromptSDXL(positive_style_prompt));
+    }
+
+    if (isValidSDXLNegativeStylePrompt(negative_style_prompt)) {
+      dispatch(setNegativeStylePromptSDXL(negative_style_prompt));
+    }
+
+    if (isValidMainModel(refiner_model)) {
+      dispatch(refinerModelChanged(refiner_model));
+    }
+
+    if (isValidSteps(refiner_steps)) {
+      dispatch(setRefinerSteps(refiner_steps));
+    }
+
+    if (isValidCfgScale(refiner_cfg_scale)) {
+      dispatch(setRefinerCFGScale(refiner_cfg_scale));
+    }
+
+    if (isValidScheduler(refiner_scheduler)) {
+      dispatch(setRefinerScheduler(refiner_scheduler));
+    }
+
+    if (isValidSDXLRefinerAestheticScore(refiner_aesthetic_store)) {
+      dispatch(setRefinerAestheticScore(refiner_aesthetic_store));
+    }
+
+    if (isValidSDXLRefinerStart(refiner_start)) {
+      dispatch(setRefinerStart(refiner_start));
+    }
+
     allParameterSetToast();
   },
   [allParameterNotSetToast, allParameterSetToast, dispatch]
@@ -313,6 +414,8 @@ export const useRecallParameters = () => {
     recallBothPrompts,
     recallPositivePrompt,
     recallNegativePrompt,
+    recallSDXLPositiveStylePrompt,
+    recallSDXLNegativeStylePrompt,
     recallSeed,
     recallCfgScale,
     recallModel,
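
`recallBothPrompts` grows from two arguments to four, and each value is validated independently, so callers can pass metadata fields that may be undefined. A sketch of the call shape with a no-op stand-in for the hook's callback (the real one dispatches redux actions behind the validity guards above):

```ts
// Stand-in: log any string-valued prompt instead of dispatching to redux.
function recallBothPrompts(
  positive: unknown,
  negative: unknown,
  positiveStyle: unknown,
  negativeStyle: unknown
): void {
  for (const [name, value] of Object.entries({ positive, negative, positiveStyle, negativeStyle })) {
    if (typeof value === 'string') {
      console.log(`recalling ${name} prompt:`, value);
    }
  }
}

// Metadata fields are optional; absent ones simply fail validation.
const metadata: Record<string, unknown> = { positive_prompt: 'a cat' };
recallBothPrompts(
  metadata.positive_prompt,
  metadata.negative_prompt,
  metadata.positive_style_prompt,
  metadata.negative_style_prompt
);
```
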
@@ -1,10 +1,10 @@
 import { createAction } from '@reduxjs/toolkit';
-import { ImageDTO, MainModelField } from 'services/api/types';
+import { ImageDTO, MainModelField, OnnxModelField } from 'services/api/types';

 export const initialImageSelected = createAction<ImageDTO | undefined>(
   'generation/initialImageSelected'
 );

-export const modelSelected = createAction<MainModelField>(
+export const modelSelected = createAction<MainModelField | OnnxModelField>(
   'generation/modelSelected'
 );
@@ -3,7 +3,7 @@ import { createSlice } from '@reduxjs/toolkit';
 import { roundToMultiple } from 'common/util/roundDownToMultiple';
 import { configChanged } from 'features/system/store/configSlice';
 import { clamp } from 'lodash-es';
-import { ImageDTO, MainModelField } from 'services/api/types';
+import { ImageDTO, MainModelField, OnnxModelField } from 'services/api/types';
 import { clipSkipMap } from '../types/constants';
 import {
   CfgScaleParam,
@@ -50,7 +50,7 @@ export interface GenerationState {
   shouldUseSymmetry: boolean;
   horizontalSymmetrySteps: number;
   verticalSymmetrySteps: number;
-  model: MainModelField | null;
+  model: MainModelField | OnnxModelField | null;
   vae: VaeModelParam | null;
   vaePrecision: PrecisionParam;
   seamlessXAxis: boolean;
@@ -272,11 +272,12 @@ export const generationSlice = createSlice({
       const defaultModel = action.payload.sd?.defaultModel;

       if (defaultModel && !state.model) {
-        const [base_model, _model_type, model_name] = defaultModel.split('/');
+        const [base_model, model_type, model_name] = defaultModel.split('/');

         const result = zMainModel.safeParse({
           model_name,
           base_model,
+          model_type,
         });

         if (result.success) {
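
The default-model config string is now split into all three segments, and `model_type` is fed into the `zMainModel` parse rather than being discarded. A quick worked example of the split (the model id here is illustrative):

```ts
// Parse a "base/type/name" model id into its parts.
const defaultModel = 'sd-1/main/stable-diffusion-v1-5';
const [base_model, model_type, model_name] = defaultModel.split('/');
console.log({ base_model, model_type, model_name });
// { base_model: 'sd-1', model_type: 'main', model_name: 'stable-diffusion-v1-5' }
```
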
@@ -210,6 +210,14 @@ export type HeightParam = z.infer<typeof zHeight>;
 export const isValidHeight = (val: unknown): val is HeightParam =>
   zHeight.safeParse(val).success;

+const zModelType = z.enum([
+  'vae',
+  'lora',
+  'onnx',
+  'main',
+  'controlnet',
+  'embedding',
+]);
 const zBaseModel = z.enum(['sd-1', 'sd-2', 'sdxl', 'sdxl-refiner']);

 export type BaseModelParam = z.infer<typeof zBaseModel>;
@@ -221,12 +229,18 @@ export type BaseModelParam = z.infer<typeof zBaseModel>;
 export const zMainModel = z.object({
   model_name: z.string().min(1),
   base_model: zBaseModel,
+  model_type: zModelType,
 });

 /**
  * Type alias for model parameter, inferred from its zod schema
  */
 export type MainModelParam = z.infer<typeof zMainModel>;

+/**
+ * Type alias for model parameter, inferred from its zod schema
+ */
+export type OnnxModelParam = z.infer<typeof zMainModel>;
 /**
  * Validates/type-guards a value as a model parameter
  */
@@ -310,6 +324,39 @@ export type PrecisionParam = z.infer<typeof zPrecision>;
 export const isValidPrecision = (val: unknown): val is PrecisionParam =>
   zPrecision.safeParse(val).success;

+/**
+ * Zod schema for SDXL refiner aesthetic score parameter
+ */
+export const zSDXLRefinerAestheticScore = z.number().min(1).max(10);
+/**
+ * Type alias for SDXL refiner aesthetic score parameter, inferred from its zod schema
+ */
+export type SDXLRefinerAestheticScoreParam = z.infer<
+  typeof zSDXLRefinerAestheticScore
+>;
+/**
+ * Validates/type-guards a value as a SDXL refiner aesthetic score parameter
+ */
+export const isValidSDXLRefinerAestheticScore = (
+  val: unknown
+): val is SDXLRefinerAestheticScoreParam =>
+  zSDXLRefinerAestheticScore.safeParse(val).success;
+
+/**
+ * Zod schema for SDXL start parameter
+ */
+export const zSDXLRefinerstart = z.number().min(0).max(1);
+/**
+ * Type alias for SDXL start, inferred from its zod schema
+ */
+export type SDXLRefinerStartParam = z.infer<typeof zSDXLRefinerstart>;
+/**
+ * Validates/type-guards a value as a SDXL refiner aesthetic score parameter
+ */
+export const isValidSDXLRefinerStart = (
+  val: unknown
+): val is SDXLRefinerStartParam => zSDXLRefinerstart.safeParse(val).success;
+
 // /**
 //  * Zod schema for BaseModelType
 //  */
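
With `model_type` now part of `zMainModel`, a parse fails unless all three fields are present and the type is one of the enumerated values. A self-contained sketch mirroring the schema above (zod is already a dependency of this frontend):

```ts
import { z } from 'zod';

// Mirror of the schema above, for illustration only.
const zModelType = z.enum(['vae', 'lora', 'onnx', 'main', 'controlnet', 'embedding']);
const zBaseModel = z.enum(['sd-1', 'sd-2', 'sdxl', 'sdxl-refiner']);
const zMainModel = z.object({
  model_name: z.string().min(1),
  base_model: zBaseModel,
  model_type: zModelType,
});

// model_type missing: parse fails.
console.log(zMainModel.safeParse({ model_name: 'x', base_model: 'sd-1' }).success); // false
// All three fields present and valid: parse succeeds.
console.log(
  zMainModel.safeParse({ model_name: 'x', base_model: 'sd-1', model_type: 'onnx' }).success
); // true
```
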
@@ -8,11 +8,12 @@ export const modelIdToMainModelParam = (
   mainModelId: string
 ): MainModelParam | undefined => {
   const log = logger('models');
-  const [base_model, _model_type, model_name] = mainModelId.split('/');
+  const [base_model, model_type, model_name] = mainModelId.split('/');

   const result = zMainModel.safeParse({
     base_model,
     model_name,
+    model_type,
   });

   if (!result.success) {
@@ -21,8 +21,8 @@ export default function ParamSDXLConcatButton() {

   return (
     <IAIIconButton
-      aria-label="Concat"
-      tooltip="Concatenates Basic Prompt with Style (Recommended)"
+      aria-label="Concatenate Prompt & Style"
+      tooltip="Concatenate Prompt & Style"
       variant="outline"
       isChecked={shouldConcatSDXLStylePrompt}
       onClick={handleShouldConcatPromptChange}
@@ -8,7 +8,9 @@ import { useCallback, useState } from 'react';
 import { useTranslation } from 'react-i18next';
 import {
   MainModelConfigEntity,
+  OnnxModelConfigEntity,
   useGetMainModelsQuery,
+  useGetOnnxModelsQuery,
   useGetLoRAModelsQuery,
   LoRAModelConfigEntity,
 } from 'services/api/endpoints/models';
@@ -20,9 +22,9 @@ type ModelListProps = {
   setSelectedModelId: (name: string | undefined) => void;
 };

-type ModelFormat = 'images' | 'checkpoint' | 'diffusers';
+type ModelFormat = 'images' | 'checkpoint' | 'diffusers' | 'olive' | 'onnx';

-type ModelType = 'main' | 'lora';
+type ModelType = 'main' | 'lora' | 'onnx';

 type CombinedModelFormat = ModelFormat | 'lora';

@@ -61,6 +63,18 @@ const ModelList = (props: ModelListProps) => {
     }),
   });

+  const { filteredOnnxModels } = useGetOnnxModelsQuery(ALL_BASE_MODELS, {
+    selectFromResult: ({ data }) => ({
+      filteredOnnxModels: modelsFilter(data, 'onnx', 'onnx', nameFilter),
+    }),
+  });
+
+  const { filteredOliveModels } = useGetOnnxModelsQuery(ALL_BASE_MODELS, {
+    selectFromResult: ({ data }) => ({
+      filteredOliveModels: modelsFilter(data, 'onnx', 'olive', nameFilter),
+    }),
+  });
+
   const handleSearchFilter = useCallback((e: ChangeEvent<HTMLInputElement>) => {
     setNameFilter(e.target.value);
   }, []);
@@ -85,10 +99,17 @@ const ModelList = (props: ModelListProps) => {
         </IAIButton>
         <IAIButton
           size="sm"
-          onClick={() => setModelFormatFilter('checkpoint')}
-          isChecked={modelFormatFilter === 'checkpoint'}
+          onClick={() => setModelFormatFilter('onnx')}
+          isChecked={modelFormatFilter === 'onnx'}
         >
-          {t('modelManager.checkpointModels')}
+          {t('modelManager.onnxModels')}
+        </IAIButton>
+        <IAIButton
+          size="sm"
+          onClick={() => setModelFormatFilter('olive')}
+          isChecked={modelFormatFilter === 'olive'}
+        >
+          {t('modelManager.oliveModels')}
         </IAIButton>
         <IAIButton
           size="sm"
@@ -147,6 +168,42 @@ const ModelList = (props: ModelListProps) => {
           </Flex>
         </StyledModelContainer>
       )}
+      {['images', 'olive'].includes(modelFormatFilter) &&
+        filteredOliveModels.length > 0 && (
+          <StyledModelContainer>
+            <Flex sx={{ gap: 2, flexDir: 'column' }}>
+              <Text variant="subtext" fontSize="sm">
+                Olives
+              </Text>
+              {filteredOliveModels.map((model) => (
+                <ModelListItem
+                  key={model.id}
+                  model={model}
+                  isSelected={selectedModelId === model.id}
+                  setSelectedModelId={setSelectedModelId}
+                />
+              ))}
+            </Flex>
+          </StyledModelContainer>
+        )}
+      {['images', 'onnx'].includes(modelFormatFilter) &&
+        filteredOnnxModels.length > 0 && (
+          <StyledModelContainer>
+            <Flex sx={{ gap: 2, flexDir: 'column' }}>
+              <Text variant="subtext" fontSize="sm">
+                Onnx
+              </Text>
+              {filteredOnnxModels.map((model) => (
+                <ModelListItem
+                  key={model.id}
+                  model={model}
+                  isSelected={selectedModelId === model.id}
+                  setSelectedModelId={setSelectedModelId}
+                />
+              ))}
+            </Flex>
+          </StyledModelContainer>
+        )}
       {['images', 'lora'].includes(modelFormatFilter) &&
         filteredLoraModels.length > 0 && (
           <StyledModelContainer>
@@ -173,7 +230,12 @@ const ModelList = (props: ModelListProps) => {

 export default ModelList;

-const modelsFilter = <T extends MainModelConfigEntity | LoRAModelConfigEntity>(
+const modelsFilter = <
+  T extends
+    | MainModelConfigEntity
+    | LoRAModelConfigEntity
+    | OnnxModelConfigEntity
+>(
   data: EntityState<T> | undefined,
   model_type: ModelType,
   model_format: ModelFormat | undefined,
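
ONNX and Olive entries share `model_type: 'onnx'` and are told apart by `model_format`, which is why the two `useGetOnnxModelsQuery` selectors above differ only in the format argument. A plausible shape for that filter over a plain array (the real helper works over an RTK `EntityState`; the entity type here is a stand-in):

```ts
// Stand-in entity type: only the fields the filter needs.
type ModelEntity = { model_name: string; model_type: string; model_format?: string };

// Keep entities matching type, optional format, and a case-insensitive name filter.
const modelsFilter = (
  models: ModelEntity[],
  modelType: string,
  modelFormat: string | undefined,
  nameFilter: string
): ModelEntity[] =>
  models.filter(
    (m) =>
      m.model_type === modelType &&
      (modelFormat === undefined || m.model_format === modelFormat) &&
      m.model_name.toLowerCase().includes(nameFilter.toLowerCase())
  );
```
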
@@ -6,3 +6,4 @@ export { default as ParamMainModelSelect } from './features/parameters/components/MainModel/ParamMainModelSelect';
 export { default as InvokeAiLogoComponent } from './features/system/components/InvokeAILogoComponent';
 export { default as SettingsModal } from './features/system/components/SettingsModal/SettingsModal';
 export { default as StatusIndicator } from './features/system/components/StatusIndicator';
+export { theme as theme } from './theme/theme';
@@ -10,9 +10,11 @@ import {
   ImportModelConfig,
   LoRAModelConfig,
   MainModelConfig,
+  OnnxModelConfig,
   MergeModelConfig,
   TextualInversionModelConfig,
   VaeModelConfig,
+  ModelType,
 } from 'services/api/types';

 import queryString from 'query-string';
@@ -27,6 +29,8 @@ export type MainModelConfigEntity =
   | DiffusersModelConfigEntity
   | CheckpointModelConfigEntity;

+export type OnnxModelConfigEntity = OnnxModelConfig & { id: string };
+
 export type LoRAModelConfigEntity = LoRAModelConfig & { id: string };

 export type ControlNetModelConfigEntity = ControlNetModelConfig & {
@@ -41,6 +45,7 @@ export type VaeModelConfigEntity = VaeModelConfig & { id: string };

 type AnyModelConfigEntity =
   | MainModelConfigEntity
+  | OnnxModelConfigEntity
   | LoRAModelConfigEntity
   | ControlNetModelConfigEntity
   | TextualInversionModelConfigEntity
@@ -66,6 +71,7 @@ type UpdateLoRAModelResponse = UpdateMainModelResponse;
 type DeleteMainModelArg = {
   base_model: BaseModelType;
   model_name: string;
+  model_type: ModelType;
 };

 type DeleteMainModelResponse = void;
@@ -119,6 +125,10 @@ type SearchFolderArg = operations['search_for_models']['parameters']['query'];
 const mainModelsAdapter = createEntityAdapter<MainModelConfigEntity>({
   sortComparer: (a, b) => a.model_name.localeCompare(b.model_name),
 });

+const onnxModelsAdapter = createEntityAdapter<OnnxModelConfigEntity>({
+  sortComparer: (a, b) => a.model_name.localeCompare(b.model_name),
+});
 const loraModelsAdapter = createEntityAdapter<LoRAModelConfigEntity>({
   sortComparer: (a, b) => a.model_name.localeCompare(b.model_name),
 });
@@ -156,6 +166,49 @@ const createModelEntities = <T extends AnyModelConfigEntity>(

 export const modelsApi = api.injectEndpoints({
   endpoints: (build) => ({
+    getOnnxModels: build.query<
+      EntityState<OnnxModelConfigEntity>,
+      BaseModelType[]
+    >({
+      query: (base_models) => {
+        const params = {
+          model_type: 'onnx',
+          base_models,
+        };
+
+        const query = queryString.stringify(params, { arrayFormat: 'none' });
+        return `models/?${query}`;
+      },
+      providesTags: (result, error, arg) => {
+        const tags: ApiFullTagDescription[] = [
+          { id: 'OnnxModel', type: LIST_TAG },
+        ];
+
+        if (result) {
+          tags.push(
+            ...result.ids.map((id) => ({
+              type: 'OnnxModel' as const,
+              id,
+            }))
+          );
+        }
+
+        return tags;
+      },
+      transformResponse: (
+        response: { models: OnnxModelConfig[] },
+        meta,
+        arg
+      ) => {
+        const entities = createModelEntities<OnnxModelConfigEntity>(
+          response.models
+        );
+        return onnxModelsAdapter.setAll(
+          onnxModelsAdapter.getInitialState(),
+          entities
+        );
+      },
+    }),
     getMainModels: build.query<
       EntityState<MainModelConfigEntity>,
       BaseModelType[]
@@ -248,9 +301,9 @@ export const modelsApi = api.injectEndpoints({
       DeleteMainModelResponse,
       DeleteMainModelArg
     >({
-      query: ({ base_model, model_name }) => {
+      query: ({ base_model, model_name, model_type }) => {
         return {
-          url: `models/${base_model}/main/${model_name}`,
+          url: `models/${base_model}/${model_type}/${model_name}`,
           method: 'DELETE',
         };
       },
@@ -494,6 +547,7 @@ export const modelsApi = api.injectEndpoints({

 export const {
   useGetMainModelsQuery,
+  useGetOnnxModelsQuery,
   useGetControlNetModelsQuery,
   useGetLoRAModelsQuery,
   useGetTextualInversionModelsQuery,
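
The new `getOnnxModels` endpoint reuses the generic `models/` listing with `model_type=onnx`; query-string's `arrayFormat: 'none'` serializes the base-model array as repeated keys, e.g. `models/?model_type=onnx&base_models=sd-1&base_models=sd-2`. A dependency-free sketch of the same URL construction:

```ts
// Build the listing URL the way the endpoint above does: model_type=onnx plus
// one repeated base_models param per entry.
function onnxModelsUrl(baseModels: string[]): string {
  const params = new URLSearchParams([
    ['model_type', 'onnx'],
    ...baseModels.map((b): [string, string] => ['base_models', b]),
  ]);
  return `models/?${params.toString()}`;
}

console.log(onnxModelsUrl(['sd-1', 'sd-2']));
// models/?model_type=onnx&base_models=sd-1&base_models=sd-2
```
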
379
invokeai/frontend/web/src/services/api/schema.d.ts
vendored
379
invokeai/frontend/web/src/services/api/schema.d.ts
vendored
@ -1381,7 +1381,7 @@ export type components = {
|
||||
* @description The nodes in this graph
|
||||
*/
|
||||
nodes?: {
|
||||
[key: string]: (components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | 
components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]) | undefined;
|
||||
[key: string]: (components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ParamPromptInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["ONNXPromptInvocation"] | components["schemas"]["ONNXTextToLatentsInvocation"] | components["schemas"]["ONNXLatentsToImageInvocation"] | components["schemas"]["ONNXSD1ModelLoaderInvocation"] | components["schemas"]["OnnxModelLoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | 
components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]) | undefined;
|
||||
};
|
||||
/**
|
||||
* Edges
|
||||
@ -1424,7 +1424,7 @@ export type components = {
|
||||
* @description The results of node executions
|
||||
*/
|
||||
results: {
|
||||
[key: string]: (components["schemas"]["ImageOutput"] | components["schemas"]["MaskOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["CompelOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["IntOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["IntCollectionOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["PromptOutput"] | components["schemas"]["PromptCollectionOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]) | undefined;
[key: string]: (components["schemas"]["ImageOutput"] | components["schemas"]["MaskOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["PromptOutput"] | components["schemas"]["PromptCollectionOutput"] | components["schemas"]["IntOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["CompelOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["IntCollectionOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["ONNXModelLoaderOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]) | undefined;
};
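The hunk above reorders the members of the `results` union and adds `ONNXModelLoaderOutput`; functionally the map is unchanged: it is keyed by node id, and each value is one of the output types or `undefined`. A minimal sketch of consuming it with a discriminant check — the types here are hand-written stand-ins for the generated ones, and the `"image_output"` discriminant is an assumption about the wider union:

```typescript
// Hand-written stand-ins for the generated schema types (illustrative only).
type ImageOutput = { type: "image_output"; image: { image_name: string } };
type LatentsOutput = { type: "latents_output"; latents: { latents_name: string } };
type AnyResult = ImageOutput | LatentsOutput; // the real union is much wider

type SessionResults = { [key: string]: AnyResult | undefined };

// Collect the image names a finished session produced.
function imageNames(results: SessionResults): string[] {
  const names: string[] = [];
  for (const output of Object.values(results)) {
    if (output?.type === "image_output") {
      names.push(output.image.image_name); // narrowed to ImageOutput here
    }
  }
  return names;
}
```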
/**
* Errors
@@ -2562,10 +2562,10 @@ export type components = {
/**
* Infill Method
* @description The method used to infill empty regions (px)
* @default patchmatch
* @default tile
* @enum {string}
*/
infill_method?: "patchmatch" | "tile" | "solid";
infill_method?: "tile" | "solid";
/**
* Inpaint Width
* @description The width of the inpaint region (px)
@@ -2752,7 +2752,7 @@ export type components = {
vae?: components["schemas"]["VaeField"];
/**
* Tiled
* @description Decode latents by overlapping tiles(less memory consumption)
* @description Decode latents by overlaping tiles (less memory consumption)
* @default false
*/
tiled?: boolean;
@@ -3173,6 +3173,8 @@ export type components = {
model_name: string;
/** @description Base model */
base_model: components["schemas"]["BaseModelType"];
/** @description Model Type */
model_type: components["schemas"]["ModelType"];
};
/**
* MainModelLoaderInvocation
@@ -3618,7 +3620,7 @@ export type components = {
* @description An enumeration.
* @enum {string}
*/
ModelType: "main" | "vae" | "lora" | "controlnet" | "embedding";
ModelType: "onnx" | "main" | "vae" | "lora" | "controlnet" | "embedding";
/**
* ModelVariantType
* @description An enumeration.
@@ -3628,7 +3630,7 @@ export type components = {
/** ModelsList */
ModelsList: {
/** Models */
models: (components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"])[];
models: (components["schemas"]["ONNXStableDiffusion1ModelConfig"] | components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["ONNXStableDiffusion2ModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"])[];
};
/**
* MultiplyInvocation
@@ -3778,6 +3780,261 @@ export type components = {
*/
image_resolution?: number;
};
/**
* ONNXLatentsToImageInvocation
* @description Generates an image from latents.
*/
ONNXLatentsToImageInvocation: {
/**
* Id
* @description The id of this node. Must be unique among all nodes.
*/
id: string;
/**
* Is Intermediate
* @description Whether or not this node is an intermediate node.
* @default false
*/
is_intermediate?: boolean;
/**
* Type
* @default l2i_onnx
* @enum {string}
*/
type?: "l2i_onnx";
/**
* Latents
* @description The latents to generate an image from
*/
latents?: components["schemas"]["LatentsField"];
/**
* Vae
* @description Vae submodel
*/
vae?: components["schemas"]["VaeField"];
/**
* Metadata
* @description Optional core metadata to be written to the image
*/
metadata?: components["schemas"]["CoreMetadata"];
};
/**
* ONNXModelLoaderOutput
* @description Model loader output
*/
ONNXModelLoaderOutput: {
/**
* Type
* @default model_loader_output_onnx
* @enum {string}
*/
type?: "model_loader_output_onnx";
/**
* Unet
* @description UNet submodel
*/
unet?: components["schemas"]["UNetField"];
/**
* Clip
* @description Tokenizer and text_encoder submodels
*/
clip?: components["schemas"]["ClipField"];
/**
* Vae Decoder
* @description Vae submodel
*/
vae_decoder?: components["schemas"]["VaeField"];
/**
* Vae Encoder
* @description Vae submodel
*/
vae_encoder?: components["schemas"]["VaeField"];
};
/**
* ONNXPromptInvocation
* @description A node to process inputs and produce outputs.
* May use dependency injection in __init__ to receive providers.
*/
ONNXPromptInvocation: {
/**
* Id
* @description The id of this node. Must be unique among all nodes.
*/
id: string;
/**
* Is Intermediate
* @description Whether or not this node is an intermediate node.
* @default false
*/
is_intermediate?: boolean;
/**
* Type
* @default prompt_onnx
* @enum {string}
*/
type?: "prompt_onnx";
/**
* Prompt
* @description Prompt
* @default
*/
prompt?: string;
/**
* Clip
* @description Clip to use
*/
clip?: components["schemas"]["ClipField"];
};
/**
* ONNXSD1ModelLoaderInvocation
* @description Loading submodels of selected model.
*/
ONNXSD1ModelLoaderInvocation: {
/**
* Id
* @description The id of this node. Must be unique among all nodes.
*/
id: string;
/**
* Is Intermediate
* @description Whether or not this node is an intermediate node.
* @default false
*/
is_intermediate?: boolean;
/**
* Type
* @default sd1_model_loader_onnx
* @enum {string}
*/
type?: "sd1_model_loader_onnx";
/**
* Model Name
* @description Model to load
* @default
*/
model_name?: string;
};
/** ONNXStableDiffusion1ModelConfig */
ONNXStableDiffusion1ModelConfig: {
/** Model Name */
model_name: string;
base_model: components["schemas"]["BaseModelType"];
/**
* Model Type
* @enum {string}
*/
model_type: "onnx";
/** Path */
path: string;
/** Description */
description?: string;
/**
* Model Format
* @enum {string}
*/
model_format: "onnx";
error?: components["schemas"]["ModelError"];
variant: components["schemas"]["ModelVariantType"];
};
/** ONNXStableDiffusion2ModelConfig */
ONNXStableDiffusion2ModelConfig: {
/** Model Name */
model_name: string;
base_model: components["schemas"]["BaseModelType"];
/**
* Model Type
* @enum {string}
*/
model_type: "onnx";
/** Path */
path: string;
/** Description */
description?: string;
/**
* Model Format
* @enum {string}
*/
model_format: "onnx";
error?: components["schemas"]["ModelError"];
variant: components["schemas"]["ModelVariantType"];
prediction_type: components["schemas"]["SchedulerPredictionType"];
/** Upcast Attention */
upcast_attention: boolean;
};
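Both ONNX config records pin `model_type` and `model_format` to the literal `"onnx"`, which makes narrowing straightforward. A sketch using trimmed, hand-written mirrors of the records (only the fields needed for the guard):

```typescript
// Trimmed mirrors of the two ONNX config records above (illustrative only).
interface OnnxSD1Config {
  model_name: string;
  model_type: "onnx";
  model_format: "onnx";
  path: string;
}
interface OnnxSD2Config extends OnnxSD1Config {
  upcast_attention: boolean;
}
interface MainDiffusersConfig {
  model_name: string;
  model_type: "main";
  model_format: "diffusers";
  path: string;
}
type SomeConfig = OnnxSD1Config | OnnxSD2Config | MainDiffusersConfig;

// model_type === "onnx" is enough: only the ONNX records carry that literal.
function isOnnxConfig(c: SomeConfig): c is OnnxSD1Config | OnnxSD2Config {
  return c.model_type === "onnx";
}
```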
/**
* ONNXTextToLatentsInvocation
* @description Generates latents from conditionings.
*/
ONNXTextToLatentsInvocation: {
/**
* Id
* @description The id of this node. Must be unique among all nodes.
*/
id: string;
/**
* Is Intermediate
* @description Whether or not this node is an intermediate node.
* @default false
*/
is_intermediate?: boolean;
/**
* Type
* @default t2l_onnx
* @enum {string}
*/
type?: "t2l_onnx";
/**
* Positive Conditioning
* @description Positive conditioning for generation
*/
positive_conditioning?: components["schemas"]["ConditioningField"];
/**
* Negative Conditioning
* @description Negative conditioning for generation
*/
negative_conditioning?: components["schemas"]["ConditioningField"];
/**
* Noise
* @description The noise to use
*/
noise?: components["schemas"]["LatentsField"];
/**
* Steps
* @description The number of steps to use to generate the image
* @default 10
*/
steps?: number;
/**
* Cfg Scale
* @description The Classifier-Free Guidance, higher values may result in a result closer to the prompt
* @default 7.5
*/
cfg_scale?: number | (number)[];
/**
* Scheduler
* @description The scheduler to use
* @default euler
* @enum {string}
*/
scheduler?: "ddim" | "ddpm" | "deis" | "lms" | "lms_k" | "pndm" | "heun" | "heun_k" | "euler" | "euler_k" | "euler_a" | "kdpm_2" | "kdpm_2_a" | "dpmpp_2s" | "dpmpp_2s_k" | "dpmpp_2m" | "dpmpp_2m_k" | "dpmpp_2m_sde" | "dpmpp_2m_sde_k" | "dpmpp_sde" | "dpmpp_sde_k" | "unipc";
/**
* Precision
* @description The precision to use when generating latents
* @default tensor(float16)
* @enum {string}
*/
precision?: "tensor(bool)" | "tensor(int8)" | "tensor(uint8)" | "tensor(int16)" | "tensor(uint16)" | "tensor(int32)" | "tensor(uint32)" | "tensor(int64)" | "tensor(uint64)" | "tensor(float16)" | "tensor(float)" | "tensor(double)";
/**
* Unet
* @description UNet submodel
*/
unet?: components["schemas"]["UNetField"];
/**
* Control
* @description The control to use
*/
control?: components["schemas"]["ControlField"] | (components["schemas"]["ControlField"])[];
};
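Together, `prompt_onnx`, `t2l_onnx`, and `l2i_onnx` mirror the usual text-to-image pipeline: encode the prompt, denoise noise into latents, then decode latents to an image. A hypothetical sketch of the node payloads a client might put in a graph — the ids are made up, and the edges that carry conditioning, noise, and latents between nodes are omitted:

```typescript
// Node payloads shaped after the invocation schemas above (fields trimmed).
const positivePrompt = {
  id: "pos_prompt",
  type: "prompt_onnx" as const,
  prompt: "a photograph of a sunlit forest", // schema default is ""
};

const denoise = {
  id: "t2l",
  type: "t2l_onnx" as const,
  steps: 30,                             // schema default is 10
  cfg_scale: 7.5,                        // number | number[] per the schema
  scheduler: "euler" as const,           // schema default
  precision: "tensor(float16)" as const, // schema default
};

const decode = {
  id: "l2i",
  type: "l2i_onnx" as const,
  is_intermediate: false, // the decoded image is the final output
};
```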
/**
* OffsetPaginatedResults[BoardDTO]
* @description Offset-paginated results
@@ -3830,6 +4087,49 @@ export type components = {
*/
total: number;
};
/**
* OnnxModelField
* @description Onnx model field
*/
OnnxModelField: {
/**
* Model Name
* @description Name of the model
*/
model_name: string;
/** @description Base model */
base_model: components["schemas"]["BaseModelType"];
/** @description Model Type */
model_type: components["schemas"]["ModelType"];
};
/**
* OnnxModelLoaderInvocation
* @description Loads a main model, outputting its submodels.
*/
OnnxModelLoaderInvocation: {
/**
* Id
* @description The id of this node. Must be unique among all nodes.
*/
id: string;
/**
* Is Intermediate
* @description Whether or not this node is an intermediate node.
* @default false
*/
is_intermediate?: boolean;
/**
* Type
* @default onnx_model_loader
* @enum {string}
*/
type?: "onnx_model_loader";
/**
* Model
* @description The model to load
*/
model: components["schemas"]["OnnxModelField"];
};
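`onnx_model_loader` is the ONNX counterpart of the main model loader: it takes an `OnnxModelField` and its `ONNXModelLoaderOutput` exposes `unet`, `clip`, `vae_decoder`, and `vae_encoder` for downstream nodes. A sketch of the node payload; the model name is hypothetical:

```typescript
const loadOnnxModel = {
  id: "onnx_loader",
  type: "onnx_model_loader" as const,
  model: {
    model_name: "stable-diffusion-v1-5-onnx", // hypothetical installed model
    base_model: "sd-1" as const,              // assumes BaseModelType includes "sd-1"
    model_type: "onnx" as const,
  },
};
```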
/**
* OpenposeImageProcessorInvocation
* @description Applies Openpose processing to image
@@ -3965,6 +4265,35 @@ export type components = {
*/
a?: number;
};
/**
* ParamPromptInvocation
* @description A prompt input parameter
*/
ParamPromptInvocation: {
/**
* Id
* @description The id of this node. Must be unique among all nodes.
*/
id: string;
/**
* Is Intermediate
* @description Whether or not this node is an intermediate node.
* @default false
*/
is_intermediate?: boolean;
/**
* Type
* @default param_prompt
* @enum {string}
*/
type?: "param_prompt";
/**
* Prompt
* @description The prompt value
* @default
*/
prompt?: string;
};
/**
* ParamStringInvocation
* @description A string parameter
@@ -4934,6 +5263,12 @@ export type components = {
*/
antialias?: boolean;
};
/**
* SchedulerPredictionType
* @description An enumeration.
* @enum {string}
*/
SchedulerPredictionType: "epsilon" | "v_prediction" | "sample";
/**
* SegmentAnythingProcessorInvocation
* @description Applies segment anything processing to image
@@ -5244,7 +5579,7 @@ export type components = {
* @description An enumeration.
* @enum {string}
*/
SubModelType: "unet" | "text_encoder" | "text_encoder_2" | "tokenizer" | "tokenizer_2" | "vae" | "scheduler" | "safety_checker";
SubModelType: "unet" | "text_encoder" | "text_encoder_2" | "tokenizer" | "tokenizer_2" | "vae" | "vae_decoder" | "vae_encoder" | "scheduler" | "safety_checker";
/**
* SubtractInvocation
* @description Subtracts two numbers
@@ -5557,11 +5892,11 @@ export type components = {
image?: components["schemas"]["ImageField"];
};
/**
* StableDiffusion2ModelFormat
* StableDiffusionOnnxModelFormat
* @description An enumeration.
* @enum {string}
*/
StableDiffusion2ModelFormat: "checkpoint" | "diffusers";
StableDiffusionOnnxModelFormat: "olive" | "onnx";
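`StableDiffusionOnnxModelFormat` distinguishes plain ONNX exports from Olive-optimized ones. How a consumer branches on it is up to the client; a purely illustrative sketch:

```typescript
type StableDiffusionOnnxModelFormat = "olive" | "onnx";

// Illustrative: "olive" marks models pre-optimized with Microsoft Olive,
// so a client might label them differently in a model list.
function formatLabel(fmt: StableDiffusionOnnxModelFormat): string {
  return fmt === "olive" ? "ONNX (Olive-optimized)" : "ONNX";
}
```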
/**
* StableDiffusionXLModelFormat
* @description An enumeration.
@@ -5574,6 +5909,12 @@ export type components = {
* @enum {string}
*/
ControlNetModelFormat: "checkpoint" | "diffusers";
/**
* StableDiffusion2ModelFormat
* @description An enumeration.
* @enum {string}
*/
StableDiffusion2ModelFormat: "checkpoint" | "diffusers";
/**
* StableDiffusion1ModelFormat
* @description An enumeration.
@@ -5690,7 +6031,7 @@ export type operations = {
};
requestBody: {
content: {
"application/json": components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | 
components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
"application/json": components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ParamPromptInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["ONNXPromptInvocation"] | components["schemas"]["ONNXTextToLatentsInvocation"] | components["schemas"]["ONNXLatentsToImageInvocation"] | components["schemas"]["ONNXSD1ModelLoaderInvocation"] | components["schemas"]["OnnxModelLoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | 
components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
};
};
responses: {
@@ -5727,7 +6068,7 @@ export type operations = {
};
requestBody: {
content: {
"application/json": components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | 
components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
"application/json": components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ParamPromptInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["ONNXPromptInvocation"] | components["schemas"]["ONNXTextToLatentsInvocation"] | components["schemas"]["ONNXLatentsToImageInvocation"] | components["schemas"]["ONNXSD1ModelLoaderInvocation"] | components["schemas"]["OnnxModelLoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | 
components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
};
};
responses: {
@@ -5991,14 +6332,14 @@ export type operations = {
};
requestBody: {
content: {
"application/json": components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
"application/json": components["schemas"]["ONNXStableDiffusion1ModelConfig"] | components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["ONNXStableDiffusion2ModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
};
};
responses: {
/** @description The model was updated successfully */
200: {
content: {
"application/json": components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
"application/json": components["schemas"]["ONNXStableDiffusion1ModelConfig"] | components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["ONNXStableDiffusion2ModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
};
};
/** @description Bad request */
@@ -6029,7 +6370,7 @@ export type operations = {
/** @description The model imported successfully */
201: {
content: {
"application/json": components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
"application/json": components["schemas"]["ONNXStableDiffusion1ModelConfig"] | components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["ONNXStableDiffusion2ModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
};
};
/** @description The model could not be found */
@@ -6055,14 +6396,14 @@ export type operations = {
add_model: {
requestBody: {
content: {
"application/json": components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
"application/json": components["schemas"]["ONNXStableDiffusion1ModelConfig"] | components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["ONNXStableDiffusion2ModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
};
};
responses: {
/** @description The model added successfully */
201: {
content: {
"application/json": components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
"application/json": components["schemas"]["ONNXStableDiffusion1ModelConfig"] | components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["ONNXStableDiffusion2ModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
};
};
/** @description The model could not be found */
@@ -6102,7 +6443,7 @@ export type operations = {
/** @description Model converted successfully */
200: {
content: {
"application/json": components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
"application/json": components["schemas"]["ONNXStableDiffusion1ModelConfig"] | components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["ONNXStableDiffusion2ModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
};
};
/** @description Bad request */
@@ -6191,7 +6532,7 @@ export type operations = {
/** @description Model converted successfully */
200: {
content: {
"application/json": components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
"application/json": components["schemas"]["ONNXStableDiffusion1ModelConfig"] | components["schemas"]["StableDiffusion1ModelCheckpointConfig"] | components["schemas"]["StableDiffusion1ModelDiffusersConfig"] | components["schemas"]["VaeModelConfig"] | components["schemas"]["LoRAModelConfig"] | components["schemas"]["ControlNetModelCheckpointConfig"] | components["schemas"]["ControlNetModelDiffusersConfig"] | components["schemas"]["TextualInversionModelConfig"] | components["schemas"]["ONNXStableDiffusion2ModelConfig"] | components["schemas"]["StableDiffusion2ModelCheckpointConfig"] | components["schemas"]["StableDiffusion2ModelDiffusersConfig"] | components["schemas"]["StableDiffusionXLModelCheckpointConfig"] | components["schemas"]["StableDiffusionXLModelDiffusersConfig"];
};
};
/** @description Incompatible models */

@@ -32,6 +32,7 @@ export type ModelType = components['schemas']['ModelType'];
export type SubModelType = components['schemas']['SubModelType'];
export type BaseModelType = components['schemas']['BaseModelType'];
export type MainModelField = components['schemas']['MainModelField'];
export type OnnxModelField = components['schemas']['OnnxModelField'];
export type VAEModelField = components['schemas']['VAEModelField'];
export type LoRAModelField = components['schemas']['LoRAModelField'];
export type ControlNetModelField =
@@ -58,6 +59,8 @@ export type DiffusersModelConfig =
export type CheckpointModelConfig =
| components['schemas']['StableDiffusion1ModelCheckpointConfig']
| components['schemas']['StableDiffusion2ModelCheckpointConfig']
| components['schemas']['StableDiffusion2ModelDiffusersConfig'];
export type OnnxModelConfig = components['schemas']['ONNXStableDiffusion1ModelConfig']
| components['schemas']['StableDiffusionXLModelCheckpointConfig'];
export type MainModelConfig = DiffusersModelConfig | CheckpointModelConfig;
export type AnyModelConfig =
@@ -65,7 +68,8 @@ export type AnyModelConfig =
| VaeModelConfig
| ControlNetModelConfig
| TextualInversionModelConfig
| MainModelConfig;
| MainModelConfig
| OnnxModelConfig;
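With `OnnxModelConfig` folded into `AnyModelConfig`, client code can narrow on the `model_format` literal. A sketch with local stand-ins for the generated aliases:

```typescript
// Local stand-ins; the real aliases come from the generated schema above.
type OnnxModelConfig = { model_name: string; model_format: "onnx"; path: string };
type DiffusersModelConfig = { model_name: string; model_format: "diffusers"; path: string };
type AnyModelConfig = OnnxModelConfig | DiffusersModelConfig;

const isOnnxModel = (c: AnyModelConfig): c is OnnxModelConfig =>
  c.model_format === "onnx";

// Usage: filter a models list down to its ONNX entries.
const onnxOnly = (models: AnyModelConfig[]) => models.filter(isOnnxModel);
```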

export type MergeModelConfig = components['schemas']['Body_merge_models'];
export type ConvertModelConfig = components['schemas']['Body_convert_model'];
@@ -127,6 +131,9 @@ export type ImageCollectionInvocation = TypeReq<
export type MainModelLoaderInvocation = TypeReq<
components['schemas']['MainModelLoaderInvocation']
>;
export type OnnxModelLoaderInvocation = TypeReq<
components['schemas']['OnnxModelLoaderInvocation']
>;
export type LoraLoaderInvocation = TypeReq<
components['schemas']['LoraLoaderInvocation']
>;

@@ -6447,11 +6447,16 @@ tslib@^1.8.1:
resolved "https://registry.yarnpkg.com/tslib/-/tslib-1.14.1.tgz#cf2d38bdc34a134bcaf1091c41f6619e2f672d00"
integrity sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg==

tslib@^2.0.0, tslib@^2.0.3, tslib@^2.1.0, tslib@^2.3.0, tslib@^2.4.0:
tslib@^2.0.0, tslib@^2.1.0, tslib@^2.3.0, tslib@^2.4.0:
version "2.6.0"
resolved "https://registry.yarnpkg.com/tslib/-/tslib-2.6.0.tgz#b295854684dbda164e181d259a22cd779dcd7bc3"
integrity sha512-7At1WUettjcSRHXCyYtTselblcHl9PJFFVKiCAy/bY97+BPZXSQ2wbq0P9s8tK2G7dFQfNnlJnPAiArVBVBsfA==

tslib@^2.0.3:
version "2.6.1"
resolved "https://registry.yarnpkg.com/tslib/-/tslib-2.6.1.tgz#fd8c9a0ff42590b25703c0acb3de3d3f4ede0410"
integrity sha512-t0hLfiEKfMUoqhG+U1oid7Pva4bbDPHYfJNiB7BiIjRkj1pyC++4N3huJfqY6aRH6VTB0rvtzQwjM4K6qpfOig==

tsutils@^3.21.0:
version "3.21.0"
resolved "https://registry.yarnpkg.com/tsutils/-/tsutils-3.21.0.tgz#b48717d394cea6c1e096983eed58e9d61715b623"

@@ -1 +1 @@
__version__ = "3.0.1"
__version__ = "3.0.1post3"

@@ -36,7 +36,6 @@ theme:
- navigation.instant
- navigation.tabs
- navigation.tabs.sticky
- navigation.top
- navigation.tracking
- navigation.indexes
- navigation.path
@@ -102,9 +101,9 @@ plugins:
nav:
- Home: 'index.md'
- Installation:
- Overview: 'installation/index.md'
- Overview: 'installation/INSTALLATION.md'
- Installing with the Automated Installer: 'installation/010_INSTALL_AUTOMATED.md'
- Installing manually: 'installation/020_INSTALL_MANUAL.md'
- Installing Manually: 'installation/020_INSTALL_MANUAL.md'
- NVIDIA Cuda / AMD ROCm: 'installation/030_INSTALL_CUDA_AND_ROCM.md'
- Installing with Docker: 'installation/040_INSTALL_DOCKER.md'
- Installing Models: 'installation/050_INSTALLING_MODELS.md'
@@ -124,6 +123,7 @@ nav:
- Overview: 'nodes/overview.md'
- Features:
- Overview: 'features/index.md'
- New to InvokeAI?: 'help/gettingStartedWithAI.md'
- Concepts: 'features/CONCEPTS.md'
- Configuration: 'features/CONFIGURATION.md'
- ControlNet: 'features/CONTROLNET.md'
@@ -157,6 +157,7 @@ nav:
- Inpainting: 'deprecated/INPAINTING.md'
- Outpainting: 'deprecated/OUTPAINTING.md'
- Help:
- Getting Started: 'help/gettingStartedWithAI.md'
- Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md'
- Other:
- Contributors: 'other/CONTRIBUTORS.md'

@@ -1,281 +1,283 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "ycYWcsEKc6w7"
},
"source": [
"# Stable Diffusion AI Notebook (Release 2.0.0)\n",
"\n",
"<img src=\"https://user-images.githubusercontent.com/60411196/186547976-d9de378a-9de8-4201-9c25-c057a9c59bad.jpeg\" alt=\"stable-diffusion-ai\" width=\"170px\"/> <br>\n",
"#### Instructions:\n",
"1. Execute each cell in order to mount a Dream bot and create images from text. <br>\n",
"2. Once cells 1-8 were run correctly you'll be executing a terminal in cell #9, you'll need to enter `python scripts/dream.py` command to run Dream bot.<br> \n",
"3. After launching dream bot, you'll see: <br> `Dream > ` in terminal. <br> Insert a command, eg. `Dream > Astronaut floating in a distant galaxy`, or type `-h` for help.\n",
"3. After completion you'll see your generated images in path `stable-diffusion/outputs/img-samples/`, you can also show last generated images in cell #10.\n",
"4. To quit Dream bot use `q` command. <br> \n",
"---\n",
"<font color=\"red\">Note:</font> It takes some time to load, but after installing all dependencies you can use the bot all time you want while colab instance is up. <br>\n",
"<font color=\"red\">Requirements:</font> For this notebook to work you need to have [Stable-Diffusion-v-1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) stored in your Google Drive, it will be needed in cell #7\n",
"##### For more details visit Github repository: [invoke-ai/InvokeAI](https://github.com/invoke-ai/InvokeAI)\n",
"---\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dr32VLxlnouf"
},
"source": [
"## ◢ Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "a2Z5Qu_o8VtQ"
},
"outputs": [],
"source": [
"#@title 1. Check current GPU assigned\n",
"!nvidia-smi -L\n",
"!nvidia-smi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "vbI9ZsQHzjqF"
},
"outputs": [],
"source": [
"#@title 2. Download stable-diffusion Repository\n",
"from os.path import exists\n",
"\n",
"!git clone --quiet https://github.com/invoke-ai/InvokeAI.git # Original repo\n",
"%cd /content/InvokeAI/\n",
"!git checkout --quiet tags/v2.0.0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "QbXcGXYEFSNB"
},
"outputs": [],
"source": [
"#@title 3. Install dependencies\n",
"import gc\n",
"\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/environments-and-requirements/requirements-base.txt\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/environments-and-requirements/requirements-win-colab-cuda.txt\n",
"!pip install colab-xterm\n",
"!pip install -r requirements-lin-win-colab-CUDA.txt\n",
"!pip install clean-fid torchtext\n",
"!pip install transformers\n",
"gc.collect()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "8rSMhgnAttQa"
},
"outputs": [],
"source": [
"#@title 4. Restart Runtime\n",
"exit()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ChIDWxLVHGGJ"
},
"outputs": [],
"source": [
"#@title 5. Load small ML models required\n",
"import gc\n",
"%cd /content/InvokeAI/\n",
"!python scripts/preload_models.py\n",
"gc.collect()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "795x1tMoo8b1"
},
"source": [
"## ◢ Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "YEWPV-sF1RDM"
},
"outputs": [],
"source": [
"#@title 6. Mount google Drive\n",
"from google.colab import drive\n",
"drive.mount('/content/drive')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "zRTJeZ461WGu"
},
"outputs": [],
"source": [
"#@title 7. Drive Path to model\n",
"#@markdown Path should start with /content/drive/path-to-your-file <br>\n",
"#@markdown <font color=\"red\">Note:</font> Model should be downloaded from https://huggingface.co <br>\n",
"#@markdown Lastest release: [Stable-Diffusion-v-1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)\n",
"from os.path import exists\n",
"\n",
"model_path = \"\" #@param {type:\"string\"}\n",
"if exists(model_path):\n",
" print(\"✅ Valid directory\")\n",
"else: \n",
" print(\"❌ File doesn't exist\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "UY-NNz4I8_aG"
},
"outputs": [],
"source": [
"#@title 8. Symlink to model\n",
"\n",
"from os.path import exists\n",
"import os \n",
"\n",
"# Folder creation if it doesn't exist\n",
"if exists(\"/content/InvokeAI/models/ldm/stable-diffusion-v1\"):\n",
" print(\"❗ Dir stable-diffusion-v1 already exists\")\n",
"else:\n",
" %mkdir /content/InvokeAI/models/ldm/stable-diffusion-v1\n",
" print(\"✅ Dir stable-diffusion-v1 created\")\n",
"\n",
"# Symbolic link if it doesn't exist\n",
"if exists(\"/content/InvokeAI/models/ldm/stable-diffusion-v1/model.ckpt\"):\n",
" print(\"❗ Symlink already created\")\n",
"else: \n",
" src = model_path\n",
" dst = '/content/InvokeAI/models/ldm/stable-diffusion-v1/model.ckpt'\n",
" os.symlink(src, dst) \n",
" print(\"✅ Symbolic link created successfully\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Mc28N0_NrCQH"
},
"source": [
"## ◢ Execution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ir4hCrMIuUpl"
},
"outputs": [],
"source": [
"#@title 9. Run Terminal and Execute Dream bot\n",
"#@markdown <font color=\"blue\">Steps:</font> <br>\n",
"#@markdown 1. Execute command `python scripts/invoke.py` to run InvokeAI.<br>\n",
"#@markdown 2. After initialized you'll see `Dream>` line.<br>\n",
"#@markdown 3. Example text: `Astronaut floating in a distant galaxy` <br>\n",
"#@markdown 4. To quit Dream bot use: `q` command.<br>\n",
"\n",
"%load_ext colabxterm\n",
"%xterm\n",
"gc.collect()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "qnLohSHmKoGk"
},
"outputs": [],
"source": [
"#@title 10. Show the last 15 generated images\n",
"import glob\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg\n",
"%matplotlib inline\n",
"\n",
"images = []\n",
"for img_path in sorted(glob.glob('/content/InvokeAI/outputs/img-samples/*.png'), reverse=True):\n",
" images.append(mpimg.imread(img_path))\n",
"\n",
"images = images[:15] \n",
"\n",
"plt.figure(figsize=(20,10))\n",
"\n",
"columns = 5\n",
"for i, image in enumerate(images):\n",
" ax = plt.subplot(len(images) / columns + 1, columns, i + 1)\n",
" ax.axes.xaxis.set_visible(False)\n",
" ax.axes.yaxis.set_visible(False)\n",
" ax.axis('off')\n",
" plt.imshow(image)\n",
" gc.collect()\n",
"\n"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"private_outputs": true,
"provenance": []
},
"gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3.9.12 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.12"
},
"vscode": {
"interpreter": {
"hash": "4e870c5c5fe42db7e2c5647ae5af656ff3391bf8c2b729cbf7fa0e16ca8cb5af"
}
}
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "ycYWcsEKc6w7"
|
||||
},
|
||||
"source": [
|
||||
"# Stable Diffusion AI Notebook (Release 2.0.0)\n",
|
||||
"\n",
|
||||
"<img src=\"https://user-images.githubusercontent.com/60411196/186547976-d9de378a-9de8-4201-9c25-c057a9c59bad.jpeg\" alt=\"stable-diffusion-ai\" width=\"170px\"/> <br>\n",
|
||||
"#### Instructions:\n",
|
||||
"1. Execute each cell in order to mount a Dream bot and create images from text. <br>\n",
|
||||
"2. Once cells 1-8 were run correctly you'll be executing a terminal in cell #9, you'll need to enter `python scripts/dream.py` command to run Dream bot.<br> \n",
|
||||
"3. After launching dream bot, you'll see: <br> `Dream > ` in terminal. <br> Insert a command, eg. `Dream > Astronaut floating in a distant galaxy`, or type `-h` for help.\n",
|
||||
"3. After completion you'll see your generated images in path `stable-diffusion/outputs/img-samples/`, you can also show last generated images in cell #10.\n",
|
||||
"4. To quit Dream bot use `q` command. <br> \n",
|
||||
"---\n",
|
||||
"<font color=\"red\">Note:</font> It takes some time to load, but after installing all dependencies you can use the bot all time you want while colab instance is up. <br>\n",
|
||||
"<font color=\"red\">Requirements:</font> For this notebook to work you need to have [Stable-Diffusion-v-1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) stored in your Google Drive, it will be needed in cell #7\n",
|
||||
"##### For more details visit Github repository: [invoke-ai/InvokeAI](https://github.com/invoke-ai/InvokeAI)\n",
|
||||
"---\n"
|
||||
]
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 0
|
||||
{
|
"cell_type": "markdown",
"metadata": {
"id": "dr32VLxlnouf"
},
"source": [
"## ◢ Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "a2Z5Qu_o8VtQ"
},
"outputs": [],
"source": [
"# @title 1. Check current GPU assigned\n",
"!nvidia-smi -L\n",
"!nvidia-smi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "vbI9ZsQHzjqF"
},
"outputs": [],
"source": [
"# @title 2. Download stable-diffusion Repository\n",
"from os.path import exists\n",
"\n",
"!git clone --quiet https://github.com/invoke-ai/InvokeAI.git # Original repo\n",
"%cd /content/InvokeAI/\n",
"!git checkout --quiet tags/v2.0.0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "QbXcGXYEFSNB"
},
"outputs": [],
"source": [
"# @title 3. Install dependencies\n",
"import gc\n",
"\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/environments-and-requirements/requirements-base.txt\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/environments-and-requirements/requirements-win-colab-cuda.txt\n",
"!pip install colab-xterm\n",
"!pip install -r requirements-win-colab-cuda.txt\n",
"!pip install clean-fid torchtext\n",
"!pip install transformers\n",
"gc.collect()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "8rSMhgnAttQa"
},
"outputs": [],
"source": [
"# @title 4. Restart Runtime\n",
"exit()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ChIDWxLVHGGJ"
},
"outputs": [],
"source": [
"# @title 5. Load small ML models required\n",
"import gc\n",
"\n",
"%cd /content/InvokeAI/\n",
"!python scripts/preload_models.py\n",
"gc.collect()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "795x1tMoo8b1"
},
"source": [
"## ◢ Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "YEWPV-sF1RDM"
},
"outputs": [],
"source": [
"# @title 6. Mount Google Drive\n",
"from google.colab import drive\n",
"\n",
"drive.mount(\"/content/drive\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "zRTJeZ461WGu"
},
"outputs": [],
"source": [
"# @title 7. Drive Path to model\n",
"# @markdown Path should start with /content/drive/path-to-your-file <br>\n",
"# @markdown <font color=\"red\">Note:</font> Model should be downloaded from https://huggingface.co <br>\n",
"# @markdown Latest release: [Stable-Diffusion-v-1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)\n",
"from os.path import exists\n",
"\n",
"model_path = \"\" # @param {type:\"string\"}\n",
"if exists(model_path):\n",
" print(\"✅ Valid directory\")\n",
"else:\n",
" print(\"❌ File doesn't exist\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "UY-NNz4I8_aG"
},
"outputs": [],
"source": [
"# @title 8. Symlink to model\n",
"\n",
"from os.path import exists\n",
"import os\n",
"\n",
"# Folder creation if it doesn't exist\n",
"if exists(\"/content/InvokeAI/models/ldm/stable-diffusion-v1\"):\n",
" print(\"❗ Dir stable-diffusion-v1 already exists\")\n",
"else:\n",
" %mkdir /content/InvokeAI/models/ldm/stable-diffusion-v1\n",
" print(\"✅ Dir stable-diffusion-v1 created\")\n",
"\n",
"# Symbolic link if it doesn't exist\n",
"if exists(\"/content/InvokeAI/models/ldm/stable-diffusion-v1/model.ckpt\"):\n",
" print(\"❗ Symlink already created\")\n",
"else:\n",
" src = model_path\n",
" dst = \"/content/InvokeAI/models/ldm/stable-diffusion-v1/model.ckpt\"\n",
" os.symlink(src, dst)\n",
" print(\"✅ Symbolic link created successfully\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Mc28N0_NrCQH"
},
"source": [
"## ◢ Execution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ir4hCrMIuUpl"
},
"outputs": [],
"source": [
"# @title 9. Run Terminal and Execute Dream bot\n",
"# @markdown <font color=\"blue\">Steps:</font> <br>\n",
"# @markdown 1. Execute command `python scripts/invoke.py` to run InvokeAI.<br>\n",
"# @markdown 2. After it initializes, you'll see the `Dream>` prompt.<br>\n",
"# @markdown 3. Example text: `Astronaut floating in a distant galaxy` <br>\n",
"# @markdown 4. To quit the Dream bot, use the `q` command.<br>\n",
"\n",
"%load_ext colabxterm\n",
"%xterm\n",
"gc.collect()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "qnLohSHmKoGk"
},
"outputs": [],
"source": [
"#@title 10. Show the last 15 generated images\n",
"import glob\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg\n",
"%matplotlib inline\n",
"\n",
"images = []\n",
"for img_path in sorted(glob.glob('/content/InvokeAI/outputs/img-samples/*.png'), reverse=True):\n",
" images.append(mpimg.imread(img_path))\n",
"\n",
"images = images[:15] \n",
"\n",
"plt.figure(figsize=(20,10))\n",
"\n",
"columns = 5\n",
"for i, image in enumerate(images):\n",
" ax = plt.subplot(len(images) // columns + 1, columns, i + 1)\n",
" ax.axes.xaxis.set_visible(False)\n",
" ax.axes.yaxis.set_visible(False)\n",
" ax.axis('off')\n",
" plt.imshow(image)\n",
" gc.collect()\n",
"\n"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"private_outputs": true,
"provenance": []
},
"gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3.9.12 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.12"
},
"vscode": {
"interpreter": {
"hash": "4e870c5c5fe42db7e2c5647ae5af656ff3391bf8c2b729cbf7fa0e16ca8cb5af"
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}
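Taken together, Configuration cells 6-8 above reduce to one pattern: validate a checkpoint path on the mounted Drive, then symlink it into the location InvokeAI expects instead of copying it. A minimal standalone sketch of that pattern, with an illustrative Drive path (adjust it to your own layout):

```python
import os
from os.path import exists

# Illustrative path; point this at your own checkpoint on Drive.
model_path = "/content/drive/MyDrive/stable-diffusion/sd-v1-4.ckpt"
target_dir = "/content/InvokeAI/models/ldm/stable-diffusion-v1"
target = os.path.join(target_dir, "model.ckpt")

if not exists(model_path):
    raise FileNotFoundError(f"Checkpoint not found: {model_path}")

os.makedirs(target_dir, exist_ok=True)  # idempotent, unlike a bare mkdir
if not exists(target):
    os.symlink(model_path, target)  # link instead of copying a multi-GB file
```

The symlink keeps the multi-gigabyte checkpoint on Drive, so a fresh runtime only needs to re-create the link rather than re-download the model.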
@ -58,14 +58,16 @@ dependencies = [
"invisible-watermark~=0.2.0", # needed to install SDXL base and refiner using their repo_ids
"matplotlib", # needed for plotting of Penner easing functions
"mediapipe", # needed for "mediapipeface" controlnet model
"numpy",
"npyscreen",
"numpy==1.24.4",
"omegaconf",
"onnx",
"onnxruntime",
"opencv-python",
"pydantic==1.*",
"picklescan",
"pillow",
"prompt-toolkit",
"pydantic==1.10.10",
"pympler~=1.0.1",
"pypatchmatch",
'pyperclip',
@ -81,7 +83,7 @@ dependencies = [
"test-tube~=0.7.5",
"torch~=2.0.1",
"torchvision~=0.15.2",
"torchmetrics~=1.0.1",
"torchmetrics~=0.11.0",
"torchsde~=0.2.5",
"transformers~=4.31.0",
"uvicorn[standard]~=0.21.1",
@ -103,6 +105,15 @@ dependencies = [
"xformers~=0.0.19; sys_platform!='darwin'",
"triton; sys_platform=='linux'",
]
"onnx" = [
"onnxruntime",
]
"onnx-cuda" = [
"onnxruntime-gpu",
]
"onnx-directml" = [
"onnxruntime-directml",
]

[project.scripts]

@ -180,4 +191,4 @@ output = "coverage/index.xml"
max-line-length = 120

[tool.black]
line-length = 120
line-length = 120
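The optional-dependency tables added above split the ONNX runtimes into mutually exclusive extras, so a user installs exactly one runtime for their hardware. Assuming the distribution name `InvokeAI` declared in this pyproject.toml, selecting a variant at install time would look like the sketch below (pick one line; the runtimes conflict if mixed):

```python
# In a Colab/IPython cell; plain `pip install ...` in a shell works the same.
!pip install "InvokeAI[onnx]"            # CPU: pulls in onnxruntime
# !pip install "InvokeAI[onnx-cuda]"     # NVIDIA GPU: pulls in onnxruntime-gpu
# !pip install "InvokeAI[onnx-directml]" # Windows DirectML: onnxruntime-directml
```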
@ -52,17 +52,17 @@
"name": "stdout",
"text": [
"Cloning into 'latent-diffusion'...\n",
"remote: Enumerating objects: 992, done.\u001B[K\n",
"remote: Counting objects: 100% (695/695), done.\u001B[K\n",
"remote: Compressing objects: 100% (397/397), done.\u001B[K\n",
"remote: Total 992 (delta 375), reused 564 (delta 253), pack-reused 297\u001B[K\n",
"remote: Enumerating objects: 992, done.\u001b[K\n",
"remote: Counting objects: 100% (695/695), done.\u001b[K\n",
"remote: Compressing objects: 100% (397/397), done.\u001b[K\n",
"remote: Total 992 (delta 375), reused 564 (delta 253), pack-reused 297\u001b[K\n",
"Receiving objects: 100% (992/992), 30.78 MiB | 29.43 MiB/s, done.\n",
"Resolving deltas: 100% (510/510), done.\n",
"Cloning into 'taming-transformers'...\n",
"remote: Enumerating objects: 1335, done.\u001B[K\n",
"remote: Counting objects: 100% (525/525), done.\u001B[K\n",
"remote: Compressing objects: 100% (493/493), done.\u001B[K\n",
"remote: Total 1335 (delta 58), reused 481 (delta 30), pack-reused 810\u001B[K\n",
"remote: Enumerating objects: 1335, done.\u001b[K\n",
"remote: Counting objects: 100% (525/525), done.\u001b[K\n",
"remote: Compressing objects: 100% (493/493), done.\u001b[K\n",
"remote: Total 1335 (delta 58), reused 481 (delta 30), pack-reused 810\u001b[K\n",
"Receiving objects: 100% (1335/1335), 412.35 MiB | 30.53 MiB/s, done.\n",
"Resolving deltas: 100% (267/267), done.\n",
"Obtaining file:///content/taming-transformers\n",
@ -73,23 +73,24 @@
"Installing collected packages: taming-transformers\n",
" Running setup.py develop for taming-transformers\n",
"Successfully installed taming-transformers-0.0.1\n",
"\u001B[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"tensorflow 2.8.0 requires tf-estimator-nightly==2.8.0.dev2021122109, which is not installed.\n",
"arviz 0.11.4 requires typing-extensions<4,>=3.7.4.3, but you have typing-extensions 4.1.1 which is incompatible.\u001B[0m\n"
"arviz 0.11.4 requires typing-extensions<4,>=3.7.4.3, but you have typing-extensions 4.1.1 which is incompatible.\u001b[0m\n"
]
}
],
"source": [
"#@title Installation\n",
"# @title Installation\n",
"!git clone https://github.com/CompVis/latent-diffusion.git\n",
"!git clone https://github.com/CompVis/taming-transformers\n",
"!pip install -e ./taming-transformers\n",
"!pip install omegaconf>=2.0.0 pytorch-lightning>=1.0.8 torch-fidelity einops\n",
"\n",
"import sys\n",
"\n",
"sys.path.append(\".\")\n",
"sys.path.append('./taming-transformers')\n",
"from taming.models import vqgan "
"sys.path.append(\"./taming-transformers\")\n",
"from taming.models import vqgan"
]
},
{
@ -104,11 +105,11 @@
{
"cell_type": "code",
"source": [
"#@title Download\n",
"%cd latent-diffusion/ \n",
"# @title Download\n",
"%cd latent-diffusion/\n",
"\n",
"!mkdir -p models/ldm/cin256-v2/\n",
"!wget -O models/ldm/cin256-v2/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/cin/model.ckpt "
"!wget -O models/ldm/cin256-v2/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/cin/model.ckpt"
],
"metadata": {
"colab": {
@ -203,7 +204,7 @@
{
"cell_type": "code",
"source": [
"#@title loading utils\n",
"# @title loading utils\n",
"import torch\n",
"from omegaconf import OmegaConf\n",
"\n",
@ -212,7 +213,7 @@
"\n",
"def load_model_from_config(config, ckpt):\n",
" print(f\"Loading model from {ckpt}\")\n",
" pl_sd = torch.load(ckpt)#, map_location=\"cpu\")\n",
" pl_sd = torch.load(ckpt)  # , map_location=\"cpu\")\n",
" sd = pl_sd[\"state_dict\"]\n",
" model = instantiate_from_config(config.model)\n",
" m, u = model.load_state_dict(sd, strict=False)\n",
@ -222,7 +223,7 @@
"\n",
"\n",
"def get_model():\n",
" config = OmegaConf.load(\"configs/latent-diffusion/cin256-v2.yaml\") \n",
" config = OmegaConf.load(\"configs/latent-diffusion/cin256-v2.yaml\")\n",
" model = load_model_from_config(config, \"models/ldm/cin256-v2/model.ckpt\")\n",
" return model"
],
@ -276,18 +277,18 @@
{
"cell_type": "code",
"source": [
"import numpy as np \n",
"import numpy as np\n",
"from PIL import Image\n",
"from einops import rearrange\n",
"from torchvision.utils import make_grid\n",
"\n",
"\n",
"classes = [25, 187, 448, 992] # define classes to be sampled here\n",
"classes = [25, 187, 448, 992]  # define classes to be sampled here\n",
"n_samples_per_class = 6\n",
"\n",
"ddim_steps = 20\n",
"ddim_eta = 0.0\n",
"scale = 3.0 # for unconditional guidance\n",
"scale = 3.0  # for unconditional guidance\n",
"\n",
"\n",
"all_samples = list()\n",
@ -295,36 +296,39 @@
"with torch.no_grad():\n",
" with model.ema_scope():\n",
" uc = model.get_learned_conditioning(\n",
" {model.cond_stage_key: torch.tensor(n_samples_per_class*[1000]).to(model.device)}\n",
" )\n",
" \n",
" {model.cond_stage_key: torch.tensor(n_samples_per_class * [1000]).to(model.device)}\n",
" )\n",
"\n",
" for class_label in classes:\n",
" print(f\"rendering {n_samples_per_class} examples of class '{class_label}' in {ddim_steps} steps and using s={scale:.2f}.\")\n",
" xc = torch.tensor(n_samples_per_class*[class_label])\n",
" print(\n",
" f\"rendering {n_samples_per_class} examples of class '{class_label}' in {ddim_steps} steps and using s={scale:.2f}.\"\n",
" )\n",
" xc = torch.tensor(n_samples_per_class * [class_label])\n",
" c = model.get_learned_conditioning({model.cond_stage_key: xc.to(model.device)})\n",
" \n",
" samples_ddim, _ = sampler.sample(S=ddim_steps,\n",
" conditioning=c,\n",
" batch_size=n_samples_per_class,\n",
" shape=[3, 64, 64],\n",
" verbose=False,\n",
" unconditional_guidance_scale=scale,\n",
" unconditional_conditioning=uc, \n",
" eta=ddim_eta)\n",
"\n",
" samples_ddim, _ = sampler.sample(\n",
" S=ddim_steps,\n",
" conditioning=c,\n",
" batch_size=n_samples_per_class,\n",
" shape=[3, 64, 64],\n",
" verbose=False,\n",
" unconditional_guidance_scale=scale,\n",
" unconditional_conditioning=uc,\n",
" eta=ddim_eta,\n",
" )\n",
"\n",
" x_samples_ddim = model.decode_first_stage(samples_ddim)\n",
" x_samples_ddim = torch.clamp((x_samples_ddim+1.0)/2.0, \n",
" min=0.0, max=1.0)\n",
" x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)\n",
" all_samples.append(x_samples_ddim)\n",
"\n",
"\n",
"# display as grid\n",
"grid = torch.stack(all_samples, 0)\n",
"grid = rearrange(grid, 'n b c h w -> (n b) c h w')\n",
"grid = rearrange(grid, \"n b c h w -> (n b) c h w\")\n",
"grid = make_grid(grid, nrow=n_samples_per_class)\n",
"\n",
"# to image\n",
"grid = 255. * rearrange(grid, 'c h w -> h w c').cpu().numpy()\n",
"grid = 255.0 * rearrange(grid, \"c h w -> h w c\").cpu().numpy()\n",
"Image.fromarray(grid.astype(np.uint8))"
],
"metadata": {
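For context on the sampling cell above: `unconditional_guidance_scale` implements classifier-free guidance. The sampler computes two noise predictions per step, one from the class conditioning `c` and one from the "null" conditioning `uc` (class id 1000 in this ImageNet model), and extrapolates between them. A conceptual sketch only; the actual implementation lives in latent-diffusion's `DDIMSampler`:

```python
import torch

def guided_eps(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, scale: float) -> torch.Tensor:
    """Blend unconditional and conditional noise predictions for one DDIM step."""
    # scale = 1.0 reproduces the purely conditional prediction; scale = 3.0,
    # as in the cell above, pushes samples harder toward the class label
    # at some cost in diversity.
    return eps_uncond + scale * (eps_cond - eps_uncond)
```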