3.6 Docs updates (#5412)
* Update UNIFIED_CANVAS.md
* Update index.md
* Update structure
* Docs updates
Several documentation images were replaced with updated screenshots (one image was removed), and a new image, docs/assets/nodes/workflow_library.png (129 KiB), was added.
docs/deprecated/2to3.md (new file, 53 lines)
@@ -0,0 +1,53 @@

## :octicons-log-16: Important Changes Since Version 2.3

### Nodes

Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.

The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
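As a purely illustrative sketch of the idea (this is not InvokeAI's actual invocation API; `Node` and `run_graph` are hypothetical names), the prompt → text2latent → latent2image example could be expressed as a tiny graph of callables:

```python
# Illustrative sketch only -- not InvokeAI's invocation API. Each Node wraps a
# small operation; edges are expressed as "this input comes from that node".
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Node:
    name: str
    fn: Callable[..., Any]
    inputs: dict[str, str] = field(default_factory=dict)  # input name -> upstream node name

def run_graph(nodes: list[Node]) -> dict[str, Any]:
    results: dict[str, Any] = {}
    for node in nodes:  # assumes `nodes` is already listed in dependency order
        kwargs = {name: results[src] for name, src in node.inputs.items()}
        results[node.name] = node.fn(**kwargs)
    return results

# Toy three-node workflow mirroring the prompt -> text2latent -> latent2image example.
graph = [
    Node("prompt", lambda: "a lighthouse at dawn"),
    Node("text2latent", lambda text: f"<latents for {text!r}>", {"text": "prompt"}),
    Node("latent2image", lambda latents: f"<PNG decoded from {latents}>", {"latents": "text2latent"}),
]
print(run_graph(graph)["latent2image"])
```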
### Command-Line Interface Retired

All "invokeai" command-line interfaces have been retired as of version
3.4.

To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.

### ControlNet

This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md).

### New Schedulers

The list of schedulers has been completely revamped and brought up to date:

| **Short Name** | **Scheduler**                   | **Notes**                   |
|----------------|---------------------------------|-----------------------------|
| **ddim**       | DDIMScheduler                   |                             |
| **ddpm**       | DDPMScheduler                   |                             |
| **deis**       | DEISMultistepScheduler          |                             |
| **lms**        | LMSDiscreteScheduler            |                             |
| **pndm**       | PNDMScheduler                   |                             |
| **heun**       | HeunDiscreteScheduler           | original noise schedule     |
| **heun_k**     | HeunDiscreteScheduler           | using Karras noise schedule |
| **euler**      | EulerDiscreteScheduler          | original noise schedule     |
| **euler_k**    | EulerDiscreteScheduler          | using Karras noise schedule |
| **kdpm_2**     | KDPM2DiscreteScheduler          |                             |
| **kdpm_2_a**   | KDPM2AncestralDiscreteScheduler |                             |
| **dpmpp_2s**   | DPMSolverSinglestepScheduler    |                             |
| **dpmpp_2m**   | DPMSolverMultistepScheduler     | original noise schedule     |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler     | using Karras noise schedule |
| **unipc**      | UniPCMultistepScheduler         | CPU only                    |
| **lcm**        | LCMScheduler                    |                             |
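In code, these short names correspond to scheduler classes from the diffusers library. The sketch below is illustrative only (it is not InvokeAI's scheduler registry, and `SCHEDULER_MAP`/`swap_scheduler` are hypothetical names); the `_k` variants differ from their base schedulers only in enabling the Karras noise schedule:

```python
# Illustrative sketch only -- not InvokeAI's scheduler registry. It shows how a
# subset of the short names above could map onto diffusers scheduler classes,
# with the "_k" variants enabling the Karras noise schedule.
from diffusers import (
    DDIMScheduler,
    DPMSolverMultistepScheduler,
    EulerDiscreteScheduler,
    HeunDiscreteScheduler,
    LCMScheduler,
    UniPCMultistepScheduler,
)

SCHEDULER_MAP = {
    "ddim":       (DDIMScheduler, {}),
    "euler":      (EulerDiscreteScheduler, {}),
    "euler_k":    (EulerDiscreteScheduler, {"use_karras_sigmas": True}),
    "heun":       (HeunDiscreteScheduler, {}),
    "heun_k":     (HeunDiscreteScheduler, {"use_karras_sigmas": True}),
    "dpmpp_2m":   (DPMSolverMultistepScheduler, {}),
    "dpmpp_2m_k": (DPMSolverMultistepScheduler, {"use_karras_sigmas": True}),
    "unipc":      (UniPCMultistepScheduler, {}),
    "lcm":        (LCMScheduler, {}),
}

def swap_scheduler(pipeline, short_name: str):
    """Replace a diffusers pipeline's scheduler, reusing its existing config."""
    cls, overrides = SCHEDULER_MAP[short_name]
    pipeline.scheduler = cls.from_config(pipeline.scheduler.config, **overrides)
    return pipeline
```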

Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.
UNIFIED_CANVAS.md
@@ -229,29 +229,28 @@ clarity on the intent and common use cases we expect for utilizing them.

currently being rendered by your browser into a merged copy of the image. This
lowers the resource requirements and should improve performance.

### Compositing / Seam Correction

When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated
by Stable Diffusion into your existing image. This is achieved through
compositing: the area around the boundary between your image and the new
generation is automatically blended to produce a seamless output. In a fully
automatic process, a mask is generated to cover the boundary, and then the area
of the boundary is Inpainted.

Although the default options should work well most of the time, sometimes it can
help to alter the parameters that control the Compositing. A larger blur radius
has been noted as producing consistently strong results, and a Strength of 0.7
is best for reducing hard seams. A minimal sketch of this blurred-mask blending
follows the list below.

- **Mode** - What part of the image will have the Compositing applied to it.
    - **Mask edge** will apply Compositing to the edge of the masked area
    - **Mask** will apply Compositing to the entire masked area
    - **Unmasked** will apply Compositing to the entire image
- **Steps** - Number of generation steps that will occur during the Coherence Pass, similar to Denoising Steps. Higher step counts will generally have better results.
- **Strength** - How much noise is added for the Coherence Pass, similar to Denoising Strength. A strength of 0 will result in an unchanged image, while a strength of 1 will result in an image with a completely new area as defined by the Mode setting.
- **Blur** - Adjusts the pixel radius of the mask. A larger blur radius will cause the mask to extend past the visibly masked area, while too small of a blur radius will result in a mask that is smaller than the visibly masked area.
- **Blur Method** - The method of blur applied to the masked area.
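The following is a minimal, illustrative sketch of blurred-mask compositing (it is not Invoke's actual Coherence Pass code, and the function and parameter names are hypothetical); it shows how a feathered mask blends newly generated pixels into the original image, which is the role the Blur setting plays:

```python
# Illustrative sketch only -- not Invoke's Coherence Pass implementation.
# Blends a newly generated region into the original image through a blurred
# (feathered) mask so the boundary has no hard seam. Assumes all three images
# share the same size; `blur_radius` plays the role of the Blur setting above.
import numpy as np
from PIL import Image, ImageFilter

def composite_with_blur(original: Image.Image, generated: Image.Image,
                        mask: Image.Image, blur_radius: int = 8) -> Image.Image:
    # Feather the mask so old and new pixels are blended across the boundary.
    feathered = mask.convert("L").filter(ImageFilter.GaussianBlur(blur_radius))
    alpha = np.asarray(feathered, dtype=np.float32)[..., None] / 255.0
    orig = np.asarray(original.convert("RGB"), dtype=np.float32)
    gen = np.asarray(generated.convert("RGB"), dtype=np.float32)
    blended = gen * alpha + orig * (1.0 - alpha)
    return Image.fromarray(blended.astype(np.uint8))
```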
### Infill & Scaling

index.md
@@ -18,7 +18,7 @@ title: Home

    width: 100%;
    max-width: 100%;
    height: 50px;
    background-color: #35A4DB;
    color: #fff;
    font-size: 16px;
    border: none;
@@ -43,7 +43,7 @@ title: Home

<div align="center" markdown>

[![project logo](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)](https://github.com/invoke-ai/InvokeAI)

[![discord badge]][discord link]
@@ -145,60 +145,6 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.

- [Guide to InvokeAI Runtime Settings](features/CONFIGURATION.md)
- [Database Maintenance and other Command Line Utilities](features/UTILITIES.md)

*(The "Important Changes Since Version 2.3" section that previously followed here is removed from index.md; its content is identical to the new docs/deprecated/2to3.md shown above.)*

## :material-target: Troubleshooting

Please check out our **[:material-frequently-asked-questions:
@@ -6,10 +6,17 @@ If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](

## Features

### Workflow Library
The Workflow Library enables you to save workflows to the Invoke database, allowing you to easily create, modify, and share workflows as needed.

A curated set of workflows is provided by default - these are designed to help explain important nodes' usage in the Workflow Editor.

![workflow_library](../assets/nodes/workflow_library.png)

### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.

To add an input to the Linear UI, right click on the **input label** and select "Add to Linear View".

The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enable others to use them, regardless of complexity.

@@ -30,7 +37,7 @@ Any node or input field can be renamed in the workflow editor. If the input field

Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing.
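As an illustration of what caching on node inputs means (a conceptual sketch, not InvokeAI's cache implementation; `run_cached` is a hypothetical helper that assumes JSON-serializable inputs):

```python
# Illustrative sketch only -- not InvokeAI's cache. If a node runs again with
# identical inputs, the previously computed output is returned instead of
# re-executing the node.
import hashlib
import json

_cache: dict[str, object] = {}

def run_cached(node_name: str, inputs: dict, fn, use_cache: bool = True):
    # Key the cache on the node name plus its (JSON-serializable) input values.
    key = hashlib.sha256(json.dumps([node_name, inputs], sort_keys=True).encode()).hexdigest()
    if use_cache and key in _cache:
        return _cache[key]      # cache hit: reuse the previous output
    result = fn(**inputs)       # cache miss: execute the node
    _cache[key] = result
    return result
```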

## Important Nodes & Concepts

There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).

@@ -56,7 +63,7 @@ The ImageToLatents node takes in a pixel image and a VAE and outputs latents.

It is common to want to use both the same seed (for continuity) and random seeds (for variety). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.

![groupsrandseed](../assets/nodes/groupsnoise.png)
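For intuition, the fixed-seed versus random-seed behaviour comes down to whether the noise generator is seeded; here is a minimal, illustrative torch sketch (not an InvokeAI node):

```python
# Illustrative only -- not an InvokeAI node. Shows the "same seed vs. random
# seed" idea from the paragraph above using seeded noise generation.
import torch

def make_noise(shape, seed=None) -> torch.Tensor:
    generator = torch.Generator().manual_seed(seed) if seed is not None else None
    return torch.randn(shape, generator=generator)

a = make_noise((1, 4, 64, 64), seed=1234)  # same seed ...
b = make_noise((1, 4, 64, 64), seed=1234)  # ... gives identical noise (continuity)
assert torch.equal(a, b)
c = make_noise((1, 4, 64, 64))             # no seed -> new noise each call (variety)
```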

### ControlNet
@@ -1,6 +1,6 @@

# Example Workflows

We've curated some example workflows for you to get started with Workflows in InvokeAI! These can also be found in the Workflow Library, located in the Workflow Editor of Invoke.

To use them, right click on your desired workflow, follow the link to GitHub and click the "⬇" button to download the raw file. You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!
@@ -215,6 +215,7 @@ We thank them for all of their time and hard work.

- Robert Bolender
- Robin Rombach
- Rohan Barar
- rohinish404
- rpagliuca
- rromb
- Rupesh Sreeraman
docs/stylesheets/extra.css (new file, 5 lines)
@@ -0,0 +1,5 @@

:root {
  --md-primary-fg-color: #35A4DB;
  --md-primary-fg-color--light: #35A4DB;
  --md-primary-fg-color--dark: #35A4DB;
}
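These CSS custom properties are how a custom color scheme is supplied to Material for MkDocs: with `primary: custom` selected in the theme palette (see the mkdocs.yml changes below), the theme takes its primary color from `--md-primary-fg-color` and its light/dark variants, here the Invoke blue `#35A4DB`.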
mkdocs.yml (37 changed lines)
@@ -14,6 +14,9 @@ edit_uri: edit/main/docs/

# Copyright
copyright: Copyright © 2023 InvokeAI Team

extra_css:
  - stylesheets/extra.css

# Configuration
theme:
  name: material
@@ -21,17 +24,18 @@

    repo: fontawesome/brands/github
    edit: material/file-document-edit-outline
  palette:
    # - media: '(prefers-color-scheme: light)'
    #   scheme: default
    #   primary: custom
    #   toggle:
    #     icon: material/lightbulb
    #     name: Switch to dark mode
    - media: '(prefers-color-scheme: dark)'
      scheme: slate
      primary: custom
      # toggle:
      #   icon: material/lightbulb-outline
      #   name: Switch to light mode
  features:
    - navigation.instant
    - navigation.tabs
@@ -43,6 +47,7 @@

    - search.suggest
    - toc.integrate

# Extensions
markdown_extensions:
  - abbr
@@ -130,14 +135,14 @@ nav:

      - Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
      - Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
  - Workflows & Nodes:
      - Community Nodes: 'nodes/communityNodes.md'
      - Example Workflows: 'nodes/exampleWorkflows.md'
      - Nodes Overview: 'nodes/overview.md'
      - Workflow Editor Basics: 'nodes/NODES.md'
      - List of Default Nodes: 'nodes/defaultNodes.md'
      - Workflow Editor Usage: 'nodes/NODES.md'
      - Example Workflows: 'nodes/exampleWorkflows.md'
      - ComfyUI to InvokeAI: 'nodes/comfyToInvoke.md'
      - Facetool Node: 'nodes/detailedNodes/faceTools.md'
      - Contributing Nodes: 'nodes/contributingNodes.md'
      - Community Nodes: 'nodes/communityNodes.md'
  - Features:
      - Overview: 'features/index.md'
      - New to InvokeAI?: 'help/gettingStartedWithAI.md'
@@ -147,7 +152,7 @@

      - Controlling Logging: 'features/LOGGING.md'
      - LoRAs & LCM-LoRAs: 'features/LORAS.md'
      - Model Merging: 'features/MODEL_MERGING.md'
      - Nodes & Workflows: 'nodes/overview.md'
      - Workflows & Nodes: 'nodes/overview.md'
      - NSFW Checker: 'features/WATERMARK+NSFW.md'
      - Postprocessing: 'features/POSTPROCESS.md'
      - Prompting Features: 'features/PROMPTS.md'
@@ -165,20 +170,20 @@

  - Development:
      - Overview: 'contributing/contribution_guides/development.md'
      - New Contributors: 'contributing/contribution_guides/newContributorChecklist.md'
      - InvokeAI Architecture: 'contributing/ARCHITECTURE.md'
      - Model Manager v2: 'contributing/MODEL_MANAGER.md'
      - Frontend Documentation: 'contributing/contribution_guides/contributingToFrontend.md'
      - Local Development: 'contributing/LOCAL_DEVELOPMENT.md'
      - Adding Tests: 'contributing/TESTS.md'
      - Testing: 'contributing/TESTS.md'
      - Documentation: 'contributing/contribution_guides/documentation.md'
      - Nodes: 'contributing/INVOCATIONS.md'
      - Model Manager: 'contributing/MODEL_MANAGER.md'
      - Download Queue: 'contributing/DOWNLOAD_QUEUE.md'
      - Translation: 'contributing/contribution_guides/translation.md'
      - Tutorials: 'contributing/contribution_guides/tutorials.md'
      - Changelog: 'CHANGELOG.md'
  - Deprecated:
      - InvokeAI 2.3: 'deprecated/2to3.md'
      - Command Line Interface: 'deprecated/CLI.md'
      - Changelog: 'CHANGELOG.md'
      - Variations: 'deprecated/VARIATIONS.md'
      - Translations: 'deprecated/TRANSLATION.md'
      - Embiggen: 'deprecated/EMBIGGEN.md'