Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00

Docs Update (python version & T2I) (#4867)

* Updated Control Adapter docs
* Fixed typo
* Updated docs for Python 3.10
* Updated diffusers language
* Documented current T2I-Adapter usage
* Updated test-invoke-pip.yml

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

This commit is contained in:
parent 96e80c71fb
commit 677918df61
@@ -123,7 +123,7 @@ and go to http://localhost:9090.
 
 ### Command-Line Installation (for developers and users familiar with Terminals)
 
-You must have Python 3.9 through 3.11 installed on your machine. Earlier or
+You must have Python 3.10 through 3.11 installed on your machine. Earlier or
 later versions are not supported.
 Node.js also needs to be installed along with yarn (can be installed with
 the command `npm install -g yarn` if needed)
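The supported-version bounds above can be checked programmatically before installing. A minimal sketch; the function name is hypothetical, and the 3.10–3.11 inclusive range is taken from the docs text in this hunk:

```python
import sys

# Supported range per the updated docs: Python 3.10 through 3.11 (inclusive).
MIN_VERSION = (3, 10)
MAX_VERSION = (3, 11)

def python_supported(version_info=sys.version_info) -> bool:
    """Return True if the given interpreter version falls in the supported range."""
    major_minor = (version_info[0], version_info[1])
    return MIN_VERSION <= major_minor <= MAX_VERSION

# Checks against the documented bounds:
assert python_supported((3, 10, 0))
assert python_supported((3, 11, 4))
assert not python_supported((3, 9, 18))   # too old after this docs update
assert not python_supported((3, 12, 0))   # too new
```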
@@ -17,9 +17,6 @@ image generation, providing you with a way to direct the network
 towards generating images that better fit your desired style or
 outcome.
 
-
-#### How it works
-
 ControlNet works by analyzing an input image, pre-processing that
 image to identify relevant information that can be interpreted by each
 specific ControlNet model, and then inserting that control information
@@ -27,35 +24,21 @@ into the generation process. This can be used to adjust the style,
 composition, or other aspects of the image to better achieve a
 specific result.
 
-#### Models
+#### Installation
 
 InvokeAI provides access to a series of ControlNet models that provide
-different effects or styles in your generated images. Currently
-InvokeAI only supports "diffuser" style ControlNet models. These are
-folders that contain the files `config.json` and/or
-`diffusion_pytorch_model.safetensors` and
-`diffusion_pytorch_model.fp16.safetensors`. The name of the folder is
-the name of the model.
+different effects or styles in your generated images.
 
-***InvokeAI does not currently support checkpoint-format
-ControlNets. These come in the form of a single file with the
-extension `.safetensors`.***
+To install ControlNet Models:
 
-Diffuser-style ControlNet models are available at HuggingFace
-(http://huggingface.co) and accessed via their repo IDs (identifiers
-in the format "author/modelname"). The easiest way to install them is
+1. The easiest way to install them is
 to use the InvokeAI model installer application. Use the
 `invoke.sh`/`invoke.bat` launcher to select item [4] and then navigate
 to the CONTROLNETS section. Select the models you wish to install and
 press "APPLY CHANGES". You may also enter additional HuggingFace
-repo_ids in the "Additional models" textbox:
+repo_ids in the "Additional models" textbox.
+
+2. Using the "Add Model" function of the model manager, enter the HuggingFace Repo ID of the ControlNet. The ID is in the format "author/repoName"
 
-![Model Installer -
-Controlnetl](../assets/installing-models/model-installer-controlnet.png){:width="640px"}
-
-Command-line users can launch the model installer using the command
-`invokeai-model-install`.
-
 _Be aware that some ControlNet models require additional code
 functionality in order to work properly, so just installing a
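Both install options in this hunk take a HuggingFace Repo ID in the "author/repoName" format. A quick way to sanity-check an ID before pasting it into the "Additional models" textbox can be sketched as follows; the regex is an illustrative assumption, not InvokeAI's actual validation:

```python
import re

# Loose pattern for HuggingFace repo IDs of the form "author/repoName".
REPO_ID_RE = re.compile(r"^[\w.-]+/[\w.-]+$")

def is_repo_id(candidate: str) -> bool:
    """True if the string looks like an "author/repoName" HuggingFace ID."""
    return REPO_ID_RE.fullmatch(candidate) is not None

# A real ControlNet repo ID passes; malformed strings do not.
assert is_repo_id("lllyasviel/sd-controlnet-canny")
assert not is_repo_id("missing-slash")
assert not is_repo_id("too/many/parts")
```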
@@ -63,6 +46,17 @@ third-party ControlNet model may not have the desired effect._ Please
 read and follow the documentation for installing a third party model
 not currently included among InvokeAI's default list.
 
+Currently InvokeAI **only** supports 🤗 Diffusers-format ControlNet models. These are
+folders that contain the files `config.json` and/or
+`diffusion_pytorch_model.safetensors` and
+`diffusion_pytorch_model.fp16.safetensors`. The name of the folder is
+the name of the model.
+
+🤗 Diffusers-format ControlNet models are available at HuggingFace
+(http://huggingface.co) and accessed via their repo IDs (identifiers
+in the format "author/modelname").
+
+#### ControlNet Models
 The models currently supported include:
 
 **Canny**:
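The folder layout the added lines describe can be checked with a short script. A sketch under the stated assumptions: the file names come from the docs text in this hunk, while the function names and the heuristic itself are hypothetical, not InvokeAI's loader logic:

```python
from pathlib import Path

# File names a 🤗 Diffusers-format ControlNet folder is documented to contain.
CONFIG = "config.json"
WEIGHTS = (
    "diffusion_pytorch_model.safetensors",
    "diffusion_pytorch_model.fp16.safetensors",
)

def looks_like_diffusers_controlnet(folder: Path) -> bool:
    """Heuristic: the folder holds config.json and/or one of the weights files."""
    has_config = (folder / CONFIG).is_file()
    has_weights = any((folder / name).is_file() for name in WEIGHTS)
    return has_config or has_weights

def model_name(folder: Path) -> str:
    """Per the docs, the name of the folder is the name of the model."""
    return folder.name
```

For example, a folder `sd-controlnet-canny/` containing only `config.json` would pass the check, and its model name would be `sd-controlnet-canny`.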
@@ -133,6 +127,30 @@ Start/End - 0 represents the start of the generation, 1 represents the end. The
 
 Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before it is used when you Invoke.
 
+## T2I-Adapter
+
+[T2I-Adapter](https://github.com/TencentARC/T2I-Adapter) is a tool similar to ControlNet that allows for control over the generation process by providing control information during generation. T2I-Adapter models tend to be smaller and more efficient than ControlNets.
+
+##### Installation
+
+To install T2I-Adapter Models:
+
+1. The easiest way to install models is
+to use the InvokeAI model installer application. Use the
+`invoke.sh`/`invoke.bat` launcher to select item [5] and then navigate
+to the T2I-Adapters section. Select the models you wish to install and
+press "APPLY CHANGES". You may also enter additional HuggingFace
+repo_ids in the "Additional models" textbox.
+
+2. Using the "Add Model" function of the model manager, enter the HuggingFace Repo ID of the T2I-Adapter. The ID is in the format "author/repoName"
+
+#### Usage
+
+Each T2I-Adapter has two settings that are applied.
+
+Weight - Strength of the model applied to the generation for the section, defined by start/end.
+
+Start/End - 0 represents the start of the generation, 1 represents the end. The Start/End setting controls which steps during the generation process have the T2I-Adapter applied.
+
+Additionally, each section can be expanded with the "Show Advanced" button in order to manipulate settings for the image pre-processor that adjusts your uploaded image before it is used during the generation process.
+
+**Note:** T2I-Adapter models and ControlNet models cannot currently be used together.
+
 ## IP-Adapter
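The Weight and Start/End semantics added in this hunk can be illustrated numerically. A sketch with hypothetical step counts and a hypothetical helper name; only the 0-to-1 start/end convention comes from the docs text:

```python
def steps_in_window(num_steps: int, start: float, end: float) -> list[int]:
    """Return the step indices whose normalized position (0 = first step,
    1 = last step) falls inside the [start, end] window during which the
    adapter is applied."""
    if num_steps < 2:
        return [0] if start <= 0.0 <= end else []
    return [i for i in range(num_steps)
            if start <= i / (num_steps - 1) <= end]

# With 10 steps, a 0.0-0.5 window covers the first half of generation.
assert steps_in_window(10, 0.0, 0.5) == [0, 1, 2, 3, 4]
# A 0.0-1.0 window applies the adapter for the entire generation.
assert steps_in_window(10, 0.0, 1.0) == list(range(10))
```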
@@ -140,7 +158,7 @@ Additionally, each ControlNet section can be expanded in order to manipulate set
 
 ![IP-Adapter + T2I](https://github.com/tencent-ailab/IP-Adapter/raw/main/assets/demo/ip_adpter_plus_multi.jpg)
 
-![IP-Adapter + IMG2IMG](https://github.com/tencent-ailab/IP-Adapter/blob/main/assets/demo/image-to-image.jpg)
+![IP-Adapter + IMG2IMG](https://raw.githubusercontent.com/tencent-ailab/IP-Adapter/main/assets/demo/image-to-image.jpg)
 
 #### Installation
 There are several ways to install IP-Adapter models with an existing InvokeAI installation:
@@ -57,7 +57,9 @@ Prompts provide the models directions on what to generate. As a general rule of
 
 Models are the magic that power InvokeAI. These files represent the output of training a machine on understanding massive amounts of images - providing them with the capability to generate new images using just a text description of what you’d like to see. (Like Stable Diffusion!)
 
-Invoke offers a simple way to download several different models upon installation, but many more can be discovered online, including at ****. Each model can produce a unique style of output, based on the images it was trained on - Try out different models to see which best fits your creative vision!
+Invoke offers a simple way to download several different models upon installation, but many more can be discovered online, including at https://models.invoke.ai
+
+Each model can produce a unique style of output, based on the images it was trained on - Try out different models to see which best fits your creative vision!
 
 - *Models that contain “inpainting” in the name are designed for use with the inpainting feature of the Unified Canvas*
@@ -181,7 +181,7 @@ This includes 15 Nodes:
 
 **Output Example:**
 
-<img src="https://github.com/helix4u/load_video_frame/blob/main/testmp4_embed_converted.gif" width="500" />
+<img src="https://raw.githubusercontent.com/helix4u/load_video_frame/main/testmp4_embed_converted.gif" width="500" />
 [Full mp4 of Example Output test.mp4](https://github.com/helix4u/load_video_frame/blob/main/test.mp4)
 
 --------------------------------
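This hunk swaps a github.com `/blob/` page URL for its raw.githubusercontent.com equivalent so the embedded image fetches the file rather than the HTML viewer page. The rewrite rule can be sketched as a simple string transform; this is an illustrative helper, not an official GitHub API:

```python
def blob_to_raw(url: str) -> str:
    """Rewrite a github.com /blob/ page URL to its raw.githubusercontent.com
    equivalent so the file itself (not the HTML viewer) is fetched."""
    prefix = "https://github.com/"
    if url.startswith(prefix) and "/blob/" in url:
        rest = url[len(prefix):]                   # "user/repo/blob/branch/path"
        user_repo, path = rest.split("/blob/", 1)  # ("user/repo", "branch/path")
        return f"https://raw.githubusercontent.com/{user_repo}/{path}"
    return url  # leave non-blob URLs untouched

assert blob_to_raw(
    "https://github.com/helix4u/load_video_frame/blob/main/testmp4_embed_converted.gif"
) == "https://raw.githubusercontent.com/helix4u/load_video_frame/main/testmp4_embed_converted.gif"
```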