From 4530bd46dcd4e426b688589e8138f9d4f1787d0b Mon Sep 17 00:00:00 2001
From: Millun Atluri
Date: Tue, 26 Sep 2023 17:30:34 +1000
Subject: [PATCH] Added IP-Adapter

---
 docs/features/CONTROLNET.md | 38 +++++++++++++++++++++++++++++++-------
 mkdocs.yml                  |  6 +++---
 2 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/docs/features/CONTROLNET.md b/docs/features/CONTROLNET.md
index 42ed43146e..a77e58a10a 100644
--- a/docs/features/CONTROLNET.md
+++ b/docs/features/CONTROLNET.md
@@ -1,13 +1,11 @@
 ---
-title: ControlNet
+title: Control Adapters
 ---

-# :material-loupe: ControlNet
+# :material-loupe: Control Adapters

 ## ControlNet

-ControlNet
-
 ControlNet is a powerful set of features developed by the open-source
 community (notably, Stanford researcher
 [**@ilyasviel**](https://github.com/lllyasviel)) that allows you to
@@ -20,7 +18,7 @@ towards generating images that better fit your desired style or
 outcome.


-### How it works
+#### How it works

 ControlNet works by analyzing an input image, pre-processing that
 image to identify relevant information that can be interpreted by each
@@ -30,7 +28,7 @@ composition, or other aspects of the image to better achieve a
 specific result.


-### Models
+#### Models

 InvokeAI provides access to a series of ControlNet models that provide
 different effects or styles in your generated images. Currently
@@ -96,6 +94,8 @@ A model that generates normal maps from input images, allowing for more realisti
 **Image Segmentation**:
 A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)

+**QR Code Monster**:
+A model that helps generate creative QR codes that still scan. It can also be used to create images with text, logos, or shapes within them.
 **Openpose**:
 The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.

@@ -120,7 +120,7 @@ With Pix2Pix, you can input an image into the controlnet, and then "instruct" th

 Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.

-## Using ControlNet
+### Using ControlNet

 To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.

@@ -132,3 +132,27 @@ Weight - Strength of the Controlnet model applied to the generation for the sect
 Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the ControlNet applied.

 Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it in when you Invoke.
+
+
+## IP-Adapter
+
+[IP-Adapter](https://ip-adapter.github.io) is a tool that adds image-prompt capabilities to text-to-image diffusion models. IP-Adapter works by analyzing the given image prompt to extract features, then passing those features to the UNet along with any other conditioning provided.
+
+#### Installation
+There are several ways to install IP-Adapter models with an existing InvokeAI installation:
+
+1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, using option [5] to download models
+2. Install IP-Adapter models through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](www.models.invoke.ai). To do this, copy the repo ID from the desired model page and paste it into the Add Model field of the Model Manager.
+3. Manually download the model files and place them in the `models/ip-adapter` folder of the Invoke root directory. *Note:* The image_encoder folder is necessary for IP-Adapter to function.
+
+#### Using IP-Adapter
+
+IP-Adapter can be used by navigating to the *Control Adapters* options and enabling IP-Adapter.
+
+IP-Adapter requires an image to be used as the Image Prompt. It can also be used in conjunction with text prompts, Image-to-Image, ControlNets and LoRAs.
+
+
+Each IP-Adapter has two settings that are applied to it:
+
+* Weight - Strength of the IP-Adapter model applied to the generation for the section, defined by start/end
+* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls which steps during the generation process have the IP-Adapter applied.
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
index f4e0688878..f95d83ac8f 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -127,12 +127,12 @@ nav:
       - Manual Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md'
       - Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
       - Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
-  - Nodes:
+  - Workflows & Nodes:
       - Community Nodes: 'nodes/communityNodes.md'
       - Example Workflows: 'nodes/exampleWorkflows.md'
       - Nodes Overview: 'nodes/overview.md'
       - List of Default Nodes: 'nodes/defaultNodes.md'
-      - Node Editor Usage: 'nodes/NODES.md'
+      - Workflow Editor Usage: 'nodes/NODES.md'
       - ComfyUI to InvokeAI: 'nodes/comfyToInvoke.md'
       - Contributing Nodes: 'nodes/contributingNodes.md'
   - Features:
@@ -140,7 +140,7 @@ nav:
       - New to InvokeAI?: 'help/gettingStartedWithAI.md'
       - Concepts: 'features/CONCEPTS.md'
       - Configuration: 'features/CONFIGURATION.md'
-      - ControlNet: 'features/CONTROLNET.md'
+      - Control Adapters: 'features/CONTROLNET.md'
       - Image-to-Image: 'features/IMG2IMG.md'
       - Controlling Logging: 'features/LOGGING.md'
       - Model Merging: 'features/MODEL_MERGING.md'
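The Weight and Start/End controls documented for ControlNet in the patch above have close counterparts in the Hugging Face `diffusers` library. The sketch below is an illustration only, not part of the patch: it assumes a recent `diffusers` release with ControlNet support, a CUDA-capable GPU, and placeholder file and model names (`reference.png`, `controlnet_output.png`, and the commonly used `lllyasviel/sd-controlnet-canny` and `runwayml/stable-diffusion-v1-5` checkpoints).

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Pre-processing step: reduce the reference image to a Canny edge map,
# the hint image that this particular ControlNet model conditions on.
reference = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(reference, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="a castle on a cliff at sunset, dramatic lighting",
    image=control_image,
    controlnet_conditioning_scale=0.8,  # analogous to the "Weight" setting
    control_guidance_start=0.0,         # analogous to "Start" (fraction of the steps)
    control_guidance_end=0.8,           # analogous to "End" (fraction of the steps)
).images[0]
result.save("controlnet_output.png")
```

Lowering `controlnet_conditioning_scale`, or narrowing the window between `control_guidance_start` and `control_guidance_end`, weakens the control image's influence on the result, which is the same trade-off the Weight and Start/End sliders expose in the InvokeAI UI.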
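The IP-Adapter behaviour described above, where an image encoder extracts features from the image prompt and those features are passed to the UNet as additional conditioning scaled by Weight, can be sketched with the same library. Again this is only an illustration under stated assumptions: a `diffusers` release recent enough to include IP-Adapter loading, the publicly hosted `h94/IP-Adapter` weights, and placeholder file names.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the IP-Adapter weights together with their paired CLIP image encoder;
# the encoder turns the image prompt into features that condition the UNet.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # analogous to the IP-Adapter "Weight" setting

image_prompt = Image.open("image_prompt.png").convert("RGB")
result = pipe(
    prompt="a cozy reading nook, warm evening light",
    ip_adapter_image=image_prompt,  # the Image Prompt
).images[0]
result.save("ip_adapter_output.png")
```

The loader pulls in the paired image encoder alongside the adapter weights, which mirrors the note above that the image_encoder folder must be present for a manual InvokeAI install to work.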