diff --git a/docs/contributing/INVOCATIONS.md b/docs/contributing/INVOCATIONS.md
index be5d9d0805..6c2dc878a6 100644
--- a/docs/contributing/INVOCATIONS.md
+++ b/docs/contributing/INVOCATIONS.md
@@ -1,6 +1,6 @@
-# Invocations
+# Nodes
 
-Features in InvokeAI are added in the form of modular node-like systems called
+Features in InvokeAI are added in the form of modular node systems called
 **Invocations**.
 
 An Invocation is simply a single operation that takes in some inputs and gives
@@ -9,13 +9,34 @@ complex functionality.
 
 ## Invocations Directory
 
-InvokeAI Invocations can be found in the `invokeai/app/invocations` directory.
+InvokeAI Nodes can be found in the `invokeai/app/invocations` directory.
 These can be used as examples to create your own nodes.
 
-You can add your new functionality to one of the existing Invocations in this
-directory or create a new file in this directory as per your needs.
+New nodes should be added to a subfolder of the `nodes` directory found at the root level of the InvokeAI installation location. Nodes added to this folder will be loaded and made available on application startup.
+
+Example `nodes` subfolder structure:
+```
+├── __init__.py # Invoke-managed custom node loader
+│
+├── cool_node
+│   ├── __init__.py # see example below
+│   └── cool_node.py
+│
+└── my_node_pack
+    ├── __init__.py # see example below
+    ├── tasty_node.py
+    ├── bodacious_node.py
+    ├── utils.py
+    └── extra_nodes
+        └── fancy_node.py
+```
+
+Each node folder must have an `__init__.py` file that imports its nodes. Only nodes imported in the `__init__.py` file are loaded.
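For instance, a minimal `__init__.py` for the `my_node_pack` folder shown above might look like this; the invocation class names are placeholders that mirror the example file names, not real classes:

```py
# Hypothetical class names, matching the example tree above.
from .tasty_node import TastyInvocation
from .bodacious_node import BodaciousInvocation
from .extra_nodes.fancy_node import FancyInvocation

# utils.py is a plain helper module with no nodes, so it is not imported here.
```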
+
+See the README in the nodes folder for more examples:
+
+```py
+from .cool_node import CoolInvocation
+```
-**Note:** _All Invocations must be inside this directory for InvokeAI to
-recognize them as valid Invocations._
 
 ## Creating A New Invocation
 
diff --git a/docs/features/LORAS.md b/docs/features/LORAS.md
new file mode 100644
index 0000000000..3426311b41
--- /dev/null
+++ b/docs/features/LORAS.md
@@ -0,0 +1,51 @@
+---
+title: LoRAs & LCM-LoRAs
+---
+
+# :material-library-shelves: LoRAs & LCM-LoRAs
+
+With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
+
+## LoRAs
+
+Low-Rank Adaptation (LoRA) files are models that customize the output of Stable Diffusion
+image generation. Larger than embeddings, but much smaller than full
+models, they augment SD with improved understanding of subjects and
+artistic styles.
+
+Unlike TI files, LoRAs do not introduce novel vocabulary into the
+model's known tokens. Instead, LoRAs augment the model's weights that
+are applied to generate imagery. LoRAs may be supplied with a
+"trigger" word that they have been explicitly trained on, or may
+simply apply their effect without being triggered.
+
+LoRAs are typically stored in `.safetensors` files, which are the most
+secure way to store and transmit these types of weights. You may
+install any number of `.safetensors` LoRA files simply by copying them
+into the `autoimport/lora` directory of the corresponding InvokeAI models
+directory (usually `invokeai` in your home directory).
+
+To use these when generating, open the LoRA menu item in the options
+panel, select the LoRAs you want to apply, and ensure that they have
+the appropriate weight recommended by the model provider. Typically,
+most LoRAs perform best at a weight of 0.75-1.0.
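Conceptually, the autoimport scan just picks up `.safetensors` files under that folder; here is a minimal sketch of the idea (`find_loras` is an illustrative helper for this doc, not InvokeAI's actual model-manager API):

```python
from pathlib import Path

def find_loras(invokeai_root: Path) -> list[Path]:
    """List candidate LoRA files under <invokeai_root>/autoimport/lora."""
    lora_dir = invokeai_root / "autoimport" / "lora"
    if not lora_dir.is_dir():
        return []
    # LoRA files may be organized into subfolders, so search recursively.
    return sorted(lora_dir.rglob("*.safetensors"))
```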
+
+
+## LCM-LoRAs
+Latent Consistency Models (LCMs) allow a reduced number of steps to be used to generate images with Stable Diffusion. These are created by distilling base models, creating models that only require a small number of steps to generate images. However, LCMs require that any fine-tune of a base model be distilled to be used as an LCM.
+
+LCM-LoRAs are models that provide the benefit of LCMs but can be used as LoRAs and applied to any fine-tune of a base model. LCM-LoRAs are created by training a small number of adapters, rather than distilling the entire fine-tuned base model. The resulting LoRA can be used the same way as a standard LoRA, but with a greatly reduced step count. This enables SDXL images to be generated up to 10x faster than without the use of LCM-LoRAs.
+
+
+**Using LCM-LoRAs**
+LCM-LoRAs are natively supported in InvokeAI throughout the application. To get started, install any diffusers-format LCM-LoRA using the model manager and select it in the LoRA field.
+
+There are a number of parameter differences between LCM-LoRA and standard generation:
+- When using LCM-LoRAs, the LoRA strength should be lower than with a standard LoRA, with 0.35 recommended as a starting point.
+- The LCM scheduler should be used for generation.
+- CFG-Scale should be reduced to ~1.
+- Steps should be reduced to the range of 4-8.
+
+Standard LoRAs can also be used alongside LCM-LoRAs, but will also require a lower strength, with 0.45 recommended as a starting point.
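The parameter adjustments above can be summarized in a small sketch; the values are the starting points suggested in this section (tune them per model), and the function name is illustrative rather than part of InvokeAI:

```python
def lcm_lora_settings(with_standard_lora: bool = False) -> dict:
    """Suggested starting parameters when an LCM-LoRA is applied."""
    settings = {
        "scheduler": "LCM",       # use the LCM scheduler
        "cfg_scale": 1.0,         # CFG-Scale reduced to ~1
        "steps": 6,               # somewhere in the 4-8 range
        "lcm_lora_weight": 0.35,  # lower than a typical standard-LoRA weight
    }
    if with_standard_lora:
        # standard LoRAs stacked alongside an LCM-LoRA also need lower strength
        settings["standard_lora_weight"] = 0.45
    return settings
```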
+
+More information can be found here: https://huggingface.co/blog/lcm_lora#fast-inference-with-sdxl-lcm-loras
diff --git a/docs/features/CONCEPTS.md b/docs/features/TEXTUAL_INVERSIONS.md
similarity index 66%
rename from docs/features/CONCEPTS.md
rename to docs/features/TEXTUAL_INVERSIONS.md
index 5f3d2d961f..a3ede80d1f 100644
--- a/docs/features/CONCEPTS.md
+++ b/docs/features/TEXTUAL_INVERSIONS.md
@@ -1,12 +1,3 @@
----
-title: Textual Inversion Embeddings and LoRAs
----
-
-# :material-library-shelves: Textual Inversions and LoRAs
-
-With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
-
-
 ## Using Textual Inversion Files
 
 Textual inversion (TI) files are small models that customize the output of
@@ -61,29 +52,4 @@ files it finds there for compatible models. At startup you will see a message si
 >> Current embedding manager terms: , 
 ```
 To use these when generating, simply type the `<` key in your prompt to open the Textual Inversion WebUI and
-select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.
-
-## Using LoRAs
-
-LoRA files are models that customize the output of Stable Diffusion
-image generation. Larger than embeddings, but much smaller than full
-models, they augment SD with improved understanding of subjects and
-artistic styles.
-
-Unlike TI files, LoRAs do not introduce novel vocabulary into the
-model's known tokens. Instead, LoRAs augment the model's weights that
-are applied to generate imagery. LoRAs may be supplied with a
-"trigger" word that they have been explicitly trained on, or may
-simply apply their effect without being triggered.
-
-LoRAs are typically stored in .safetensors files, which are the most
-secure way to store and transmit these types of weights. You may
-install any number of `.safetensors` LoRA files simply by copying them
-into the `autoimport/lora` directory of the corresponding InvokeAI models
-directory (usually `invokeai` in your home directory).
-
-To use these when generating, open the LoRA menu item in the options
-panel, select the LoRAs you want to apply and ensure that they have
-the appropriate weight recommended by the model provider. Typically,
-most LoRAs perform best at a weight of .75-1.
-
+select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.
\ No newline at end of file
diff --git a/docs/help/FAQ.md b/docs/help/FAQ.md
new file mode 100644
index 0000000000..80aec1bcea
--- /dev/null
+++ b/docs/help/FAQ.md
@@ -0,0 +1,43 @@
+# FAQs
+
+**Where do I get started? How can I install Invoke?**
+
+- You can download the latest installers [here](https://github.com/invoke-ai/InvokeAI/releases) - Note that any releases marked as *pre-release* are in a beta state. You may experience some issues, but we appreciate your help testing those! For stable/reliable installations, please install the **[Latest Release](https://github.com/invoke-ai/InvokeAI/releases/latest)**.
+
+**How can I download models? Can I use models I already have downloaded?**
+
+- Models can be downloaded through the model manager, or through option [4] in the invoke.bat/invoke.sh launcher script. To download a model through the Model Manager, use the HuggingFace Repo ID by pressing the “Copy” button next to the repository name. Alternatively, to download a model from Civitai, use the download link in the Model Manager.
+- Models that are already downloaded can be used by creating a symlink to the model location in the `autoimport` folder or by using the Model Manager’s “Scan for Models” function.
+
+**My images are taking a long time to generate. How can I speed up generation?**
+
+- A common solution is to reduce the size of your RAM & VRAM cache to 0.25. This ensures your system has enough memory to generate images.
+- Additionally, check the [hardware requirements](https://invoke-ai.github.io/InvokeAI/#hardware-requirements) to ensure that your system is capable of generating images.
+- Lastly, double-check that your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup.
+
+**I’ve installed Python on Windows but the installer says it can’t find it?**
+
+- Ensure that you checked **'Add python.exe to PATH'** when installing Python. This can be found at the bottom of the Python Installer window. If you already have Python installed, this can be done with the modify/repair feature of the installer.
+
+**I’ve installed everything successfully but I still get an error about Triton when starting Invoke?**
+
+- This can be safely ignored. InvokeAI doesn't use Triton, but if you are on Linux and wish to dismiss the error, you can install Triton.
+
+**I updated to 3.4.0 and now xFormers can’t load C++/CUDA?**
+
+- An issue occurred with your PyTorch update. Follow these steps to fix it:
+    1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
+    2. Run: `pip install ".[xformers]" --upgrade --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121`
+    - If you run into an error with `typing_extensions`, re-open the developer console and run: `pip install -U typing-extensions`
+
+**It says my pip is out of date - is that why my install isn't working?**
+- An out-of-date pip won't cause an installation to fail. The cause of the error can likely be found above the message that says pip is out of date.
+- If you saw that warning but the install went well, don't worry about it (but you can update pip afterwards if you'd like).
+
+**How can I generate the exact same image that I found on the internet?**
+- Most example images with prompts that you'll find on the internet have been generated using different software, so you can't expect to get identical results. In order to reproduce an image, you need to replicate the exact settings and processing steps, including (but not limited to) the model, the positive and negative prompts, the seed, the sampler, the exact image size, any upscaling steps, etc.
+
+
+**Where can I get more help?**
+
+- Create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI/issues) or post in the [#help channel](https://discord.com/channels/1020123559063990373/1149510134058471514) of the InvokeAI Discord
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index 43c9e7e93b..124abf222d 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -101,16 +101,13 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
-!!! Note
-
-    This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates as it will help aid response time.
-
 ## :octicons-link-24: Quick Links
+
diff --git a/mkdocs.yml b/mkdocs.yml
index 97b2a16f19..de08d4ff4a 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -144,12 +144,15 @@ nav:
     - Control Adapters: 'features/CONTROLNET.md'
     - Image-to-Image: 'features/IMG2IMG.md'
     - Controlling Logging: 'features/LOGGING.md'
+    - LoRAs & LCM-LoRAs: 'features/LORAS.md'
     - Model Merging: 'features/MODEL_MERGING.md'
-    - Using Nodes : 'nodes/overview.md'
+    - Nodes & Workflows: 'nodes/overview.md'
     - NSFW Checker: 'features/WATERMARK+NSFW.md'
     - Postprocessing: 'features/POSTPROCESS.md'
     - Prompting Features: 'features/PROMPTS.md'
-    - Textual Inversion Training: 'features/TRAINING.md'
+    - Textual Inversions:
+      - Textual Inversions: 'features/TEXTUAL_INVERSIONS.md'
+      - Textual Inversion Training: 'features/TRAINING.md'
     - Unified Canvas: 'features/UNIFIED_CANVAS.md'
     - InvokeAI Web Server: 'features/WEB.md'
     - WebUI Hotkeys: "features/WEBUIHOTKEYS.md"
@@ -180,6 +183,7 @@ nav:
     - Troubleshooting: 'help/deprecated/TROUBLESHOOT.md'
   - Help:
     - Getting Started: 'help/gettingStartedWithAI.md'
+    - FAQs: 'help/FAQ.md'
     - Diffusion Overview: 'help/diffusion.md'
     - Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md'
 - Other: