diff --git a/docs/features/IMG2IMG.md b/docs/features/IMG2IMG.md index 8729ec9410..6589027761 100644 --- a/docs/features/IMG2IMG.md +++ b/docs/features/IMG2IMG.md @@ -19,13 +19,13 @@ tree on a hill with a river, nature photograph, national geographic -I./test-pic This will take the original image shown here:
-![](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png) +![original-image](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png){ width=320 }
and generate a new image based on it as shown here:
-![](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png) +![generated-image](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png){ width=320 }
The `--init_img` (`-I`) option gives the path to the seed picture. `--strength` @@ -45,15 +45,16 @@ Note that the prompt makes a big difference. For example, this slight variation on the prompt produces a very different image:
-![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png) +![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png){ width=320 } photograph of a tree on a hill with a river
!!! tip - When designing prompts, think about how the images scraped from the internet were captioned. Very few photographs will - be labeled "photograph" or "photorealistic." They will, however, be captioned with the publication, photographer, camera - model, or film settings. + When designing prompts, think about how the images scraped from the internet were + captioned. Very few photographs will be labeled "photograph" or "photorealistic." + They will, however, be captioned with the publication, photographer, camera model, + or film settings. If the initial image contains transparent regions, then Stable Diffusion will only draw within the transparent regions, a process called @@ -61,17 +62,17 @@ only draw within the transparent regions, a process called However, for this to work correctly, the color information underneath the transparent needs to be preserved, not erased. -!!! warning +!!! warning "**IMPORTANT ISSUE** " -**IMPORTANT ISSUE** `img2img` does not work properly on initial images smaller -than 512x512. Please scale your image to at least 512x512 before using it. -Larger images are not a problem, but may run out of VRAM on your GPU card. To -fix this, use the --fit option, which downscales the initial image to fit within -the box specified by width x height: + `img2img` does not work properly on initial images smaller + than 512x512. Please scale your image to at least 512x512 before using it. + Larger images are not a problem, but may run out of VRAM on your GPU card. To + fix this, use the --fit option, which downscales the initial image to fit within + the box specified by width x height: -``` -tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit -``` + ``` + tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit + ``` ## How does it actually work, though? @@ -87,7 +88,7 @@ from a prompt. 
If the step count is 10, then the "latent space" (Stable Diffusion's internal representation of the image) for the prompt "fire" with seed `1592514025` develops something like this: -```commandline +```bash invoke> "fire" -s10 -W384 -H384 -S1592514025 ``` @@ -133,9 +134,9 @@ Notice how much more fuzzy the starting image is for strength `0.7` compared to | | strength = 0.7 | strength = 0.4 | | --------------------------- | ------------------------------------------------------------- | ------------------------------------------------------------- | -| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) | +| initial image that SD sees | ![step-0](../assets/img2img/000032.step-0.png) | ![step-0](../assets/img2img/000030.step-0.png) | | steps argument to `invoke>` | `-S10` | `-S10` | -| steps actually taken | 7 | 4 | +| steps actually taken | `7` | `4` | | latent space at each step | ![gravity32](../assets/img2img/000032.steps.gravity.png) | ![gravity30](../assets/img2img/000030.steps.gravity.png) | | output | ![000032.1592514025](../assets/img2img/000032.1592514025.png) | ![000030.1592514025](../assets/img2img/000030.1592514025.png) | @@ -150,7 +151,7 @@ If you want to try this out yourself, all of these are using a seed of `1592514025` with a width/height of `384`, step count `10`, the default sampler (`k_lms`), and the single-word prompt `"fire"`: -```commandline +```bash invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7 ``` @@ -170,7 +171,7 @@ give each generation 20 steps. 
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD does `20` steps from my image): -```commandline +```bash invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4 ``` diff --git a/docs/installation/INSTALL_DOCKER.md b/docs/installation/INSTALL_DOCKER.md index 326ad39021..29f8e0b528 100644 --- a/docs/installation/INSTALL_DOCKER.md +++ b/docs/installation/INSTALL_DOCKER.md @@ -4,12 +4,17 @@ title: Docker # :fontawesome-brands-docker: Docker -## Before you begin +!!! warning "For end users" -- For end users: Install InvokeAI locally using the instructions for your OS. -- For developers: For container-related development tasks or for enabling easy - deployment to other environments (on-premises or cloud), follow these - instructions. For general use, install locally to leverage your machine's GPU. + We highly recommend installing InvokeAI locally using [these instructions](index.md). + +!!! tip "For developers" + + For container-related development tasks or for enabling easy + deployment to other environments (on-premises or cloud), follow these + instructions. + + For general use, install locally to leverage your machine's GPU. ## Why containers? @@ -94,11 +99,11 @@ After the build process is done, you can run the container via the provided ./docker-build/run.sh ``` -When used without arguments, the container will start the website and provide +When used without arguments, the container will start the webserver and provide you the link to open it. But if you want to use some other parameters you can also do so. -!!! example +!!! example "" ```bash docker-build/run.sh --from_file tests/validate_pr_prompt.txt @@ -112,7 +117,7 @@ also do so. !!! warning "Deprecated" From here on you will find the previous Docker-Docs, which will still provide some useful information.
## Usage (time to have fun) diff --git a/docs/installation/INSTALL_JUPYTER.md b/docs/installation/INSTALL_JUPYTER.md index aa8efd6630..8807423be5 100644 --- a/docs/installation/INSTALL_JUPYTER.md +++ b/docs/installation/INSTALL_JUPYTER.md @@ -14,8 +14,7 @@ download the notebook from the link above and load it up in VSCode (with the appropriate extensions installed)/Jupyter/JupyterLab and start running the cells one-by-one. -Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehand. - +!!! note "You will need NVIDIA drivers, Python 3.10, and Git installed beforehand" ## Walkthrough @@ -25,4 +24,4 @@ Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehan ### Updating to the development version -## Troubleshooting \ No newline at end of file +## Troubleshooting diff --git a/docs/installation/INSTALL_MANUAL.md b/docs/installation/INSTALL_MANUAL.md index f0c8a809aa..3a0ffb7fb7 100644 --- a/docs/installation/INSTALL_MANUAL.md +++ b/docs/installation/INSTALL_MANUAL.md @@ -2,51 +2,54 @@ title: Manual Installation --- -# :fontawesome-brands-linux: Linux -# :fontawesome-brands-apple: macOS -# :fontawesome-brands-windows: Windows +
+# :fontawesome-brands-linux: Linux | :fontawesome-brands-apple: macOS | :fontawesome-brands-windows: Windows +
+ +!!! warning "This is for advanced users" + + who are already experienced with using conda or pip. ## Introduction -You have two choices for manual installation, the [first -one](#Conda_method) based on the Anaconda3 package manager (`conda`), -and [a second one](#PIP_method) which uses basic Python virtual -environment (`venv`) commands and the PIP package manager. Both -methods require you to enter commands on the command-line shell, also -known as the "console". +You have two choices for manual installation, the [first one](#Conda_method) +based on the Anaconda3 package manager (`conda`), and +[a second one](#PIP_method) which uses basic Python virtual environment (`venv`) +commands and the PIP package manager. Both methods require you to enter commands +on the terminal, also known as the "console". On Windows systems you are encouraged to install and use the [Powershell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3), -which provides compatibility with Linux and Mac shells and nice -features such as command-line completion. +which provides compatibility with Linux and Mac shells and nice features such as +command-line completion. ### Conda method -1. Check that your system meets the [hardware -requirements](index.md#Hardware_Requirements) and has the appropriate -GPU drivers installed. In particular, if you are a Linux user with an -AMD GPU installed, you may need to install the [ROCm -driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html). +1. Check that your system meets the + [hardware requirements](index.md#Hardware_Requirements) and has the + appropriate GPU drivers installed. In particular, if you are a Linux user + with an AMD GPU installed, you may need to install the + [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
-InvokeAI does not yet support Windows machines with AMD GPUs due to -the lack of ROCm driver support on this platform. + InvokeAI does not yet support Windows machines with AMD GPUs due to the lack + of ROCm driver support on this platform. -To confirm that the appropriate drivers are installed, run -`nvidia-smi` on NVIDIA/CUDA systems, and `rocm-smi` on AMD -systems. These should return information about the installed video -card. + To confirm that the appropriate drivers are installed, run `nvidia-smi` on + NVIDIA/CUDA systems, and `rocm-smi` on AMD systems. These should return + information about the installed video card. -Macintosh users with MPS acceleration, or anybody with a CPU-only -system, can skip this step. + Macintosh users with MPS acceleration, or anybody with a CPU-only system, + can skip this step. -2. You will need to install Anaconda3 and Git if they are not already -available. Use your operating system's preferred installer, or -download installers from the following URLs +2. You will need to install Anaconda3 and Git if they are not already + available. Use your operating system's preferred package manager, or + download the installers manually. You can find them here: - - Anaconda3 (https://www.anaconda.com/) - - git (https://git-scm.com/downloads) + - [Anaconda3](https://www.anaconda.com/) + - [git](https://git-scm.com/downloads) -3. Copy the InvokeAI source code from GitHub using `git`: +3. Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from + GitHub: ```bash git clone https://github.com/invoke-ai/InvokeAI.git @@ -55,122 +58,160 @@ download installers from the following URLs This will create InvokeAI folder where you will follow the rest of the steps. -3. Enter the newly-created InvokeAI folder. From this step forward make sure - that you are working in the InvokeAI directory! +4. Enter the newly-created InvokeAI folder: ```bash cd InvokeAI ``` -4. 
Select the appropriate environment file: - We have created a series of environment files suited for different - operating systems and GPU hardware. They are located in the + From this step forward make sure that you are working in the InvokeAI + directory! + +5. Select the appropriate environment file: + + We have created a series of environment files suited for different operating + systems and GPU hardware. They are located in the `environments-and-requirements` directory: +
+ + | filename | OS | + | :----------------------: | :----------------------------: | + | environment-lin-amd.yml | Linux with an AMD (ROCm) GPU | + | environment-lin-cuda.yml | Linux with an NVIDIA CUDA GPU | + | environment-mac.yml | Macintosh | + | environment-win-cuda.yml | Windows with an NVIDIA CUDA GPU | + +
+ + Choose the appropriate environment file for your system and link or copy it + to `environment.yml` in InvokeAI's top-level directory. To do so, run the + following command from the repository root: + + !!! example "" + + === "Macintosh and Linux" + + !!! todo + + Replace `xxx` and `yyy` with the appropriate OS and GPU codes as seen in the + table above. + + ```bash + ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml + ``` + + When this is done, confirm that a file `environment.yml` has been linked in + the InvokeAI root directory and that it points to the correct file in the + `environments-and-requirements` directory. + + ```bash + ls -la + ``` + + === "Windows" + + Since it requires admin privileges to create links, we will use the copy (`cp`) + command to create your `environment.yml`: + + ```cmd + cp environments-and-requirements\environment-win-cuda.yml environment.yml + ``` + + Afterwards verify that the file `environment.yml` has been created, either via the + Explorer or by using the command `dir` from the terminal: + + ```cmd + dir + ``` + +6. Create the conda environment: + ```bash - environment-lin-amd.yml # Linux with an AMD (ROCm) GPU - environment-lin-cuda.yml # Linux with an NVIDIA CUDA GPU - environment-mac.yml # Macintoshes with MPS acceleration - environment-win-cuda.yml # Windows with an NVIDA CUDA GPU + conda env update ``` - Select the appropriate environment file, and make a link to it - from `environment.yml` in the top-level InvokeAI directory. The - command to do this from the top-level directory is: + This will create a new environment named `invokeai` and install all InvokeAI + dependencies into it. If something goes wrong, you should take a look at + [troubleshooting](#troubleshooting). - !!! todo "Macintosh and Linux" - - ```bash - ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml - ``` +7. Activate the `invokeai` environment: - Replace `xxx` and `yyy` with the appropriate OS and GPU codes.
+ In order to use the newly created environment you will first need to + activate it: - !!! todo "Windows requires admin privileges to make links, so we use the copy (cp) command" - - ```bash - cp environments-and-requirements\environment-win-cuda.yml environment.yml - ``` + ```bash + conda activate invokeai + ``` - When this is done, confirm that a file `environment.yml` has been created in - the InvokeAI root directory and that it points to the correct file in the - `environments-and-requirements`. + Your command-line prompt should change to indicate that `invokeai` is active + by prepending `(invokeai)`. -4. Run conda: +8. Pre-load the model weights files: - ```bash - conda env update - ``` + !!! tip - This will create a new environment named `invokeai` and install all - InvokeAI dependencies into it. + If you have already downloaded the weights file(s) for another Stable + Diffusion distribution, you may skip this step (by selecting "skip" when + prompted) and configure InvokeAI to use the previously-downloaded files. The + process for this is described [here](INSTALLING_MODELS.md). - If something goes wrong at this point, see - [troubleshooting](#Troubleshooting). + ```bash + python scripts/preload_models.py + ``` -5. Activate the `invokeai` environment: + The script `preload_models.py` will interactively guide you through the + process of downloading and installing the weights files needed for InvokeAI. + Note that the main Stable Diffusion weights file is protected by a license + agreement that you have to agree to. The script will list the steps you need + to take to create an account on the site that hosts the weights files, + accept the agreement, and provide an access token that allows InvokeAI to + legally download and install the weights files. - ```bash - conda activate invokeai - ``` + If you get an error message about a module not being installed, check that + the `invokeai` environment is active and if not, repeat step 7.
- Your command-line prompt should change to indicate that `invokeai` is active. +9. Run the command-line- or the web- interface: -6. Load the model weights files: + !!! example "" - ```bash - python scripts/preload_models.py - ``` + !!! warning "Make sure that the conda environment is activated, which should create `(invokeai)` in front of your prompt!" - (Windows users should use the backslash instead of the slash) + === "CLI" - The script `preload_models.py` will interactively guide you through - downloading and installing the weights files needed for - InvokeAI. Note that the main Stable Diffusion weights file is - protected by a license agreement that you have to agree to. The - script will list the steps you need to take to create an account on - the site that hosts the weights files, accept the agreement, and - provide an access token that allows InvokeAI to legally download - and install the weights files. + ```bash + python scripts/invoke.py + ``` - If you have already downloaded the weights file(s) for another - Stable Diffusion distribution, you may skip this step (by selecting - "skip" when prompted) and configure InvokeAI to use the - previously-downloaded files. The process for this is described in - [INSTALLING_MODELS.md]. + === "local Webserver" - If you get an error message about a module not being installed, - check that the `invokeai` environment is active and if not, repeat - step 5. + ```bash + python scripts/invoke.py --web + ``` -7. Run the command-line interface or the web interface: + === "Public Webserver" - ```bash - python scripts/invoke.py # command line - python scripts/invoke.py --web # web interface - ``` + ```bash + python scripts/invoke.py --web --host 0.0.0.0 + ``` - (Windows users replace backslash with forward slash) - - If you choose the run the web interface, point your browser at - http://localhost:9090 in order to load the GUI. 
+ If you choose to run the web interface, point your browser at + http://localhost:9090 in order to load the GUI. -8. Render away! +10. Render away! - Browse the features listed in the [Stable Diffusion Toolkit - Docs](https://invoke-ai.git) to learn about all the things you can - do with InvokeAI. + Browse the [features](../features) section to learn about all the things you + can do with InvokeAI. - Note that some GPUs are slow to warm up. In particular, when using - an AMD card with the ROCm driver, you may have to wait for over a - minute the first time you try to generate an image. Fortunately, after - the warm up period rendering will be fast. + Note that some GPUs are slow to warm up. In particular, when using an AMD + card with the ROCm driver, you may have to wait for over a minute the first + time you try to generate an image. Fortunately, after the warm up period + rendering will be fast. -9. Subsequently, to relaunch the script, be sure to run "conda - activate invokeai", enter the `InvokeAI` directory, and then launch - the invoke script. If you forget to activate the 'invokeai' - environment, the script will fail with multiple `ModuleNotFound` - errors. +11. Subsequently, to relaunch the script, be sure to run "conda activate + invokeai", enter the `InvokeAI` directory, and then launch the invoke + script. If you forget to activate the 'invokeai' environment, the script + will fail with multiple `ModuleNotFound` errors. ## Updating to newer versions of the script @@ -184,185 +225,192 @@ conda env update python scripts/preload_models.py --no-interactive #optional ``` -This will bring your local copy into sync with the remote one. The -last step may be needed to take advantage of new features or released -models. The `--no-interactive` flag will prevent the script from -prompting you to download the big Stable Diffusion weights files. +This will bring your local copy into sync with the remote one.
The last step may +be needed to take advantage of new features or released models. The +`--no-interactive` flag will prevent the script from prompting you to download +the big Stable Diffusion weights files. ## pip Install -To install InvokeAI with only the PIP package manager, please follow -these steps: +To install InvokeAI with only the PIP package manager, please follow these +steps: -1. Make sure you are using Python 3.9 or higher. The rest of the install - procedure depends on this: - - ```bash - python -V - ``` - -2. Install the `virtualenv` tool if you don't have it already: - ```bash - pip install virtualenv - ``` - -3. From within the InvokeAI top-level directory, create and activate a - virtual environment named `invokeai`: - - ```bash - virtualenv invokeai - source invokeai/bin/activate - ``` - -4. Pick the correct `requirements*.txt` file for your hardware and -operating system. - - We have created a series of environment files suited for different - operating systems and GPU hardware. They are located in the - `environments-and-requirements` directory: +1. Make sure you are using Python 3.9 or higher. The rest of the install + procedure depends on this: ```bash - requirements-lin-amd.txt # Linux with an AMD (ROCm) GPU - requirements-lin-arm64.txt # Linux running on arm64 systems - requirements-lin-cuda.txt # Linux with an NVIDIA (CUDA) GPU - requirements-mac-mps-cpu.txt # Macintoshes with MPS acceleration - requirements-lin-win-colab-cuda.txt # Windows with an NVIDA (CUDA) GPU - # (supports Google Colab too) + python -V ``` - Select the appropriate requirements file, and make a link to it - from `environment.txt` in the top-level InvokeAI directory. The - command to do this from the top-level directory is: +2. Install the `virtualenv` tool if you don't have it already: - !!! 
todo "Macintosh and Linux" - - ```bash - ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt - ``` + ```bash + pip install virtualenv + ``` - Replace `xxx` and `yyy` with the appropriate OS and GPU codes. +3. From within the InvokeAI top-level directory, create and activate a virtual + environment named `invokeai`: - !!! todo "Windows requires admin privileges to make links, so we use the copy (cp) command instead" - - ```bash - cp environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt - ``` + ```bash + virtualenv invokeai + source invokeai/bin/activate + ``` - Note that the order of arguments is reversed between the Linux/Mac and Windows - commands! +4. Pick the correct `requirements*.txt` file for your hardware and operating + system. - Please do not link directly to the file - `environments-and-requirements/requirements.txt`. This is a base requirements - file that does not have the platform-specific libraries. + We have created a series of environment files suited for different operating + systems and GPU hardware. They are located in the + `environments-and-requirements` directory: - When this is done, confirm that a file `requirements.txt` has been - created in the InvokeAI root directory and that it points to the - correct file in the `environments-and-requirements`. +
-5. Run PIP + | filename | OS | + | :---------------------------------: | :-------------------------------------------------------------: | + | requirements-lin-amd.txt | Linux with an AMD (ROCm) GPU | + | requirements-lin-arm64.txt | Linux running on arm64 systems | + | requirements-lin-cuda.txt | Linux with an NVIDIA (CUDA) GPU | + | requirements-mac-mps-cpu.txt | Macintoshes with MPS acceleration | + | requirements-lin-win-colab-cuda.txt | Windows with an NVIDA (CUDA) GPU
(supports Google Colab too) | - Be sure that the `invokeai` environment is active before doing - this: +
- ```bash - pip install --prefer-binary -r requirements.txt - ``` + Select the appropriate requirements file, and make a link to it from + `requirements.txt` in the top-level InvokeAI directory. The command to do + this from the top-level directory is: + + !!! example "" + + === "Macintosh and Linux" + + !!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes." + + ```bash + ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt + ``` + + === "Windows" + + !!! info "since admin privileges are required to make links, so we use the copy (`cp`) command instead" + + ```cmd + cp environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt + ``` + + !!! warning + + Please do not link or copy `environments-and-requirements/requirements.txt`. + This is a base requirements file that does not have the platform-specific + libraries. + + When this is done, confirm that a file named `requirements.txt` has been + created in the InvokeAI root directory and that it points to the correct + file in `environments-and-requirements`. + +5. Run PIP + + Be sure that the `invokeai` environment is active before doing this: + + ```bash + pip install --prefer-binary -r requirements.txt + ``` + +--- ## Troubleshooting Here are some common issues and their suggested solutions. -### Conda install +### Conda -1. Conda fails before completing `conda update`: +#### Conda fails before completing `conda update` - The usual source of these errors is a package - incompatibility. While we have tried to minimize these, over time - packages get updated and sometimes introduce incompatibilities. +The usual source of these errors is a package incompatibility. While we have +tried to minimize these, over time packages get updated and sometimes introduce +incompatibilities. 
- We suggest that you search - [Issues](https://github.com/invoke-ai/InvokeAI/issues) or the - "bugs-and-support" channel of the [InvokeAI - Discord](https://discord.gg/ZmtBAhwWhy). +We suggest that you search +[Issues](https://github.com/invoke-ai/InvokeAI/issues) or the "bugs-and-support" +channel of the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy). - You may also try to install the broken packages manually using PIP. To do this, activate - the `invokeai` environment, and run `pip install` with the name and version of the - package that is causing the incompatibility. For example: +You may also try to install the broken packages manually using PIP. To do this, +activate the `invokeai` environment, and run `pip install` with the name and +version of the package that is causing the incompatibility. For example: - ```bash - pip install test-tube==0.7.5 - ``` +```bash +pip install test-tube==0.7.5 +``` - You can keep doing this until all requirements are satisfied and - the `invoke.py` script runs without errors. Please report to - [Issues](https://github.com/invoke-ai/InvokeAI/issues) what you - were able to do to work around the problem so that others can - benefit from your investigation. +You can keep doing this until all requirements are satisfied and the `invoke.py` +script runs without errors. Please report to +[Issues](https://github.com/invoke-ai/InvokeAI/issues) what you were able to do +to work around the problem so that others can benefit from your investigation. -2. `preload_models.py` or `invoke.py` crashes at an early stage +#### `preload_models.py` or `invoke.py` crashes at an early stage - This is usually due to an incomplete or corrupted Conda install. - Make sure you have linked to the correct environment file and run - `conda update` again. +This is usually due to an incomplete or corrupted Conda install. Make sure you +have linked to the correct environment file and run `conda update` again. 
- If the problem persists, a more extreme measure is to clear Conda's - caches and remove the `invokeai` environment: +If the problem persists, a more extreme measure is to clear Conda's caches and +remove the `invokeai` environment: - ```bash - conda deactivate - conda env remove -n invokeai - conda clean -a - conda update - ``` +```bash +conda deactivate +conda env remove -n invokeai +conda clean -a +conda update +``` - This removes all cached library files, including ones that may have - been corrupted somehow. (This is not supposed to happen, but does - anyway). - -3. `invoke.py` crashes at a later stage. +This removes all cached library files, including ones that may have been +corrupted somehow. (This is not supposed to happen, but does anyway). - If the CLI or web site had been working ok, but something - unexpected happens later on during the session, you've encountered - a code bug that is probably unrelated to an install issue. Please - search [Issues](https://github.com/invoke-ai/InvokeAI/issues), file - a bug report, or ask for help on [Discord](https://discord.gg/ZmtBAhwWhy) +#### `invoke.py` crashes at a later stage -4. My renders are running very slowly! +If the CLI or web site had been working ok, but something unexpected happens +later on during the session, you've encountered a code bug that is probably +unrelated to an install issue. Please search +[Issues](https://github.com/invoke-ai/InvokeAI/issues), file a bug report, or +ask for help on [Discord](https://discord.gg/ZmtBAhwWhy) - You may have installed the wrong torch (machine learning) package, - and the system is running on CPU rather than the GPU. To check, - look at the log messages that appear when `invoke.py` is first - starting up. One of the earlier lines should say `Using device type - cuda`. On AMD systems, it will also say "cuda", and on Macintoshes, - it should say "mps". If instead the message says it is running on - "cpu", then you may need to install the correct torch library. 
+#### My renders are running very slowly - You may be able to fix this by installing a different torch - library. Here are the magic incantations for Conda and PIP. +You may have installed the wrong torch (machine learning) package, and the +system is running on CPU rather than the GPU. To check, look at the log messages +that appear when `invoke.py` is first starting up. One of the earlier lines +should say `Using device type cuda`. On AMD systems, it will also say "cuda", +and on Macintoshes, it should say "mps". If instead the message says it is +running on "cpu", then you may need to install the correct torch library. - !!! todo "For CUDA systems" +You may be able to fix this by installing a different torch library. Here are +the magic incantations for Conda and PIP. - (conda) - ```bash - conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia - ``` +!!! todo "For CUDA systems" - (pip) - ```bash - pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116 - ``` + - conda - !!! todo "For AMD systems" + ```bash + conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia + ``` - (conda) - ```bash - conda activate invokeai - pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/ - ``` + - pip - (pip) - ```bash - pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/ - ``` + ```bash + pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116 + ``` - More information and troubleshooting tips can be found at https://pytorch.org. \ No newline at end of file +!!! 
todo "For AMD systems" + + - conda + + ```bash + conda activate invokeai + pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/ + ``` + + - pip + + ```bash + pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/ + ``` + +More information and troubleshooting tips can be found at https://pytorch.org. diff --git a/docs/installation/INSTALL_SOURCE.md b/docs/installation/INSTALL_SOURCE.md index 6e3fa5f9ee..e86fccde8b 100644 --- a/docs/installation/INSTALL_SOURCE.md +++ b/docs/installation/INSTALL_SOURCE.md @@ -1,148 +1,137 @@ --- -title: The InvokeAI Source Installer +title: Source Installer --- +# The InvokeAI Source Installer + ## Introduction -The source installer is a shell script that attempts to automate every -step needed to install and run InvokeAI on a stock computer running -recent versions of Linux, MacOSX or Windows. It will leave you with a -version that runs a stable version of InvokeAI with the option to -upgrade to experimental versions later. It is not as foolproof as the -[InvokeAI installer](INSTALL_INVOKE.md) +The source installer is a shell script that attempts to automate every step +needed to install and run InvokeAI on a stock computer running recent versions +of Linux, MacOS or Windows. It will leave you with a version that runs a stable +version of InvokeAI with the option to upgrade to experimental versions later. +It is not as foolproof as the [InvokeAI installer](INSTALL_INVOKE.md) -Before you begin, make sure that you meet the [hardware -requirements](index.md#Hardware_Requirements) and has the appropriate -GPU drivers installed. In particular, if you are a Linux user with an -AMD GPU installed, you may need to install the [ROCm -driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html). 
+Before you begin, make sure that you meet the +[hardware requirements](index.md#Hardware_Requirements) and have the appropriate +GPU drivers installed. In particular, if you are a Linux user with an AMD GPU +installed, you may need to install the +[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html). -Installation requires roughly 18G of free disk space to load the -libraries and recommended model weights files. +Installation requires roughly 18G of free disk space to load the libraries and +recommended model weights files. ## Walk through -Though there are multiple steps, there really is only one click -involved to kick off the process. +Though there are multiple steps, there really is only one click involved to kick +off the process. -1. The source installer is distributed in ZIP files. Go to the [latest - release](https://github.com/invoke-ai/InvokeAI/releases/latest), and - look for a series of files named: +1. The source installer is distributed in ZIP files. Go to the + [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and + look for a series of files named: - - invokeAI-src-installer-mac.zip - - invokeAI-src-installer-windows.zip - - invokeAI-src-installer-linux.zip + - invokeAI-src-installer-mac.zip + - invokeAI-src-installer-windows.zip + - invokeAI-src-installer-linux.zip -Download the one that is appropriate for your operating system. + Download the one that is appropriate for your operating system. -2. Unpack the zip file into a directory that has at least 18G of free - space. Do *not* unpack into a directory that has an earlier version of - InvokeAI. +2. Unpack the zip file into a directory that has at least 18G of free space. Do + _not_ unpack into a directory that has an earlier version of InvokeAI. - This will create a new directory named "InvokeAI". 
This example - shows how this would look using the `unzip` command-line tool, - but you may use any graphical or command-line Zip extractor: - - ```bash - C:\Documents\Linco> unzip invokeAI-windows.zip - Archive: C: \Linco\Downloads\invokeAI-linux.zip - creating: invokeAI\ - inflating: invokeAI\install.bat - inflating: invokeAI\readme.txt - ``` + This will create a new directory named "InvokeAI". This example shows how + this would look using the `unzip` command-line tool, but you may use any + graphical or command-line Zip extractor: -3. If you are using a desktop GUI, double-click the installer file. - It will be named `install.bat` on Windows systems and `install.sh` - on Linux and Macintosh systems. + ```cmd + C:\Documents\Linco> unzip invokeAI-windows.zip + Archive: C:\Linco\Downloads\invokeAI-windows.zip + creating: invokeAI\ + inflating: invokeAI\install.bat + inflating: invokeAI\readme.txt + ``` -4. Alternatively, form the command line, run the shell script or .bat - file: +3. If you are using a desktop GUI, double-click the installer file. It will be + named `install.bat` on Windows systems and `install.sh` on Linux and + Macintosh systems. - ```bash - C:\Documents\Linco> cd invokeAI - C:\Documents\Linco> install.bat - ``` +4. Alternatively, from the command line, run the shell script or .bat file: -5. Sit back and let the install script work. It will install various - binary requirements including Conda, Git and Python, then download - the current InvokeAI code and install it along with its - dependencies. + ```cmd + C:\Documents\Linco> cd invokeAI + C:\Documents\Linco\invokeAI> install.bat + ``` -6. After installation completes, the installer will launch a script - called `preload_models.py`, which will guide you through the - first-time process of selecting one or more Stable Diffusion model - weights files, downloading and configuring them. +5. Sit back and let the install script work. 
It will install various binary + requirements including Conda, Git and Python, then download the current + InvokeAI code and install it along with its dependencies. +6. After installation completes, the installer will launch a script called + `preload_models.py`, which will guide you through the first-time process of + selecting one or more Stable Diffusion model weights files, downloading and + configuring them. - Note that the main Stable Diffusion weights file is protected by a - license agreement that you must agree to in order to use. The - script will list the steps you need to take to create an account on - the official site that hosts the weights files, accept the - agreement, and provide an access token that allows InvokeAI to - legally download and install the weights files. + Note that the main Stable Diffusion weights file is protected by a license + agreement that you must accept in order to use it. The script will list the + steps you need to take to create an account on the official site that hosts + the weights files, accept the agreement, and provide an access token that + allows InvokeAI to legally download and install the weights files. - If you have already downloaded the weights file(s) for another - Stable Diffusion distribution, you may skip this step (by selecting - "skip" when prompted) and configure InvokeAI to use the - previously-downloaded files. The process for this is described in - [INSTALLING_MODELS.md]. - 7. The script will now exit and you'll be ready to generate some - images. The invokeAI directory will contain numerous files. Look - for a shell script named `invoke.sh` (Linux/Mac) or `invoke.bat` - (Windows). Launch the script by double-clicking it or typing - its name at the command-line: + If you have already downloaded the weights file(s) for another Stable + Diffusion distribution, you may skip this step (by selecting "skip" when + prompted) and configure InvokeAI to use the previously-downloaded files. 
The + process for this is described in [Installing Models](INSTALLING_MODELS.md). - ```bash - C:\Documents\Linco\invokeAI> cd invokeAI - C:\Documents\Linco\invokeAI> invoke.bat - ``` +7. The script will now exit and you'll be ready to generate some images. The + invokeAI directory will contain numerous files. Look for a shell script + named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows). Launch the script + by double-clicking it or typing its name at the command-line: - The `invoke.bat` (`invoke.sh`) script will give you the choice of - starting (1) the command-line interface, or (2) the web GUI. If you - start the latter, you can load the user interface by pointing your - browser at http://localhost:9090. + ```cmd + C:\Documents\Linco> cd invokeAI + C:\Documents\Linco\invokeAI> invoke.bat + ``` - The `invoke` script also offers you a third option labeled "open - the developer console". If you choose this option, you will be - dropped into a command-line interface in which you can run python - commands directly, access developer tools, and launch InvokeAI - with customized options. To do the latter, you would launch the - script `scripts/invoke.py` as shown in this example: +The `invoke.bat` (`invoke.sh`) script will give you the choice of starting (1) +the command-line interface, or (2) the web GUI. If you start the latter, you can +load the user interface by pointing your browser at http://localhost:9090. - ```bash - python scripts\invoke.py --web --max_load_models=3 \ - --model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos - ``` +The `invoke` script also offers you a third option labeled "open the developer +console". If you choose this option, you will be dropped into a command-line +interface in which you can run python commands directly, access developer tools, +and launch InvokeAI with customized options. 
To do the latter, you would launch +the script `scripts/invoke.py` as shown in this example: - These options are described in detail in the [Command-Line - Interface](../features/CLI.md) documentation. +```cmd +python scripts/invoke.py --web --max_load_models=3 \ + --model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos +``` + +These options are described in detail in the +[Command-Line Interface](../features/CLI.md) documentation. ## Updating to newer versions -This section describes how to update InvokeAI to new versions of the -software. +This section describes how to update InvokeAI to new versions of the software. ### Updating the stable version -This distribution is changing rapidly, and we add new features on a -daily basis. To update to the latest released version (recommended), -run the `update.sh` (Linux/Mac) or `update.bat` (Windows) -scripts. This will fetch the latest release and re-run the -`preload_models` script to download any updated models files that may -be needed. You can also use this to add additional models that you did -not select at installation time. +This distribution is changing rapidly, and we add new features on a daily basis. +To update to the latest released version (recommended), run the `update.sh` +(Linux/Mac) or `update.bat` (Windows) scripts. This will fetch the latest +release and re-run the `preload_models` script to download any updated models +files that may be needed. You can also use this to add additional models that +you did not select at installation time. ### Updating to the development version -There may be times that there is a feature in the `development` branch -of InvokeAI that you'd like to take advantage of. Or perhaps there is -a branch that corrects an annoying bug. To do this, you will use the -developer's console. +There may be times that there is a feature in the `development` branch of +InvokeAI that you'd like to take advantage of. Or perhaps there is a branch that +corrects an annoying bug. 
To do this, you will use the developer's console. -From within the invokeAI directory, run the command `invoke.sh` -(Linux/Mac) or `invoke.bat` (Windows) and selection option (3) to open -the developers console. Then run the following command to get the -`development branch`: +From within the invokeAI directory, run the command `invoke.sh` (Linux/Mac) or +`invoke.bat` (Windows) and select option (3) to open the developer's console. +Then run the following command to get the `development` branch: ```bash git checkout development @@ -150,19 +139,18 @@ git pull conda env update ``` -You can now close the developer console and run `invoke` as before. -If you get complaints about missing models, then you may need to do -the additional step of running `preload_models.py`. This happens -relatively infrequently. To do this, simply open up the developer's -console again and type `python scripts/preload_models.py`. +You can now close the developer's console and run `invoke` as before. If you get +complaints about missing models, then you may need to do the additional step of +running `preload_models.py`. This happens relatively infrequently. To do this, +simply open up the developer's console again and type +`python scripts/preload_models.py`. ## Troubleshooting -If you run into problems during or after installation, the InvokeAI -team is available to help you. 
Either create an +[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or +make a request for help on the "bugs-and-support" channel of our +[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer +organization, but typically somebody will be available to help you within 24 +hours, and often much sooner. diff --git a/docs/installation/INSTALL.md b/docs/installation/index.md similarity index 97% rename from docs/installation/INSTALL.md rename to docs/installation/index.md index ea211696a2..8058644b5e 100644 --- a/docs/installation/INSTALL.md +++ b/docs/installation/index.md @@ -1,9 +1,7 @@ --- -title: Installation Overview +title: Overview --- -## Installation - We offer several ways to install InvokeAI, each one suited to your experience and preferences.
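The FAQ entry at the top of this diff tells readers to check the `invoke.py` startup log for a line of the form `Using device type ...`. As a rough illustration of that check (a sketch only; `device_from_log` is a hypothetical helper, not part of the InvokeAI codebase), the device name can be pulled out of such a log line like this:

```python
def device_from_log(line: str) -> str:
    """Return the torch device named in an invoke.py startup log line.

    Assumes the line contains the phrase 'Using device type <name>' as
    described in the FAQ entry above; returns 'unknown' otherwise.
    """
    marker = "Using device type "
    start = line.find(marker)
    if start == -1:
        return "unknown"
    # Device name is the first whitespace-delimited token after the marker.
    tail = line[start + len(marker):].split()
    return tail[0] if tail else "unknown"


# 'cuda' means the GPU is in use (ROCm systems also report 'cuda');
# 'mps' is the Apple backend; 'cpu' suggests the wrong torch package.
print(device_from_log(">> Using device type cuda"))  # cuda
```

A `cpu` result is the case where the conda/pip reinstall commands shown in the FAQ hunk above are the suggested fix.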