Mirror of https://github.com/invoke-ai/InvokeAI
Merge branch 'main' into lstein-improve-migration

Commit 5fa8f8be43
@@ -1,21 +1,21 @@
-stable-diffusion-2.1-768:
+stable-diffusion-2_1-768:
   description: Stable Diffusion version 2.1 diffusers model, trained on 768x768 images (5.21 GB)
   repo_id: stabilityai/stable-diffusion-2-1
   format: diffusers
   recommended: True
-stable-diffusion-2.1-base:
+stable-diffusion-2_1-base:
   description: Stable Diffusion version 2.1 diffusers base model, trained on 512x512 images (5.21 GB)
   repo_id: stabilityai/stable-diffusion-2-1-base
   format: diffusers
   recommended: False
-stable-diffusion-1.5:
+stable-diffusion-1_5:
   description: Stable Diffusion version 1.5 weight file (4.27 GB)
   repo_id: runwayml/stable-diffusion-v1-5
   format: diffusers
   recommended: True
   vae:
     repo_id: stabilityai/sd-vae-ft-mse
-stable-diffusion-1.4:
+stable-diffusion-1_4:
   description: The original Stable Diffusion version 1.4 weight file (4.27 GB)
   repo_id: CompVis/stable-diffusion-v1-4
   recommended: False
@@ -24,7 +24,7 @@ stable-diffusion-1.4:
     repo_id: stabilityai/sd-vae-ft-mse
   width: 512
   height: 512
-inpainting-1.5:
+inpainting-1_5:
   description: RunwayML SD 1.5 model optimized for inpainting (ckpt version) (4.27 GB)
   repo_id: runwayml/stable-diffusion-inpainting
   config: v1-inpainting-inference.yaml
@@ -36,12 +36,12 @@ inpainting-1.5:
   recommended: True
   width: 512
   height: 512
-waifu-diffusion-1.4:
+waifu-diffusion-1_4:
   description: Latest waifu diffusion 1.4 (diffusers version)
   format: diffusers
   repo_id: hakurei/waifu-diffusion
   recommended: True
-waifu-diffusion-1.3:
+waifu-diffusion-1_3:
   description: Stable Diffusion 1.4 fine tuned on anime-styled images (ckpt version) (4.27 GB)
   repo_id: hakurei/waifu-diffusion-v1-3
   config: v1-inference.yaml
@@ -53,14 +53,14 @@ waifu-diffusion-1.3:
   recommended: False
   width: 512
   height: 512
-trinart-2.0:
+trinart-2_0:
   description: An SD model finetuned with ~40,000 assorted high resolution manga/anime-style pictures (2.13 GB)
   repo_id: naclbit/trinart_stable_diffusion_v2
   format: diffusers
   recommended: False
   vae:
     repo_id: stabilityai/sd-vae-ft-mse
-trinart_characters-2.0:
+trinart_characters-2_0:
   description: An SD model finetuned with 19.2M anime/manga style images (ckpt version) (4.27 GB)
   repo_id: naclbit/trinart_derrida_characters_v2_stable_diffusion
   config: v1-inference.yaml
@@ -72,19 +72,19 @@ trinart_characters-2.0:
   recommended: False
   width: 512
   height: 512
-anything-4.0:
+anything-4_0:
   description: High-quality, highly detailed anime style images with just a few prompts
   format: diffusers
   repo_id: andite/anything-v4.0
   recommended: False
-papercut-1.0:
+papercut-1_0:
   description: SD 1.5 fine-tuned for papercut art (use "PaperCut" in your prompts) (2.13 GB)
   repo_id: Fictiverse/Stable_Diffusion_PaperCut_Model
   format: diffusers
   vae:
     repo_id: stabilityai/sd-vae-ft-mse
   recommended: False
-voxel_art-1.0:
+voxel_art-1_0:
   description: Stable Diffusion trained on voxel art (use "VoxelArt" in your prompts) (4.27 GB)
   repo_id: Fictiverse/Stable_Diffusion_VoxelArt_Model
   config: v1-inference.yaml
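Every change in the configuration hunks above follows one pattern: a dot in the version part of a top-level model key becomes an underscore (for example `stable-diffusion-2.1-768` becomes `stable-diffusion-2_1-768`), while the indented fields underneath are untouched. The sketch below shows how such a rename could be applied mechanically; it is only an illustration, and the config path `configs/INITIAL_MODELS.yaml` is an assumption about where the file sits in a checkout.

```bash
# Illustrative sketch, not part of this commit: turn "name-X.Y:" keys into
# "name-X_Y:" at the top level of the models config. The anchored pattern
# leaves indented fields such as description: and repo_id: alone.
# The file path is assumed; point it at the config in your own checkout.
sed -E -i.bak 's/^([A-Za-z0-9_-]+)\.([0-9])/\1_\2/' configs/INITIAL_MODELS.yaml
```

The remaining hunks switch to the manual-installation guide and, at the end, to the Python requirements list.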
@@ -12,17 +12,18 @@ title: Installing Manually

 ## Introduction

-You have two choices for manual installation, the [first
-one](#PIP_method) uses basic Python virtual environment (`venv`)
-commands and the PIP package manager. The [second one](#Conda_method)
-based on the Anaconda3 package manager (`conda`). Both methods require
-you to enter commands on the terminal, also known as the "console".
+You have two choices for manual installation.
+The [first one](#pip-Install) uses basic Python virtual environment (`venv`)
+command and `pip` package manager.
+The [second one](#Conda-method) uses Anaconda3 package manager (`conda`).
+Both methods require you to enter commands on the terminal, also known as the
+"console".

-Note that the conda install method is currently deprecated and will not
-be supported at some point in the future.
+Note that the `conda` installation method is currently deprecated and will
+not be supported at some point in the future.

-On Windows systems you are encouraged to install and use the
-[Powershell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
+On Windows systems, you are encouraged to install and use the
+[PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
 which provides compatibility with Linux and Mac shells and nice
 features such as command-line completion.

@@ -37,7 +38,7 @@ manager, please follow these steps:
 ```bash
 python -V
 ```

 2. Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from
 GitHub:
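This hunk shows the Python version check and the start of the clone step, but the clone command itself lies outside the changed lines. For orientation only, a typical sequence looks like the following; the `.git` suffix and the `InvokeAI` target directory are the usual defaults rather than something this diff specifies.

```bash
# Check the interpreter, then fetch the source from the repository linked above.
python -V
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
```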
@@ -52,15 +53,15 @@ manager, please follow these steps:
 environment named `invokeai`:

 ```bash
-python -mvenv invokeai
+python -m venv invokeai
 source invokeai/bin/activate
 ```

 4. Make sure that pip is installed in your virtual environment an up to date:

 ```bash
-python -mensurepip --upgrade
-python -mpip install --upgrade pip
+python -m ensurepip --upgrade
+python -m pip install --upgrade pip
 ```

 5. Pick the correct `requirements*.txt` file for your hardware and operating
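Taken together, the corrected commands in this hunk amount to the short sequence below. It is a minimal sketch assuming a POSIX shell; on Windows PowerShell the activation script lives under `Scripts\` rather than `bin/`, which is standard `venv` behaviour rather than anything stated in the diff.

```bash
# Create and activate the virtual environment, then bring pip up to date.
python -m venv invokeai
source invokeai/bin/activate        # PowerShell: .\invokeai\Scripts\Activate.ps1
python -m ensurepip --upgrade
python -m pip install --upgrade pip
python -m pip --version             # should now report the pip inside ./invokeai
```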
@@ -199,20 +200,20 @@ manager, please follow these steps:
 You can permanently set the location of the runtime directory by setting the environment variable INVOKEAI_ROOT to the path of the directory.

 9. Render away!

 Browse the [features](../features/CLI.md) section to learn about all the things you
 can do with InvokeAI.

 Note that some GPUs are slow to warm up. In particular, when using an AMD
 card with the ROCm driver, you may have to wait for over a minute the first
-time you try to generate an image. Fortunately, after the warm up period
+time you try to generate an image. Fortunately, after the warm-up period
 rendering will be fast.

-10. Subsequently, to relaunch the script, be sure to run "conda activate
-invokeai", enter the `InvokeAI` directory, and then launch the invoke
-script. If you forget to activate the 'invokeai' environment, the script
-will fail with multiple `ModuleNotFound` errors.
+10. Subsequently, to relaunch the script, be sure to enter `InvokeAI` directory,
+activate the virtual environment, and then launch `invoke.py` script.
+If you forget to activate the virtual environment,
+the script will fail with multiple `ModuleNotFound` errors.

 !!! tip

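The rewritten step 10 replaces the old `conda activate invokeai` instruction with the virtual environment created earlier. A sketch of that relaunch sequence follows; it assumes the venv sits inside the checkout as in step 3, and the `scripts/invoke.py` path is an assumption, since the hunk only names `invoke.py`.

```bash
# Relaunch in a fresh terminal: enter the checkout, activate the venv, run the
# script. Skipping the activation is what triggers the ModuleNotFound errors.
cd InvokeAI
source invokeai/bin/activate
python scripts/invoke.py
```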
@@ -3,6 +3,7 @@ accelerate
 albumentations
 datasets
 diffusers[torch]~=0.11
+dnspython==2.2.1
 einops
 eventlet
 facexlib
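The only change to the requirements list is the new `dnspython==2.2.1` pin. One way to confirm the pin takes effect, assuming you install from whichever `requirements*.txt` variant fits your hardware (the filename below is a placeholder):

```bash
# Install the dependencies, then verify the pinned version is what got resolved.
python -m pip install -r requirements.txt
python -m pip show dnspython        # expect "Version: 2.2.1" per the pin above
```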