directory cleanup; working on install docs
.gitignore (vendored, 7 changes)
@@ -221,6 +221,9 @@ models/gfpgan
invokeai.init

# ignore environment.yml and requirements.txt
# these are to be copied from environments-and-requirements
# these are links to the real files in environments-and-requirements
environment.yml
requirements.txt
requirements.txt

# this may be present if the user created a venv
invokeai
(17 image assets included in this changeset; file sizes unchanged before and after)
@@ -94,6 +94,7 @@ installation instructions below.
You will need one of the following:

- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more of VRAM.
- :simple-amd: An AMD-based graphics card with 4 GB or more of VRAM (Linux only).
- :fontawesome-brands-apple: An Apple computer with an M1 chip.

### :fontawesome-solid-memory: Memory
docs/installation/INSTALL.md (new file, 58 lines)
@@ -0,0 +1,58 @@
---
title: Installation Overview
---

## Installation

We offer several ways to install InvokeAI, each one suited to your experience and preferences.

1. [1-click installer](INSTALL_1CLICK.md)

This is an automated shell script that will handle installation of all dependencies for you, and is recommended for those who have limited or no experience with the Python programming language, are not currently interested in contributing to the project, and just want the thing to install and run. In this version, you interact with the web server and command-line clients through a shell script named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows), and perform updates using `update.sh` or `update.bat` (see the short launch example after this list).

2. [Pre-compiled PIP installer](INSTALL_PCP.md)

This is a series of installer files for which all the requirements for InvokeAI have been precompiled, thereby preventing the conflicts that sometimes occur when an external library is changed unexpectedly. It will leave you with an environment in which you interact directly with the scripts for running the web and command-line clients, and you will update to new versions using standard developer commands.

This method is recommended for users with a bit of experience using the `git` and `pip` tools.

3. [Manual Installation](MANUAL_INSTALL.md)

In this method you will manually run the commands needed to install InvokeAI and its dependencies. We offer two recipes: one suited to those who prefer the `conda` tool, and one suited to those who prefer `pip` and Python virtual environments.

This method is recommended for users who have previously used `conda` or `pip`, for developers, and for anyone who wishes to remain on the cutting edge of future InvokeAI development and is willing to put up with occasional glitches and breakage.

4. [Docker Installation](INSTALL_DOCKER.md)

We also offer a method for creating Docker containers containing InvokeAI and its dependencies. This method is recommended for individuals who have experience with Docker containers and understand the pluses and minuses of a container-based install.

5. [Jupyter Notebooks Installation](INSTALL_JUPYTER.md)

This method is suitable for running InvokeAI on a Google Colab account. It is recommended for individuals who have previously worked with Colab and are comfortable with the Jupyter notebook environment.
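For the 1-click installer described in item 1, day-to-day use looks roughly like the sketch below. This is illustrative only; the installer's own documentation is authoritative, and the scripts live in the directory the installer created:

```bash
# Linux / Mac
./invoke.sh   # launch the command-line or web client
./update.sh   # pull in a newer version

# Windows
invoke.bat
update.bat
```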
@@ -1,12 +1,12 @@
---
title: Docker
Title: Docker
---

# :fontawesome-brands-docker: Docker

## Before you begin

- For end users: Install Stable Diffusion locally using the instructions for
- For end users: Install InvokeAI locally using the instructions for
  your OS.
- For developers: For container-related development tasks or for enabling easy
  deployment to other environments (on-premises or cloud), follow these

@@ -14,7 +14,7 @@ title: Docker

## Why containers?

They provide a flexible, reliable way to build and deploy Stable Diffusion.
They provide a flexible, reliable way to build and deploy InvokeAI.
You'll also use a Docker volume to store the largest model files and image
outputs as a first step in decoupling storage and compute. Future enhancements
can do this for other assets. See [Processes](https://12factor.net/processes)

@@ -22,7 +22,7 @@ under the Twelve-Factor App methodology for details on why running applications
in such a stateless fashion is important.

You can specify the target platform when building the image and running the
container. You'll also need to specify the Stable Diffusion requirements file
container. You'll also need to specify the InvokeAI requirements file
that matches the container's OS and the architecture it will run on.

Developers on Apple silicon (M1/M2): You
docs/installation/MANUAL_INSTALL.md (new file, 324 lines)
@@ -0,0 +1,324 @@
---
title: Manual Installation
---

# :fontawesome-brands-linux: Linux
# :fontawesome-brands-apple: macOS
# :fontawesome-brands-windows: Windows

## Introduction

You have two choices for manual installation: the [first one](#Conda_method) is based on the Anaconda3 package manager (`conda`), and the [second one](#PIP_method) uses basic Python virtual environment (`venv`) commands and the PIP package manager. Both methods require you to enter commands in a command-line shell, also known as the "console".

On Windows systems you are encouraged to install and use [PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3), which provides compatibility with Linux and Mac shells and nice features such as command-line completion.

### Conda method

1. Check that your system meets the [hardware requirements](index.md#Hardware_Requirements) and has the appropriate GPU drivers installed. In particular, if you are a Linux user with an AMD GPU, you may need to install the [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

InvokeAI does not yet support Windows machines with AMD GPUs due to the lack of ROCm driver support on this platform.

To confirm that the appropriate drivers are installed, run `nvidia-smi` on NVIDIA/CUDA systems, and `rocm-smi` on AMD systems. These should return information about the installed video card.

Macintosh users with MPS acceleration, and anybody with a CPU-only system, can skip this step.
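A minimal driver check, assuming the vendor utilities are on your `PATH` (output varies by system):

```bash
# NVIDIA/CUDA systems
nvidia-smi

# AMD/ROCm systems
rocm-smi
```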
2. You will need to install Anaconda3 and Git if they are not already available. Use your operating system's preferred installer, or download installers from the following URLs:

- Anaconda3 (https://www.anaconda.com/)
- git (https://git-scm.com/downloads)
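To confirm that both tools are available, you can print their versions (a quick sanity check; the exact version numbers will differ on your system):

```bash
conda --version
git --version
```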
3. Copy the InvokeAI source code from GitHub using `git`:

```bash
git clone https://github.com/invoke-ai/InvokeAI.git
```

This will create an `InvokeAI` folder where you will follow the rest of the steps.

4. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!

```bash
cd InvokeAI
```
5. Select the appropriate environment file:

We have created a series of environment files suited for different operating systems and GPU hardware. They are located in the `environments-and-requirements` directory:

```bash
environment-lin-amd.yml   # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml  # Linux with an NVIDIA CUDA GPU
environment-mac.yml       # Macintoshes with MPS acceleration
environment-win-cuda.yml  # Windows with an NVIDIA CUDA GPU
```

Select the appropriate environment file, and make a link to it from `environment.yml` in the top-level InvokeAI directory. The command to do this from the top-level directory is:

!!! todo "Macintosh and Linux"

```bash
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
```

Replace `xxx` and `yyy` with the appropriate OS and GPU codes.

!!! todo "Windows"

```bash
mklink environment.yml environments-and-requirements\environment-win-cuda.yml
```

Note that the order of arguments is reversed between the Linux/Mac and Windows commands!

When this is done, confirm that a file named `environment.yml` has been created in the InvokeAI root directory and that it points to the correct file in the `environments-and-requirements` directory.
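On Macintosh and Linux, one way to confirm the link is to list it; a small sketch, where the target shown depends on which file you chose:

```bash
ls -l environment.yml
# environment.yml -> environments-and-requirements/environment-lin-cuda.yml
```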
6. Run conda:

```bash
conda env update
```

This will create a new environment named `invokeai` and install all InvokeAI dependencies into it.

If something goes wrong at this point, see [troubleshooting](#Troubleshooting).

7. Activate the `invokeai` environment:

```bash
conda activate invokeai
```

Your command-line prompt should change to indicate that `invokeai` is active.

8. Load the model weights files:

```bash
python scripts/preload_models.py
```

(Windows users should use a backslash instead of the forward slash.)
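On Windows the same command would look like this (a sketch, run from the InvokeAI directory):

```bash
python scripts\preload_models.py
```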
The script `preload_models.py` will interactively guide you through downloading and installing the weights files needed for InvokeAI. Note that the main Stable Diffusion weights file is protected by a license agreement that you have to agree to. The script will list the steps you need to take to create an account on the site that hosts the weights files, accept the agreement, and provide an access token that allows InvokeAI to legally download and install the weights files.

If you get an error message about a module not being installed, check that the `invokeai` environment is active and, if not, repeat step 7.

9. Run the command-line interface or the web interface:

```bash
python scripts/invoke.py       # command line
python scripts/invoke.py --web # web interface
```

(Windows users should replace the forward slashes with backslashes.)

If you choose to run the web interface, point your browser at http://localhost:9090 in order to load the GUI.
10. Render away!

Browse the features listed in the [Stable Diffusion Toolkit Docs](https://invoke-ai.github.io/InvokeAI/) to learn about all the things you can do with InvokeAI.

Note that some GPUs are slow to warm up. In particular, when using an AMD card with the ROCm driver, you may have to wait for over a minute the first time you try to generate an image. Fortunately, after the warm-up period rendering will be fast.

11. Subsequently, to relaunch the script, be sure to run `conda activate invokeai`, enter the `InvokeAI` directory, and then launch the invoke script. If you forget to activate the `invokeai` environment, the script will fail with multiple `ModuleNotFound` errors.
## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method (step 3) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter the `InvokeAI` directory, and type:

```bash
git pull
conda env update
python scripts/preload_models.py --no-interactive  # optional
```

This will bring your local copy into sync with the remote one. The last step may be needed to take advantage of new features or newly released models. The `--no-interactive` flag will prevent the script from prompting you to download the big Stable Diffusion weights files.
## pip Install

To install InvokeAI with only the PIP package manager, please follow these steps:

1. Make sure you are using Python 3.9 or higher. Some InvokeAI features require this:

```bash
python -V
```

2. Install the `virtualenv` tool if you don't have it already:

```bash
pip install virtualenv
```

3. From within the InvokeAI top-level directory, create and activate a virtual environment named `invokeai`:

```bash
virtualenv invokeai
source invokeai/bin/activate
```
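On Windows the activation command is different; a minimal sketch, assuming the default `virtualenv` layout:

```bash
# from cmd or PowerShell in the InvokeAI directory
invokeai\Scripts\activate
```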
4. Pick the correct `requirements*.txt` file for your hardware and operating system.

We have created a series of requirements files suited for different operating systems and GPU hardware. They are located in the `environments-and-requirements` directory:

```bash
requirements-lin-amd.txt            # Linux with an AMD (ROCm) GPU
requirements-lin-arm64.txt          # Linux running on arm64 systems
requirements-lin-cuda.txt           # Linux with an NVIDIA (CUDA) GPU
requirements-mac-mps-cpu.txt        # Macintoshes with MPS acceleration
requirements-lin-win-colab-cuda.txt # Windows with an NVIDIA (CUDA) GPU
                                    # (supports Google Colab too)
```

Select the appropriate requirements file, and make a link to it from `requirements.txt` in the top-level InvokeAI directory. The command to do this from the top-level directory is:

!!! todo "Macintosh and Linux"

```bash
ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
```

Replace `xxx` and `yyy` with the appropriate OS and GPU codes.

!!! todo "Windows"

```bash
mklink requirements.txt environments-and-requirements\requirements-lin-win-colab-cuda.txt
```

Note that the order of arguments is reversed between the Linux/Mac and Windows commands!

Please do not link directly to the file `environments-and-requirements/requirements.txt`. This is a base requirements file that does not have the platform-specific libraries.

When this is done, confirm that a file named `requirements.txt` has been created in the InvokeAI root directory and that it points to the correct file in the `environments-and-requirements` directory.

5. Run PIP

Be sure that the `invokeai` environment is active before doing this:

```bash
pip install -r requirements.txt
```
## Troubleshooting

Here are some common issues and their suggested solutions.

### Conda install

1. Conda fails before completing `conda env update`:

The usual source of these errors is a package incompatibility. While we have tried to minimize these, over time packages get updated and sometimes introduce incompatibilities.

We suggest that you search [Issues](https://github.com/invoke-ai/InvokeAI/issues) or the Bug Report and Support channel of the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).

You may also try to install the broken packages manually using PIP. To do this, activate the `invokeai` environment, and run `pip install` with the name and version of the package that is causing the incompatibility. For example:

```bash
pip install test-tube==0.7.5
```

You can keep doing this until all requirements are satisfied and the `invoke.py` script runs without errors. Please report to [Issues](https://github.com/invoke-ai/InvokeAI/issues) what you were able to do to work around the problem so that others can benefit from your investigation.

2. `preload_models.py` or `invoke.py` crashes at an early stage.

This is usually due to an incomplete or corrupted Conda install. Make sure you have linked to the correct environment file and run `conda env update` again.

If the problem persists, a more extreme measure is to clear Conda's caches and remove the `invokeai` environment:

```bash
conda deactivate
conda env remove -n invokeai
conda clean -a
conda env update
```

This removes all cached library files, including ones that may have been corrupted somehow. (This is not supposed to happen, but it does anyway.)

3. `invoke.py` crashes at a later stage.

If the CLI or the web interface was working and then something unexpected happens later in the session, you have encountered a code bug that is probably unrelated to an install issue. Please search [Issues](https://github.com/invoke-ai/InvokeAI/issues), file a bug report, or ask for help on [Discord](https://discord.gg/ZmtBAhwWhy).
@@ -4,7 +4,6 @@ channels:
- conda-forge
dependencies:
- albumentations=0.4.3
- clip
- cudatoolkit
- einops=0.3.0
- eventlet

@@ -38,6 +37,7 @@ dependencies:
- realesrgan
- taming-transformers-rom1504
- test-tube>=0.7.5
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
- -e .
@@ -10,7 +10,6 @@ dependencies:
- pip:
- --extra-index-url https://download.pytorch.org/whl/rocm5.2/
- albumentations==0.4.3
- clip
- dependency_injector==4.40.0
- diffusers==0.6.0
- einops==0.3.0

@@ -40,6 +39,7 @@ dependencies:
- torchmetrics==0.7.0
- torchvision
- transformers==4.21.3
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- -e .
@@ -13,7 +13,6 @@ dependencies:
- cudatoolkit=11.6
- pip:
- albumentations==0.4.3
- clip
- dependency_injector==4.40.0
- diffusers==0.6.0
- einops==0.3.0

@@ -40,6 +39,7 @@ dependencies:
- torch-fidelity==0.3.0
- torchmetrics==0.7.0
- transformers==4.21.3
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
- -e .
@@ -10,7 +10,6 @@ dependencies:

- albumentations=1.2.1
- coloredlogs=15.0.1
- clip
- diffusers=0.6.0
- einops=0.4.1
- grpcio=1.46.4

@@ -48,9 +47,10 @@ dependencies:
- dependency_injector==4.40.0
- realesrgan==0.2.5.0
- test-tube==0.7.5
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/TencentARC/GFPGAN.git#egg=gfpgan
- git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/TencentARC/GFPGAN.git#egg=gfpgan
- git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
- -e .
variables:
PYTORCH_ENABLE_MPS_FALLBACK: 1
@@ -13,7 +13,6 @@ dependencies:
- cudatoolkit=11.6
- pip:
- albumentations==0.4.3
- clip
- dependency_injector==4.40.0
- diffusers==0.6.0
- einops==0.3.0

@@ -22,6 +21,7 @@ dependencies:
- flask_cors==3.0.10
- flask_socketio==5.3.0
- getpass_asterisk
- gfpgan
- imageio-ffmpeg==0.4.2
- imageio==2.9.0
- kornia==0.6.0

@@ -31,6 +31,7 @@ dependencies:
- pudb==2019.2
- pyreadline3
- pytorch-lightning==1.7.7
- realesrgan
- send2trash==1.8.0
- streamlit==1.12.0
- taming-transformers-rom1504

@@ -38,8 +39,7 @@ dependencies:
- torch-fidelity==0.3.0
- torchmetrics==0.7.0
- transformers==4.21.3
- git+https://github.com/invoke-ai/Real-ESRGAN.git#egg=realesrgan
- git+https://github.com/invoke-ai/GFPGAN.git#egg=gfpgan
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- -e .
@@ -1,6 +1,7 @@
-r requirements.txt
-r environments-and-requirements/requirements.txt

protobuf==3.19.6
torch<1.13.0
torchvision<0.14.0
# Get hardware-appropriate torch/torchvision
--extra-index-url https://download.pytorch.org/whl/rocm5.1.1 --trusted-host https://download.pytorch.org
torch
torchvision
-e .
environments-and-requirements/requirements-lin-arm64.txt (new file, 3 lines)
@@ -0,0 +1,3 @@
--pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
-r environments-and-requirements/requirements.txt
-e .
environments-and-requirements/requirements-lin-cuda.txt (new file, 2 lines)
@@ -0,0 +1,2 @@
-r environments-and-requirements/requirements.txt
-e .
@@ -1,7 +0,0 @@
-r requirements.txt

# Get hardware-appropriate torch/torchvision
--extra-index-url https://download.pytorch.org/whl/rocm5.1.1 --trusted-host https://download.pytorch.org
torch
torchvision
-e .
@@ -1,27 +0,0 @@
albumentations==0.4.3
einops==0.3.0
diffusers==0.6.0
huggingface-hub==0.8.1
imageio==2.9.0
imageio-ffmpeg==0.4.2
kornia==0.6.0
numpy==1.23.1
--pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
omegaconf==2.1.1
opencv-python==4.6.0.66
pillow==9.2.0
pudb==2019.2
torch==1.12.1
torchvision==0.13.0
pytorch-lightning==1.7.7
streamlit==1.12.0
taming-transformers-rom1504
test-tube>=0.7.5
torch-fidelity==0.3.0
torchmetrics==0.6.0
transformers==4.21.3
git+https://github.com/openai/CLIP.git@main#egg=clip
git+https://github.com/lstein/k-diffusion.git@master#egg=k-diffusion
git+https://github.com/TencentARC/GFPGAN.git#egg=gfpgan
git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
-e .
@@ -1,38 +0,0 @@
--prefer-binary

# pip will resolve the version which matches torch
albumentations
dependency_injector==4.40.0
diffusers
einops
eventlet
flask==2.1.3
flask_cors==3.0.10
flask_socketio==5.3.0
flaskwebgui==0.3.7
getpass_asterisk
huggingface-hub
imageio
imageio-ffmpeg
kornia
numpy
omegaconf
opencv-python
pillow
pip>=22
pudb
pyreadline3
pytorch-lightning==1.7.7
scikit-image>=0.19
send2trash
streamlit
taming-transformers-rom1504
test-tube
torch-fidelity
torchmetrics
transformers==4.21.*
git+https://github.com/openai/CLIP.git@main#egg=clip
git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
git+https://github.com/invoke-ai/Real-ESRGAN.git#egg=realesrgan
git+https://github.com/invoke-ai/GFPGAN.git#egg=gfpgan
git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
@@ -1,4 +1,4 @@
-r requirements.txt
-r environments-and-requirements/requirements.txt

protobuf==3.19.6
torch<1.13.0
@@ -1,4 +1,4 @@
-r requirements.txt
-r environments-and-requirements/requirements.txt

# Get hardware-appropriate torch/torchvision
--extra-index-url https://download.pytorch.org/whl/cu116 --trusted-host https://download.pytorch.org