Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00

Merge branch 'spezialspezial-patch-9' into development

This commit is contained in commit 240e5486c8.
Changed file: .github/workflows/mkdocs-material.yml (vendored, 1 line added)
@@ -4,6 +4,7 @@ on:
   branches:
     - 'main'
     - 'development'
+    - 'release-candidate-2-1'
 
 jobs:
   mkdocs-material:
@@ -43,6 +43,7 @@ title: Manual Installation, Linux
    environment named `invokeai` and activate the environment.
 
    ```bash
+   (base) rm -rf src # (this is a precaution in case there is already a src directory)
    (base) ~/InvokeAI$ conda env create
    (base) ~/InvokeAI$ conda activate invokeai
    (invokeai) ~/InvokeAI$
@@ -51,50 +52,54 @@ title: Manual Installation, Linux
    After these steps, your command prompt will be prefixed by `(invokeai)` as shown
    above.
 
-6. Load a couple of small machine-learning models required by stable diffusion:
+6. Load the big stable diffusion weights files and a couple of smaller machine-learning models:
 
    ```bash
    (invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
   ```
 
    !!! note
+       This script will lead you through the process of creating an account on Hugging Face,
+       accepting the terms and conditions of the Stable Diffusion model license, and
+       obtaining an access token for downloading. It will then download and install the
+       weights files for you.
+
-       This step is necessary because I modified the original just-in-time
-       model loading scheme to allow the script to work on GPU machines that are not
-       internet connected. See [Preload Models](../features/OTHER.md#preload-models)
+       Please see [../features/INSTALLING_MODELS.md] for a manual process for doing the
+       same thing.
 
-7. Install the weights for the stable diffusion model.
-
-   - Sign up at https://huggingface.co
-   - Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
-   - Accept the terms and click Access Repository
-   - Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
-     and move it into this directory under `models/ldm/stable_diffusion_v1/v1-5-pruned-emaonly.ckpt`
-
-   There are many other models that you can use. Please see [../features/INSTALLING_MODELS.md]
-   for details.
-
-8. Start generating images!
-
-   ```bash
-   # for the pre-release weights use the -l or --liaon400m switch
-   (invokeai) ~/InvokeAI$ python3 scripts/invoke.py -l
-
-   # for the post-release weights do not use the switch
-   (invokeai) ~/InvokeAI$ python3 scripts/invoke.py
-
-   # for additional configuration switches and arguments, use -h or --help
-   (invokeai) ~/InvokeAI$ python3 scripts/invoke.py -h
-   ```
-
-9. Subsequently, to relaunch the script, be sure to run "conda activate invokeai" (step 5, second command), enter the `InvokeAI` directory, and then launch the invoke script (step 8). If you forget to activate the 'invokeai' environment, the script will fail with multiple `ModuleNotFound` errors.
+7. Start generating images!
+
+   # Command-line interface
+   (invokeai) python scripts/invoke.py
+
+   # or run the web interface on localhost:9090!
+   (invokeai) python scripts/invoke.py --web
+
+   # or run the web interface on your machine's network interface!
+   (invokeai) python scripts/invoke.py --web --host 0.0.0.0
+
+   To use an alternative model you may invoke the `!switch` command in
+   the CLI, or pass `--model <model_name>` during `invoke.py` launch for
+   either the CLI or the Web UI. See [Command Line
+   Client](../features/CLI.md#model-selection-and-importation). The
+   model names are defined in `configs/models.yaml`.
+
+9. Subsequently, to relaunch the script, be sure to run "conda
+   activate invokeai" (step 5, second command), enter the `InvokeAI`
+   directory, and then launch the invoke script (step 8). If you forget
+   to activate the 'invokeai' environment, the script will fail with
+   multiple `ModuleNotFound` errors.
 
 ## Updating to newer versions of the script
 
-This distribution is changing rapidly. If you used the `git clone` method (step 5) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI` and type:
+This distribution is changing rapidly. If you used the `git clone`
+method (step 5) to download the InvokeAI directory, then to update to
+the latest and greatest version, launch the Anaconda window, enter
+`InvokeAI` and type:
 
 ```bash
 (invokeai) ~/InvokeAI$ git pull
+(invokeai) ~/InvokeAI$ rm -rf src # prevents conda freezing errors
 (invokeai) ~/InvokeAI$ conda env update -f environment.yml
 ```
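The Hugging Face token flow that these revised instructions describe is implemented later in this same commit with `huggingface_hub`'s `HfFolder`. A minimal sketch of that caching pattern, using only calls that appear in the diff; the `input()` prompt wording is illustrative, not InvokeAI's actual text:

```python
from huggingface_hub import HfFolder

access_token = HfFolder.get_token()    # returns None when nothing is cached
if access_token is None:
    access_token = input('Paste your Hugging Face access token: ')
    HfFolder.save_token(access_token)  # cache the token for later runs
```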
@@ -19,24 +19,9 @@ an issue on Github and we will do our best to help.
 
 ## Installation
 
-First you need to download a large checkpoint file.
-
-1. Sign up at https://huggingface.co
-2. Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
-3. Accept the terms and click Access Repository
-4. Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
-   and move it into this directory under `models/ldm/stable_diffusion_v1/v1-5-pruned-emaonly.ckpt`
-
-There are many other models that you can try. Please see [../features/INSTALLING_MODELS.md]
-for details.
-
-While that is downloading, open Terminal and run the following
-commands one at a time, reading the comments and taking care to run
-the appropriate command for your Mac's architecture (Intel or M1).
-
 !!! todo "Homebrew"
 
-    If you have no brew installation yet (otherwise skip):
+    First you will install the "brew" package manager. Skip this if brew is already installed.
 
     ```bash title="install brew (and Xcode command line tools)"
     /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
@@ -100,25 +85,6 @@ the appropriate command for your Mac's architecture (Intel or M1).
     cd InvokeAI
     ```
 
-!!! todo "Wait until the checkpoint-file download finished, then proceed"
-
-    We will leave the big checkpoint wherever you stashed it for long-term storage,
-    and make a link to it from the repo's folder. This allows you to use it for
-    other repos, or if you need to delete Invoke AI, you won't have to download it again.
-
-    ```{.bash .annotate}
-    # Make the directory in the repo for the symlink
-    mkdir -p models/ldm/stable-diffusion-v1/
-
-    # This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
-    PATH_TO_CKPT="$HOME/Downloads" # (1)!
-
-    # Create a link to the checkpoint
-    ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
-    ```
-
-    1. replace `$HOME/Downloads` with the location where you actually stored the checkpoint (`sd-v1-4.ckpt`)
-
 !!! todo "Create the environment & install packages"
 
     === "M1 Mac"
@@ -137,25 +103,40 @@ the appropriate command for your Mac's architecture (Intel or M1).
     # Activate the environment (you need to do this every time you want to run SD)
     conda activate invokeai
 
-    # This will download some bits and pieces and may take a while
-    (invokeai) python scripts/preload_models.py
-
-    # Run SD!
-    (invokeai) python scripts/dream.py
-
-    # or run the web interface!
-    (invokeai) python scripts/invoke.py --web
-
-    # The original scripts should work as well.
-    (invokeai) python scripts/orig_scripts/txt2img.py \
-        --prompt "a photograph of an astronaut riding a horse" \
-        --plms
-    ```
-
 !!! info
 
     `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
     create -f environment-mac.yml` never finishing in some situations. So
-    it isn't required but wont hurt.
+    it isn't required but won't hurt.
 
+!!! todo "Download the model weight files"
+
+    The `preload_models.py` script downloads and installs the model weight
+    files for you. It will lead you through the process of getting a Hugging Face
+    account, accepting the Stable Diffusion model weight license agreement, and
+    creating a download token:
+
+    # This will take some time, depending on the speed of your internet connection
+    # and will consume about 10GB of space
+    (invokeai) python scripts/preload_models.py
+
+!!! todo "Run InvokeAI!"
+
+    # Command-line interface
+    (invokeai) python scripts/invoke.py
+
+    # or run the web interface on localhost:9090!
+    (invokeai) python scripts/invoke.py --web
+
+    # or run the web interface on your machine's network interface!
+    (invokeai) python scripts/invoke.py --web --host 0.0.0.0
+
+    To use an alternative model you may invoke the `!switch` command in
+    the CLI, or pass `--model <model_name>` during `invoke.py` launch for
+    either the CLI or the Web UI. See [Command Line
+    Client](../features/CLI.md#model-selection-and-importation). The
+    model names are defined in `configs/models.yaml`.
 
 ---
 
 ## Common problems
@@ -69,40 +69,42 @@ in the wiki
    environment file isn't specified, conda will default to `environment.yml`. You will need
    to provide the `-f` option if you wish to load a different environment file at any point.
 
-7. Run the command:
+7. Load the big stable diffusion weights files and a couple of smaller machine-learning models:
 
-   ```batch
-   python scripts\preload_models.py
-   ```
+   ```bash
+   (invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
+   ```
 
-   This installs several machine learning models that stable diffusion requires.
-
-   Note: This step is required. This was done because some users might be
-   blocked by firewalls or have limited internet connectivity for the models to
-   be downloaded just-in-time.
+   !!! note
+       This script will lead you through the process of creating an account on Hugging Face,
+       accepting the terms and conditions of the Stable Diffusion model license, and
+       obtaining an access token for downloading. It will then download and install the
+       weights files for you.
+
+       Please see [../features/INSTALLING_MODELS.md] for a manual process for doing the
+       same thing.
 
-8. Now you need to install the weights for the big stable diffusion model.
+8. Start generating images!
 
-   - Sign up at https://huggingface.co
-   - Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
-   - Accept the terms and click Access Repository
-   - Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
-     and move it into this directory under `models/ldm/stable_diffusion_v1/v1-5-pruned-emaonly.ckpt`
-
-   There are many other models that you can use. Please see [../features/INSTALLING_MODELS.md]
-   for details.
-
-9. Start generating images!
-
-   ```batch title="for the pre-release weights"
-   python scripts\invoke.py -l
-   ```
-
-   ```batch title="for the post-release weights"
-   python scripts\invoke.py
-   ```
-
-10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3), enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then launch the invoke script (step 9).
+   # Command-line interface
+   (invokeai) python scripts/invoke.py
+
+   # or run the web interface on localhost:9090!
+   (invokeai) python scripts/invoke.py --web
+
+   # or run the web interface on your machine's network interface!
+   (invokeai) python scripts/invoke.py --web --host 0.0.0.0
+
+   To use an alternative model you may invoke the `!switch` command in
+   the CLI, or pass `--model <model_name>` during `invoke.py` launch for
+   either the CLI or the Web UI. See [Command Line
+   Client](../features/CLI.md#model-selection-and-importation). The
+   model names are defined in `configs/models.yaml`.
+
+9. Subsequently, to relaunch the script, first activate the Anaconda
+   command window (step 3), enter the InvokeAI directory (step 5, `cd
+   \path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then
+   launch the invoke script (step 9).
 
 !!! tip "Tildebyte has written an alternative"
@@ -11,6 +11,7 @@ dependencies:
   - numpy=1.19
   - imageio=2.9.0
   - opencv=4.6.0
+  - getpass_asterisk
   - pillow=8.*
   - flask=2.1.*
   - flask_cors=3.0.10
@@ -22,6 +22,7 @@ dependencies:
   - diffusers=0.6.0
   - einops=0.4.1
   - grpcio=1.46.4
+  - getpass_asterisk
   - humanfriendly=10.0
   - imageio=2.21.2
   - imageio-ffmpeg=0.4.7
@@ -1,14 +1,14 @@
 name: invokeai
 channels:
   - pytorch
+  - nvidia
   - defaults
 dependencies:
   - python>=3.9
-  - pip=20.3
-  - cudatoolkit=11.3
-  - pytorch=1.11.0
-  - torchvision=0.12.0
-  - numpy=1.19.2
+  - pip=22.2.2
+  - numpy=1.23.3
+  - torchvision=0.14.0
+  - pytorch-cuda=11.7
   - pip:
     - albumentations==0.4.3
    - opencv-python==4.5.5.64
@@ -33,6 +33,7 @@ dependencies:
     - flask_cors==3.0.10
     - dependency_injector==4.40.0
     - eventlet
+    - getpass_asterisk
     - kornia==0.6.0
     - -e git+https://github.com/openai/CLIP.git@main#egg=clip
     - -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
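The environment change above swaps `cudatoolkit=11.3` for the `nvidia` channel's `pytorch-cuda=11.7` packaging and bumps the numpy and torchvision pins. A quick, hedged way to confirm the updated environment actually resolved a CUDA-enabled PyTorch build; this is plain PyTorch introspection, nothing InvokeAI-specific:

```python
import torch

print(torch.__version__)          # build pulled in by the new pins
print(torch.version.cuda)         # CUDA version the build targets, e.g. '11.7'
print(torch.cuda.is_available())  # True on a working NVIDIA setup
```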
frontend/dist/assets/index.4488003f.js (vendored, new file, 829 lines): file diff suppressed because one or more lines are too long
frontend/dist/assets/index.ae92a637.js (vendored, new file, 690 lines): file diff suppressed because one or more lines are too long
frontend/dist/assets/index.cc049b93.js (vendored, new file, 517 lines): file diff suppressed because one or more lines are too long
frontend/dist/assets/index.e2832fd4.js (vendored, new file, 517 lines): file diff suppressed because one or more lines are too long
@@ -80,14 +80,36 @@ if not exist ".git" (
 call conda activate
 
 @rem create the environment
+call conda env remove -n invokeai
 call conda env create
-call conda activate invokeai
+if "%ERRORLEVEL%" NEQ "0" (
+    echo ""
+    echo "Something went wrong while installing Python libraries and cannot continue."
+    echo "Please visit https://invoke-ai.github.io/InvokeAI/#installation for alternative"
+    echo "installation methods."
+    echo "Press any key to continue"
+    pause
+    exit /b
+)
+
+call conda activate invokeai
 @rem preload the models
 call python scripts\preload_models.py
+if "%ERRORLEVEL%" NEQ "0" (
+    echo ""
+    echo "The preload_models.py script crashed or was cancelled."
+    echo "InvokeAI is not ready to run. To run preload_models.py again,"
+    echo "run the command 'update.bat' in this directory."
+    echo "Press any key to continue"
+    pause
+    exit /b
+)
+
 @rem tell the user their next steps
-echo.
+echo ""
+echo "* InvokeAI installed successfully *"
 echo "You can now start generating images by double-clicking the 'invoke.bat' file (inside this folder)"
+echo "Press any key to continue"
 pause
+exit 0
@@ -99,13 +99,32 @@ conda activate
 if [ "$OS_NAME" == "mac" ]; then
     PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-${OS_ARCH} conda env create -f environment-mac.yml
 else
+    conda env remove -n invokeai
     conda env create -f environment.yml
 fi
 
+status=$?
+
+if test $status -ne 0
+then
+    echo "Something went wrong while installing Python libraries and cannot continue."
+    echo "Please visit https://invoke-ai.github.io/InvokeAI/#installation for alternative"
+    echo "installation methods"
+else
 conda activate invokeai
 
 # preload the models
+echo "Calling the preload_models.py script"
 python scripts/preload_models.py
+status=$?
+if test $status -ne 0
+then
+    echo "The preload_models.py script crashed or was cancelled."
+    echo "InvokeAI is not ready to run. Try again by running"
+    echo "update.sh in this directory."
+else
 # tell the user their next steps
 echo "You can now start generating images by running invoke.sh (inside this folder), using ./invoke.sh"
+fi
+fi
 
+conda activate invokeai
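Both installer scripts now stop with an explanation when `conda env create` or `preload_models.py` fails, rather than continuing blindly. The same fail-fast pattern condensed into Python for comparison; the command lists are taken from the scripts, while the helper itself is hypothetical:

```python
import subprocess
import sys

def run_step(cmd, failure_hint):
    """Run one installer step; stop with a hint if it fails."""
    if subprocess.run(cmd).returncode != 0:
        print(failure_hint)
        sys.exit(1)

run_step(['conda', 'env', 'create', '-f', 'environment.yml'],
         'Something went wrong while installing Python libraries.')
run_step(['python', 'scripts/preload_models.py'],
         'The preload_models.py script crashed or was cancelled.')
```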
@@ -896,7 +896,7 @@ class Generate:
         assert os.path.exists(image_path), '** "{image_path}" not found. Please enter the name of an existing image file to mask **'
         basename,_ = os.path.splitext(os.path.basename(image_path))
         if self.txt2mask is None:
-            self.txt2mask = Txt2Mask(device = self.device)
+            self.txt2mask = Txt2Mask(device = self.device, refined=True)
         segmented = self.txt2mask.segment(image_path,prompt)
         trans = segmented.to_transparent()
         inverse = segmented.to_transparent(invert=True)
@@ -41,7 +41,7 @@ class Outcrop(object):
             height = extended_image.height,
             init_img = extended_image,
             strength = 0.90,
-            image_callback = wrapped_callback,
+            image_callback = wrapped_callback if image_callback else None,
             seam_size = opt.seam_size or 96,
             seam_blur = opt.seam_blur or 16,
             seam_strength = opt.seam_strength or 0.7,
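The one-line `Outcrop` change guards the callback: `wrapped_callback` used to be forwarded unconditionally, so it could fire even when the caller supplied no `image_callback`. The pattern isolated as a sketch, with illustrative names:

```python
def make_callback(user_callback):
    def wrapped(image, seed):
        user_callback(image, seed)  # would raise if user_callback were None
    # forward the wrapper only when a real callback was supplied
    return wrapped if user_callback else None
```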
@@ -36,6 +36,7 @@ from torchvision import transforms
 
 CLIP_VERSION = 'ViT-B/16'
 CLIPSEG_WEIGHTS = 'src/clipseg/weights/rd64-uni.pth'
+CLIPSEG_WEIGHTS_REFINED = 'src/clipseg/weights/rd64-uni-refined.pth'
 CLIPSEG_SIZE = 352
 
 class SegmentedGrayscale(object):
@@ -72,14 +73,14 @@ class Txt2Mask(object):
     Create new Txt2Mask object. The optional device argument can be one of
     'cuda', 'mps' or 'cpu'.
     '''
-    def __init__(self,device='cpu'):
+    def __init__(self,device='cpu',refined=False):
         print('>> Initializing clipseg model for text to mask inference')
         self.device = device
-        self.model = CLIPDensePredT(version=CLIP_VERSION, reduce_dim=64, )
+        self.model = CLIPDensePredT(version=CLIP_VERSION, reduce_dim=64, complex_trans_conv=refined)
         self.model.eval()
         # initially we keep everything in cpu to conserve space
         self.model.to('cpu')
-        self.model.load_state_dict(torch.load(CLIPSEG_WEIGHTS, map_location=torch.device('cpu')), strict=False)
+        self.model.load_state_dict(torch.load(CLIPSEG_WEIGHTS_REFINED if refined else CLIPSEG_WEIGHTS, map_location=torch.device('cpu')), strict=False)
 
     @torch.no_grad()
     def segment(self, image, prompt:str) -> SegmentedGrayscale:
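How the new `refined` flag appears intended to be used, judging only from the signatures above; the import path and file names here are assumptions, not taken from the diff:

```python
from ldm.invoke.txt2mask import Txt2Mask  # module path assumed

t2m = Txt2Mask(device='cpu', refined=True)     # loads rd64-uni-refined.pth
segmented = t2m.segment('photo.png', 'a cat')  # grayscale mask for the prompt
keep = segmented.to_transparent()              # matched region kept
drop = segmented.to_transparent(invert=True)   # everything else kept
```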
@@ -1,6 +1,6 @@
 -r requirements.txt
 
 protobuf==3.19.6
-torch
-torchvision
+torch<1.13.0
+torchvision<0.14.0
 -e .
@@ -2,6 +2,7 @@
 
 albumentations
 einops
+getpass_asterisk
 huggingface-hub
 imageio-ffmpeg
 imageio
@@ -13,7 +14,7 @@ opencv-python
 pillow
 pip>=22
 pudb
-pytorch-lightning=1.7.7
+pytorch-lightning==1.7.7
 scikit-image>=0.19
 streamlit
 pyreadline3
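Two pins above deserve a note: `torch<1.13.0` and `torchvision<0.14.0` fence off the then-new 1.13/0.14 releases, and `pytorch-lightning=1.7.7` becomes `==1.7.7` because pip's requirement grammar (PEP 508) has no single-`=` operator; single-`=` pins are conda syntax. A small sketch of the parse failure, using the `packaging` library:

```python
from packaging.requirements import Requirement, InvalidRequirement

for spec in ('pytorch-lightning==1.7.7', 'pytorch-lightning=1.7.7'):
    try:
        print(spec, '->', Requirement(spec).specifier)
    except InvalidRequirement as err:
        print(spec, '-> invalid:', err)
```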
@@ -14,18 +14,21 @@ import warnings
 from urllib import request
 from tqdm import tqdm
 from omegaconf import OmegaConf
+from huggingface_hub import HfFolder, hf_hub_url
 from pathlib import Path
+from getpass_asterisk import getpass_asterisk
 import traceback
-import getpass
 import requests
 import clip
 import transformers
 import torch
 transformers.logging.set_verbosity_error()
 
-# deferred loading so that help message can be printed quickly
-def load_libs():
-    pass
+import warnings
+warnings.filterwarnings('ignore')
+#warnings.simplefilter('ignore')
+#warnings.filterwarnings('ignore',category=DeprecationWarning)
+#warnings.filterwarnings('ignore',category=UserWarning)
 
 #--------------------------globals--
 Model_dir = './models/ldm/stable-diffusion-v1/'
@@ -98,7 +101,7 @@ this program and resume later.\n'''
 #--------------------------------------------
 def postscript():
     print(
-        '''You're all set! You may now launch InvokeAI using one of these two commands:
+        '''\n** Model Installation Successful **\nYou're all set! You may now launch InvokeAI using one of these two commands:
 Web version:
 
     python scripts/invoke.py --web  (connect to http://localhost:9090)
@@ -220,10 +223,18 @@ This involves a few easy steps.
 '''
 )
 input('Press <enter> when you are ready to continue:')
+<<<<<<< HEAD
+
 from huggingface_hub import HfFolder
+=======
+print('(Fetching Hugging Face token from cache...',end='')
+>>>>>>> spezialspezial-patch-9
 access_token = HfFolder.get_token()
+if access_token is not None:
+    print('found')
+
 if access_token is None:
+    print('not found')
     print('''
 4. Thank you! The last step is to enter your HuggingFace access token so that
 this script is authorized to initiate the download. Go to the access tokens
@@ -237,8 +248,7 @@ This involves a few easy steps.
 
     Now copy the token to your clipboard and paste it here: '''
     )
-    access_token = getpass.getpass()
-    HfFolder.save_token(access_token)
+    access_token = getpass_asterisk.getpass_asterisk()
     return access_token
 
 #---------------------------------------------
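Context for the `getpass` to `getpass_asterisk` swap: `getpass.getpass()` echoes nothing while a token is typed or pasted, which makes long Hugging Face tokens easy to fumble, while `getpass_asterisk` echoes one asterisk per character. The call below mirrors the diff exactly; no extra arguments are assumed. Note also that `HfFolder.save_token()` moves out of this function: the next hunk saves the token only after all downloads succeed.

```python
from getpass_asterisk import getpass_asterisk

access_token = getpass_asterisk.getpass_asterisk()  # reads input, echoing '*'
print('token length:', len(access_token))
```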
@@ -268,14 +278,26 @@ def download_weight_datasets(models:dict, access_token:str):
         )
         if success:
             successful[mod] = True
+    if len(successful) < len(models):
+        print(f'\n\n** There were errors downloading one or more files. **')
+        print('Please double-check your license agreements, and your access token.')
+        HfFolder.delete_token()
+        print('Press any key to try again. Type ^C to quit.\n')
+        input()
+        return None
+
+    HfFolder.save_token(access_token)
     keys = ', '.join(successful.keys())
     print(f'Successfully installed {keys}')
     return successful
 
 #---------------------------------------------
 def download_with_resume(repo_id:str, model_name:str, access_token:str)->bool:
+<<<<<<< HEAD
     from huggingface_hub import hf_hub_url
 
+=======
+>>>>>>> spezialspezial-patch-9
     model_dest = os.path.join(Model_dir, model_name)
     os.makedirs(os.path.dirname(model_dest), exist_ok=True)
     url = hf_hub_url(repo_id, model_name)
@@ -295,6 +317,8 @@ def download_with_resume(repo_id:str, model_name:str, access_token:str)->bool:
     if resp.status_code==416: # "range not satisfiable", which means nothing to return
         print(f'* {model_name}: complete file found. Skipping.')
         return True
+    elif resp.status_code != 200:
+        print(f'** An error occurred during downloading {model_name}: {resp.reason}')
     elif exist_size > 0:
         print(f'* {model_name}: partial file found. Resuming...')
     else:
@@ -302,7 +326,7 @@ def download_with_resume(repo_id:str, model_name:str, access_token:str)->bool:
 
     try:
         if total < 2000:
-            print(f'* {model_name}: {resp.text}')
+            print(f'*** ERROR DOWNLOADING {model_name}: {resp.text}')
             return False
 
         with open(model_dest, open_mode) as file, tqdm(
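The hunks above round out the HTTP handling in `download_with_resume`: a 416 status means the local file is already complete, other non-200 statuses are now reported, and a partial file is resumed via a Range request. A condensed, hedged sketch of that resume pattern with `requests`; the URL and destination are placeholders:

```python
import os
import requests

def resume_download(url: str, dest: str) -> bool:
    exist_size = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {'Range': f'bytes={exist_size}-'} if exist_size else {}
    resp = requests.get(url, headers=headers, stream=True)
    if resp.status_code == 416:                  # range not satisfiable: done
        return True
    if resp.status_code not in (200, 206):
        print(f'** An error occurred: {resp.reason}')
        return False
    mode = 'ab' if resp.status_code == 206 else 'wb'  # 206: Range honored
    with open(dest, mode) as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
    return True
```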
@@ -329,7 +353,11 @@ def update_config_file(successfully_downloaded:dict,opt:dict):
 
     try:
         if os.path.exists(Config_file):
+<<<<<<< HEAD
             print(f'** {Config_file} exists. Renaming to {Config_file}.orig')
+=======
+            print(f'* {Config_file} exists. Renaming to {Config_file}.orig')
+>>>>>>> spezialspezial-patch-9
             os.rename(Config_file,f'{Config_file}.orig')
             tmpfile = os.path.join(os.path.dirname(Config_file),'new_config.tmp')
             with open(tmpfile, 'w') as outfile:
@@ -383,26 +411,35 @@ def new_config_file_contents(successfully_downloaded:dict, Config_file:str)->str
 # this will preload the Bert tokenizer fles
 def download_bert():
     print('Installing bert tokenizer (ignore deprecation errors)...', end='')
+<<<<<<< HEAD
     from transformers import BertTokenizerFast, AutoFeatureExtractor
+=======
+    sys.stdout.flush()
+>>>>>>> spezialspezial-patch-9
     with warnings.catch_warnings():
         warnings.filterwarnings('ignore', category=DeprecationWarning)
+        from transformers import BertTokenizerFast, AutoFeatureExtractor
         tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
     print('...success')
-    sys.stdout.flush()
 
 #---------------------------------------------
 # this will download requirements for Kornia
 def download_kornia():
-    print('Installing Kornia requirements...', end='')
-    with warnings.catch_warnings():
-        warnings.filterwarnings('ignore', category=DeprecationWarning)
+    print('Installing Kornia requirements (ignore deprecation errors)...', end='')
+    sys.stdout.flush()
     import kornia
     print('...success')
 
 #---------------------------------------------
 def download_clip():
     print('Loading CLIP model...',end='')
+<<<<<<< HEAD
     from transformers import CLIPTokenizer, CLIPTextModel
+=======
+    with warnings.catch_warnings():
+        warnings.filterwarnings('ignore', category=DeprecationWarning)
+        from transformers import CLIPTokenizer, CLIPTextModel
+>>>>>>> spezialspezial-patch-9
     sys.stdout.flush()
     version = 'openai/clip-vit-large-patch14'
     tokenizer = CLIPTokenizer.from_pretrained(version)
@@ -531,7 +568,6 @@ if __name__ == '__main__':
                         default='./configs/models.yaml',
                         help='path to configuration file to create')
     opt = parser.parse_args()
-    load_libs()
 
     try:
         if opt.interactive:
@@ -8,6 +8,12 @@ if exist ".git" (
     call git pull
 )
 
 conda env update
+conda activate invokeai
+python scripts/preload_models.py
+
+echo "Press any key to continue"
 pause
+exit 0
@@ -20,3 +20,5 @@ case "${OS_NAME}" in
     Darwin*) conda env update -f environment-mac.yml;;
     *) echo "Unknown OS: $OS_NAME! This script runs only on Linux or Mac" && exit
 esac
+
+python scripts/preload_models.py