mirror of https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00

Merge branch 'spezialspezial-patch-9' into development

This commit is contained in:
commit 240e5486c8
1 .github/workflows/mkdocs-material.yml vendored
@@ -4,6 +4,7 @@ on:
   branches:
     - 'main'
     - 'development'
+    - 'release-candidate-2-1'
 
 jobs:
   mkdocs-material:
@@ -43,6 +43,7 @@ title: Manual Installation, Linux
    environment named `invokeai` and activate the environment.
 
    ```bash
+   (base) rm -rf src # (this is a precaution in case there is already a src directory)
    (base) ~/InvokeAI$ conda env create
    (base) ~/InvokeAI$ conda activate invokeai
    (invokeai) ~/InvokeAI$
@@ -51,50 +52,54 @@ title: Manual Installation, Linux
 After these steps, your command prompt will be prefixed by `(invokeai)` as shown
 above.
 
-6. Load a couple of small machine-learning models required by stable diffusion:
+6. Load the big stable diffusion weights files and a couple of smaller machine-learning models:
 
    ```bash
    (invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
   ```
 
+   !!! note
+       This script will lead you through the process of creating an account on Hugging Face,
+       accepting the terms and conditions of the Stable Diffusion model license, and
+       obtaining an access token for downloading. It will then download and install the
+       weights files for you.
 
-   This step is necessary because I modified the original just-in-time
-   model loading scheme to allow the script to work on GPU machines that are not
-   internet connected. See [Preload Models](../features/OTHER.md#preload-models)
+       Please see [../features/INSTALLING_MODELS.md] for a manual process for doing the
+       same thing.
 
-7. Install the weights for the stable diffusion model.
+7. Start generating images!
 
-   - Sign up at https://huggingface.co
-   - Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
-   - Accept the terms and click Access Repository
-   - Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
-     and move it into this directory under `models/ldm/stable_diffusion_v1/v1-5-pruned-emaonly.ckpt`
-
-   There are many other models that you can use. Please see [../features/INSTALLING_MODELS.md]
-   for details.
-
-8. Start generating images!
-
-   ```bash
-   # for the pre-release weights use the -l or --liaon400m switch
-   (invokeai) ~/InvokeAI$ python3 scripts/invoke.py -l
-
-   # for the post-release weights do not use the switch
-   (invokeai) ~/InvokeAI$ python3 scripts/invoke.py
-
-   # for additional configuration switches and arguments, use -h or --help
-   (invokeai) ~/InvokeAI$ python3 scripts/invoke.py -h
-   ```
+   # Command-line interface
+   (invokeai) python scripts/invoke.py
 
+   # or run the web interface on localhost:9090!
+   (invokeai) python scripts/invoke.py --web
+
+   # or run the web interface on your machine's network interface!
+   (invokeai) python scripts/invoke.py --web --host 0.0.0.0
+
+   To use an alternative model you may invoke the `!switch` command in
+   the CLI, or pass `--model <model_name>` during `invoke.py` launch for
+   either the CLI or the Web UI. See [Command Line
+   Client](../features/CLI.md#model-selection-and-importation). The
+   model names are defined in `configs/models.yaml`.
 
-9. Subsequently, to relaunch the script, be sure to run "conda activate invokeai" (step 5, second command), enter the `InvokeAI` directory, and then launch the invoke script (step 8). If you forget to activate the 'invokeai' environment, the script will fail with multiple `ModuleNotFound` errors.
+9. Subsequently, to relaunch the script, be sure to run "conda
+   activate invokeai" (step 5, second command), enter the `InvokeAI`
+   directory, and then launch the invoke script (step 8). If you forget
+   to activate the 'invokeai' environment, the script will fail with
+   multiple `ModuleNotFound` errors.
 
 ## Updating to newer versions of the script
 
-This distribution is changing rapidly. If you used the `git clone` method (step 5) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI` and type:
+This distribution is changing rapidly. If you used the `git clone`
+method (step 5) to download the InvokeAI directory, then to update to
+the latest and greatest version, launch the Anaconda window, enter
+`InvokeAI` and type:
 
 ```bash
 (invokeai) ~/InvokeAI$ git pull
+(invokeai) ~/InvokeAI$ rm -rf src # prevents conda freezing errors
 (invokeai) ~/InvokeAI$ conda env update -f environment.yml
 ```
@@ -19,24 +19,9 @@ an issue on Github and we will do our best to help.
 
 ## Installation
 
-First you need to download a large checkpoint file.
-
-1. Sign up at https://huggingface.co
-2. Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
-3. Accept the terms and click Access Repository
-4. Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
-   and move it into this directory under `models/ldm/stable_diffusion_v1/v1-5-pruned-emaonly.ckpt`
-
-There are many other models that you can try. Please see [../features/INSTALLING_MODELS.md]
-for details.
-
-While that is downloading, open Terminal and run the following
-commands one at a time, reading the comments and taking care to run
-the appropriate command for your Mac's architecture (Intel or M1).
-
 !!! todo "Homebrew"
 
-    If you have no brew installation yet (otherwise skip):
+    First you will install the "brew" package manager. Skip this if brew is already installed.
 
     ```bash title="install brew (and Xcode command line tools)"
     /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
@@ -100,25 +85,6 @@ the appropriate command for your Mac's architecture (Intel or M1).
     cd InvokeAI
     ```
 
-!!! todo "Wait until the checkpoint-file download finished, then proceed"
-
-    We will leave the big checkpoint wherever you stashed it for long-term storage,
-    and make a link to it from the repo's folder. This allows you to use it for
-    other repos, or if you need to delete Invoke AI, you won't have to download it again.
-
-    ```{.bash .annotate}
-    # Make the directory in the repo for the symlink
-    mkdir -p models/ldm/stable-diffusion-v1/
-
-    # This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
-    PATH_TO_CKPT="$HOME/Downloads" # (1)!
-
-    # Create a link to the checkpoint
-    ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
-    ```
-
-1. replace `$HOME/Downloads` with the Location where you actually stored the Checkppoint (`sd-v1-4.ckpt`)
-
 !!! todo "Create the environment & install packages"
 
     === "M1 Mac"
@@ -137,25 +103,40 @@ the appropriate command for your Mac's architecture (Intel or M1).
     # Activate the environment (you need to do this every time you want to run SD)
     conda activate invokeai
 
-    # This will download some bits and pieces and make take a while
-    (invokeai) python scripts/preload_models.py
-
-    # Run SD!
-    (invokeai) python scripts/dream.py
-
-    # or run the web interface!
-    (invokeai) python scripts/invoke.py --web
-
     # The original scripts should work as well.
     (invokeai) python scripts/orig_scripts/txt2img.py \
         --prompt "a photograph of an astronaut riding a horse" \
         --plms
     ```
 !!! info
 
     `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
     create -f environment-mac.yml` never finishing in some situations. So
-    it isn't required but wont hurt.
+    it isn't required but won't hurt.
 
+!!! todo "Download the model weight files"
+
+    The `preload_models.py` script downloads and installs the model weight
+    files for you. It will lead you through the process of getting a Hugging Face
+    account, accepting the Stable Diffusion model weight license agreement, and
+    creating a download token:
+
+    # This will take some time, depending on the speed of your internet connection
+    # and will consume about 10GB of space
+    (invokeai) python scripts/preload_models.py
+
+!!! todo "Run InvokeAI!"
+
+    # Command-line interface
+    (invokeai) python scripts/invoke.py
+
+    # or run the web interface on localhost:9090!
+    (invokeai) python scripts/invoke.py --web
+
+    # or run the web interface on your machine's network interface!
+    (invokeai) python scripts/invoke.py --web --host 0.0.0.0
+
+    To use an alternative model you may invoke the `!switch` command in
+    the CLI, or pass `--model <model_name>` during `invoke.py` launch for
+    either the CLI or the Web UI. See [Command Line
+    Client](../features/CLI.md#model-selection-and-importation). The
+    model names are defined in `configs/models.yaml`.
 
 ---
 
 ## Common problems
@@ -69,40 +69,42 @@ in the wiki
    environment file isn't specified, conda will default to `environment.yml`. You will need
    to provide the `-f` option if you wish to load a different environment file at any point.
 
-7. Run the command:
+7. Load the big stable diffusion weights files and a couple of smaller machine-learning models:
 
-   ```batch
-   python scripts\preload_models.py
+   ```bash
+   (invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
    ```
 
-   This installs several machine learning models that stable diffusion requires.
+   !!! note
+       This script will lead you through the process of creating an account on Hugging Face,
+       accepting the terms and conditions of the Stable Diffusion model license, and
+       obtaining an access token for downloading. It will then download and install the
+       weights files for you.
 
-   Note: This step is required. This was done because some users may might be
-   blocked by firewalls or have limited internet connectivity for the models to
-   be downloaded just-in-time.
+       Please see [../features/INSTALLING_MODELS.md] for a manual process for doing the
+       same thing.
 
-8. Now you need to install the weights for the big stable diffusion model.
+8. Start generating images!
 
-   - Sign up at https://huggingface.co
-   - Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
-   - Accept the terms and click Access Repository
-   - Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
-     and move it into this directory under `models/ldm/stable_diffusion_v1/v1-5-pruned-emaonly.ckpt`
+   # Command-line interface
+   (invokeai) python scripts/invoke.py
 
-   There are many other models that you can use. Please see [../features/INSTALLING_MODELS.md]
-   for details.
+   # or run the web interface on localhost:9090!
+   (invokeai) python scripts/invoke.py --web
 
-9. Start generating images!
+   # or run the web interface on your machine's network interface!
+   (invokeai) python scripts/invoke.py --web --host 0.0.0.0
 
-   ```batch title="for the pre-release weights"
-   python scripts\invoke.py -l
-   ```
+   To use an alternative model you may invoke the `!switch` command in
+   the CLI, or pass `--model <model_name>` during `invoke.py` launch for
+   either the CLI or the Web UI. See [Command Line
+   Client](../features/CLI.md#model-selection-and-importation). The
+   model names are defined in `configs/models.yaml`.
 
-   ```batch title="for the post-release weights"
-   python scripts\invoke.py
-   ```
-
-10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3),enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then launch the invoke script (step 9).
+9. Subsequently, to relaunch the script, first activate the Anaconda
+   command window (step 3), enter the InvokeAI directory (step 5, `cd
+   \path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then
+   launch the invoke script (step 9).
 
 !!! tip "Tildebyte has written an alternative"
@@ -11,6 +11,7 @@ dependencies:
   - numpy=1.19
   - imageio=2.9.0
   - opencv=4.6.0
+  - getpass_asterisk
   - pillow=8.*
   - flask=2.1.*
   - flask_cors=3.0.10
@@ -22,6 +22,7 @@ dependencies:
   - diffusers=0.6.0
   - einops=0.4.1
   - grpcio=1.46.4
+  - getpass_asterisk
   - humanfriendly=10.0
   - imageio=2.21.2
   - imageio-ffmpeg=0.4.7
@@ -1,14 +1,14 @@
 name: invokeai
 channels:
   - pytorch
+  - nvidia
   - defaults
 dependencies:
   - python>=3.9
-  - pip=20.3
-  - cudatoolkit=11.3
-  - pytorch=1.11.0
-  - torchvision=0.12.0
-  - numpy=1.19.2
+  - pip=22.2.2
+  - numpy=1.23.3
+  - torchvision=0.14.0
+  - pytorch-cuda=11.7
   - pip:
     - albumentations==0.4.3
     - opencv-python==4.5.5.64
@@ -33,6 +33,7 @@ dependencies:
     - flask_cors==3.0.10
     - dependency_injector==4.40.0
     - eventlet
+    - getpass_asterisk
     - kornia==0.6.0
     - -e git+https://github.com/openai/CLIP.git@main#egg=clip
    - -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
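The dependency bumps above move the CUDA environment to the pytorch-cuda 11.7 toolchain and add `getpass_asterisk`. A quick sanity check after `conda env update`, as a sketch; the exact versions depend on what the conda solver picks:

```python
# Sketch: verify the updated environment resolved to the new pins.
# Version expectations here are assumptions, not guarantees.
import torch
import torchvision

print('torch:', torch.__version__)              # built against CUDA 11.7 per pytorch-cuda=11.7
print('torchvision:', torchvision.__version__)  # pinned to 0.14.0 in environment.yml above
print('CUDA available:', torch.cuda.is_available())  # True on a working CUDA install
```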
829 frontend/dist/assets/index.4488003f.js vendored Normal file
File diff suppressed because one or more lines are too long

690 frontend/dist/assets/index.ae92a637.js vendored Normal file
File diff suppressed because one or more lines are too long

517 frontend/dist/assets/index.cc049b93.js vendored Normal file
File diff suppressed because one or more lines are too long

517 frontend/dist/assets/index.e2832fd4.js vendored Normal file
File diff suppressed because one or more lines are too long
@@ -80,14 +80,36 @@ if not exist ".git" (
 call conda activate
 
 @rem create the environment
+call conda env remove -n invokeai
 call conda env create
-call conda activate invokeai
+if "%ERRORLEVEL%" NEQ "0" (
+    echo ""
+    echo "Something went wrong while installing Python libraries and cannot continue.
+    echo "Please visit https://invoke-ai.github.io/InvokeAI/#installation for alternative"
+    echo "installation methods."
+    echo "Press any key to continue"
+    pause
+    exit /b
+)
+
+call conda activate invokeai
 @rem preload the models
 call python scripts\preload_models.py
+if "%ERRORLEVEL%" NEQ "0" (
+    echo ""
+    echo "The preload_models.py script crashed or was cancelled."
+    echo "InvokeAI is not ready to run. To run preload_models.py again,"
+    echo "run the command 'update.bat' in this directory."
+    echo "Press any key to continue"
+    pause
+    exit /b
+)
+
 @rem tell the user their next steps
 echo.
 echo ""
 echo "* InvokeAI installed successfully *"
 echo "You can now start generating images by double-clicking the 'invoke.bat' file (inside this folder)
 
 echo "Press any key to continue"
 pause
 exit 0
@@ -99,13 +99,32 @@ conda activate
 if [ "$OS_NAME" == "mac" ]; then
     PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-${OS_ARCH} conda env create -f environment-mac.yml
 else
+    conda env remove -n invokeai
     conda env create -f environment.yml
 fi
 
-conda activate invokeai
+status=$?
 
-# preload the models
-python scripts/preload_models.py
+if test $status -ne 0
+then
+    echo "Something went wrong while installing Python libraries and cannot continue."
+    echo "Please visit https://invoke-ai.github.io/InvokeAI/#installation for alternative"
+    echo "installation methods"
+else
+    conda activate invokeai
+    # preload the models
+    echo "Calling the preload_models.py script"
+    python scripts/preload_models.py
+    status=$?
+    if test $status -ne 0
+    then
+        echo "The preload_models.py script crashed or was cancelled."
+        echo "InvokeAI is not ready to run. Try again by running"
+        echo "update.sh in this directory."
+    else
+        # tell the user their next steps
+        echo "You can now start generating images by running invoke.sh (inside this folder), using ./invoke.sh"
+    fi
+fi
-
-# tell the user their next steps
-echo "You can now start generating images by running invoke.sh (inside this folder), using ./invoke.sh"
@@ -896,7 +896,7 @@ class Generate:
         assert os.path.exists(image_path), '** "{image_path}" not found. Please enter the name of an existing image file to mask **'
         basename,_ = os.path.splitext(os.path.basename(image_path))
         if self.txt2mask is None:
-            self.txt2mask = Txt2Mask(device = self.device)
+            self.txt2mask = Txt2Mask(device = self.device, refined=True)
         segmented = self.txt2mask.segment(image_path,prompt)
         trans = segmented.to_transparent()
         inverse = segmented.to_transparent(invert=True)
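The hunk above flips the lazily constructed `Txt2Mask` to the refined variant. Because construction is guarded by `if self.txt2mask is None`, the heavier clipseg model is only built the first time a text mask is requested. A generic sketch of that lazy-init pattern; the class names here are illustrative, not InvokeAI's API:

```python
# Generic lazy-initialization sketch mirroring the hunk above.
# ExpensiveModel stands in for Txt2Mask; all names are illustrative.
class ExpensiveModel:
    def __init__(self, refined: bool = False) -> None:
        self.refined = refined  # imagine slow weight loading here

class Generator:
    def __init__(self) -> None:
        self.txt2mask = None  # deferred: not every session needs masking

    def get_txt2mask(self) -> ExpensiveModel:
        if self.txt2mask is None:  # first use pays the construction cost
            self.txt2mask = ExpensiveModel(refined=True)
        return self.txt2mask  # later calls reuse the same instance
```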
@@ -41,7 +41,7 @@ class Outcrop(object):
             height = extended_image.height,
             init_img = extended_image,
             strength = 0.90,
-            image_callback = wrapped_callback,
+            image_callback = wrapped_callback if image_callback else None,
             seam_size = opt.seam_size or 96,
             seam_blur = opt.seam_blur or 16,
             seam_strength = opt.seam_strength or 0.7,
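This change guards the callback plumbing: `wrapped_callback` wraps a caller-supplied `image_callback`, and passing the wrapper through unconditionally would invoke it even when no callback was given. A minimal sketch of the pattern, with names other than `image_callback` being illustrative:

```python
# Sketch of the guard introduced above: only forward a wrapper when the
# caller actually supplied a callback. All names here are illustrative.
from typing import Callable, Optional

def run_with_callback(image_callback: Optional[Callable[[str], None]] = None) -> None:
    def wrapped_callback(result: str) -> None:
        # extra bookkeeping would happen here before delegating
        image_callback(result)  # would crash if image_callback is None

    callback = wrapped_callback if image_callback else None
    if callback:
        callback('image-0001.png')

run_with_callback()                    # no callback given: nothing is invoked
run_with_callback(lambda r: print(r))  # prints image-0001.png
```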
@@ -36,6 +36,7 @@ from torchvision import transforms
 
 CLIP_VERSION = 'ViT-B/16'
 CLIPSEG_WEIGHTS = 'src/clipseg/weights/rd64-uni.pth'
+CLIPSEG_WEIGHTS_REFINED = 'src/clipseg/weights/rd64-uni-refined.pth'
 CLIPSEG_SIZE = 352
 
 class SegmentedGrayscale(object):
@@ -72,14 +73,14 @@ class Txt2Mask(object):
     Create new Txt2Mask object. The optional device argument can be one of
     'cuda', 'mps' or 'cpu'.
     '''
-    def __init__(self,device='cpu'):
+    def __init__(self,device='cpu',refined=False):
         print('>> Initializing clipseg model for text to mask inference')
         self.device = device
-        self.model = CLIPDensePredT(version=CLIP_VERSION, reduce_dim=64, )
+        self.model = CLIPDensePredT(version=CLIP_VERSION, reduce_dim=64, complex_trans_conv=refined)
         self.model.eval()
         # initially we keep everything in cpu to conserve space
         self.model.to('cpu')
-        self.model.load_state_dict(torch.load(CLIPSEG_WEIGHTS, map_location=torch.device('cpu')), strict=False)
+        self.model.load_state_dict(torch.load(CLIPSEG_WEIGHTS_REFINED if refined else CLIPSEG_WEIGHTS, map_location=torch.device('cpu')), strict=False)
 
     @torch.no_grad()
     def segment(self, image, prompt:str) -> SegmentedGrayscale:
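A hedged usage sketch of the new `refined` switch; the import path, image, and prompt are assumptions for illustration, while the weight files are the constants defined above:

```python
# Sketch: exercising the refined clipseg variant added above.
# Module path and inputs are assumptions, not verified against the repo.
from ldm.invoke.txt2mask import Txt2Mask

t2m = Txt2Mask(device='cpu', refined=True)  # loads rd64-uni-refined.pth and
                                            # builds CLIPDensePredT with complex_trans_conv=True
segmented = t2m.segment('outputs/portrait.png', 'face')  # hypothetical image and prompt
mask = segmented.to_transparent()                # transparency mask, as used in generate.py
inverse = segmented.to_transparent(invert=True)  # and its inverse
```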
@@ -1,6 +1,6 @@
 -r requirements.txt
 
 protobuf==3.19.6
-torch
-torchvision
+torch<1.13.0
+torchvision<0.14.0
 -e .
@@ -2,6 +2,7 @@
 
 albumentations
 einops
+getpass_asterisk
 huggingface-hub
 imageio-ffmpeg
 imageio
@@ -13,7 +14,7 @@ opencv-python
 pillow
 pip>=22
 pudb
-pytorch-lightning=1.7.7
+pytorch-lightning==1.7.7
 scikit-image>=0.19
 streamlit
 pyreadline3
@@ -14,18 +14,21 @@ import warnings
 from urllib import request
 from tqdm import tqdm
 from omegaconf import OmegaConf
+from huggingface_hub import HfFolder, hf_hub_url
+from pathlib import Path
+from getpass_asterisk import getpass_asterisk
 import traceback
-import getpass
 import requests
 import clip
 import transformers
 import torch
 transformers.logging.set_verbosity_error()
 
 # deferred loading so that help message can be printed quickly
 def load_libs():
+    pass
     import warnings
     warnings.filterwarnings('ignore')
     #warnings.simplefilter('ignore')
     #warnings.filterwarnings('ignore',category=DeprecationWarning)
     #warnings.filterwarnings('ignore',category=UserWarning)
 
 #--------------------------globals--
 Model_dir = './models/ldm/stable-diffusion-v1/'
@@ -98,7 +101,7 @@ this program and resume later.\n'''
 #--------------------------------------------
 def postscript():
     print(
-        '''You're all set! You may now launch InvokeAI using one of these two commands:
+        '''\n** Model Installation Successful **\nYou're all set! You may now launch InvokeAI using one of these two commands:
 Web version:
 
     python scripts/invoke.py --web  (connect to http://localhost:9090)
@@ -220,10 +223,18 @@ This involves a few easy steps.
 '''
     )
     input('Press <enter> when you are ready to continue:')
+<<<<<<< HEAD
+
+    from huggingface_hub import HfFolder
+=======
+    print('(Fetching Hugging Face token from cache...',end='')
+>>>>>>> spezialspezial-patch-9
     access_token = HfFolder.get_token()
+    if access_token is not None:
+        print('found')
 
     if access_token is None:
+        print('not found')
         print('''
 4. Thank you! The last step is to enter your HuggingFace access token so that
 this script is authorized to initiate the download. Go to the access tokens
@@ -237,8 +248,7 @@ This involves a few easy steps.
 
 Now copy the token to your clipboard and paste it here: '''
     )
-    access_token = getpass.getpass()
-    HfFolder.save_token(access_token)
+    access_token = getpass_asterisk.getpass_asterisk()
     return access_token
 
 #---------------------------------------------
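The two hunks above move token handling onto huggingface_hub's cache: the token is read with `HfFolder.get_token()`, prompted for only when missing (now echoing asterisks via the new `getpass_asterisk` dependency), and saved back only after downloads succeed. A condensed sketch of that flow, with prompts and download logic trimmed:

```python
# Condensed sketch of the token flow after these hunks; mirrors the
# imports and calls used in preload_models.py above.
from huggingface_hub import HfFolder
from getpass_asterisk import getpass_asterisk

def get_access_token() -> str:
    token = HfFolder.get_token()  # read the cached token, if any
    if token is None:
        # prompt, echoing asterisks instead of hiding input entirely
        token = getpass_asterisk.getpass_asterisk()
    return token

# download_weight_datasets() then calls HfFolder.delete_token() on failure,
# so a bad token is re-prompted, and HfFolder.save_token(access_token)
# only once every file has downloaded successfully.
```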
@@ -268,14 +278,26 @@ def download_weight_datasets(models:dict, access_token:str):
         )
         if success:
             successful[mod] = True
+    if len(successful) < len(models):
+        print(f'\n\n** There were errors downloading one or more files. **')
+        print('Please double-check your license agreements, and your access token.')
+        HfFolder.delete_token()
+        print('Press any key to try again. Type ^C to quit.\n')
+        input()
+        return None
+
+    HfFolder.save_token(access_token)
     keys = ', '.join(successful.keys())
     print(f'Successfully installed {keys}')
     return successful
 
 #---------------------------------------------
 def download_with_resume(repo_id:str, model_name:str, access_token:str)->bool:
+<<<<<<< HEAD
+    from huggingface_hub import hf_hub_url
+
+=======
+>>>>>>> spezialspezial-patch-9
     model_dest = os.path.join(Model_dir, model_name)
     os.makedirs(os.path.dirname(model_dest), exist_ok=True)
     url = hf_hub_url(repo_id, model_name)
@@ -295,6 +317,8 @@ def download_with_resume(repo_id:str, model_name:str, access_token:str)->bool:
     if resp.status_code==416: # "range not satisfiable", which means nothing to return
         print(f'* {model_name}: complete file found. Skipping.')
         return True
+    elif resp.status_code != 200:
+        print(f'** An error occurred during downloading {model_name}: {resp.reason}')
     elif exist_size > 0:
         print(f'* {model_name}: partial file found. Resuming...')
     else:
@@ -302,7 +326,7 @@ def download_with_resume(repo_id:str, model_name:str, access_token:str)->bool:
 
     try:
         if total < 2000:
-            print(f'* {model_name}: {resp.text}')
+            print(f'*** ERROR DOWNLOADING {model_name}: {resp.text}')
             return False
 
         with open(model_dest, open_mode) as file, tqdm(
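These hunks harden `download_with_resume`, which resumes partial downloads via HTTP Range requests. A self-contained sketch of the underlying pattern, using a generic URL and path rather than the script's exact code:

```python
# Generic sketch of resume-by-Range downloading, the pattern the hunks
# above add error reporting to. URL and destination are placeholders.
import os
import requests

def download_with_resume(url: str, dest: str) -> bool:
    exist_size = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {'Range': f'bytes={exist_size}-'} if exist_size else {}
    resp = requests.get(url, headers=headers, stream=True)

    if resp.status_code == 416:  # range not satisfiable: file already complete
        return True
    if resp.status_code not in (200, 206):
        print(f'** error downloading {dest}: {resp.reason}')
        return False

    mode = 'ab' if resp.status_code == 206 else 'wb'  # append only if the server honored the range
    with open(dest, mode) as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
    return True
```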
@@ -329,7 +353,11 @@ def update_config_file(successfully_downloaded:dict,opt:dict):
 
     try:
         if os.path.exists(Config_file):
+<<<<<<< HEAD
+            print(f'** {Config_file} exists. Renaming to {Config_file}.orig')
+=======
             print(f'* {Config_file} exists. Renaming to {Config_file}.orig')
+>>>>>>> spezialspezial-patch-9
             os.rename(Config_file,f'{Config_file}.orig')
             tmpfile = os.path.join(os.path.dirname(Config_file),'new_config.tmp')
             with open(tmpfile, 'w') as outfile:
@@ -383,26 +411,35 @@ def new_config_file_contents(successfully_downloaded:dict, Config_file:str)->str
 # this will preload the Bert tokenizer files
 def download_bert():
     print('Installing bert tokenizer (ignore deprecation errors)...', end='')
+<<<<<<< HEAD
+    from transformers import BertTokenizerFast, AutoFeatureExtractor
+=======
+    sys.stdout.flush()
+>>>>>>> spezialspezial-patch-9
     with warnings.catch_warnings():
         warnings.filterwarnings('ignore', category=DeprecationWarning)
         from transformers import BertTokenizerFast, AutoFeatureExtractor
         tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
         print('...success')
     sys.stdout.flush()
 
 #---------------------------------------------
 # this will download requirements for Kornia
 def download_kornia():
-    print('Installing Kornia requirements...', end='')
-    with warnings.catch_warnings():
-        warnings.filterwarnings('ignore', category=DeprecationWarning)
+    print('Installing Kornia requirements (ignore deprecation errors)...', end='')
+    sys.stdout.flush()
     import kornia
     print('...success')
 
 #---------------------------------------------
 def download_clip():
     print('Loading CLIP model...',end='')
+<<<<<<< HEAD
+    from transformers import CLIPTokenizer, CLIPTextModel
+=======
+    with warnings.catch_warnings():
+        warnings.filterwarnings('ignore', category=DeprecationWarning)
+        from transformers import CLIPTokenizer, CLIPTextModel
+>>>>>>> spezialspezial-patch-9
     sys.stdout.flush()
     version = 'openai/clip-vit-large-patch14'
     tokenizer = CLIPTokenizer.from_pretrained(version)
@@ -531,7 +568,6 @@ if __name__ == '__main__':
                         default='./configs/models.yaml',
                         help='path to configuration file to create')
     opt = parser.parse_args()
-    load_libs()
 
     try:
         if opt.interactive:
@@ -8,6 +8,12 @@ if exist ".git" (
     call git pull
 )
 
-conda env update
-
+conda env update
+conda activate invokeai
+python scripts/preload_models.py
+
+echo "Press any key to continue"
+pause
+exit 0