mirror of https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00

merged with main

This commit is contained in: fff41a7349
Binary files changed (not shown):
- (modified image): 20 KiB before, 84 KiB after
- docs/assets/installer-walkthrough/installing-models.png (new file): 128 KiB
- docs/assets/installer-walkthrough/settings-form.png (new file): 114 KiB
@@ -150,7 +150,7 @@ experimental versions later.

    ```cmd
    C:\Documents\Linco> cd InvokeAI-Installer
-   C:\Documents\Linco\invokeAI> install.bat
+   C:\Documents\Linco\invokeAI> .\install.bat
    ```

7. **Select the location to install InvokeAI**: The script will ask you to choose where to install InvokeAI. Select a
@@ -167,6 +167,11 @@ experimental versions later.
    `/home/YourName/invokeai` on Linux systems, and `/Users/YourName/invokeai`
    on Macintoshes, where "YourName" is your login name.

+   - If you have previously installed InvokeAI, you will be asked to
+     confirm whether you want to reinstall into this directory. You
+     may choose to reinstall, in which case your version will be upgraded,
+     or choose a different directory.
+
    - The script uses tab autocompletion to suggest directory path completions.
      Type part of the path (e.g. "C:\Users") and press ++tab++ repeatedly
      to suggest completions.
@@ -181,11 +186,6 @@ experimental versions later.
    are unsure what GPU you are using, you can ask the installer to
    guess.

-   <figure markdown>
-   ![choose-gpu-screenshot](../assets/installer-walkthrough/choose-gpu.png)
-   </figure>
-
-
9. **Watch it go!**: Sit back and let the install script work. It will install the third-party
   libraries needed by InvokeAI and the application itself.

@@ -197,25 +197,138 @@ experimental versions later.
    minutes and nothing is happening, you can interrupt the script with ^C. You
    may restart it and it will pick up where it left off.

-10. **Post-install Configuration**: After installation completes, the installer will launch the
-    configuration script, which will guide you through the first-time
-    process of selecting one or more Stable Diffusion model weights
-    files, downloading and configuring them. We provide a list of
-    popular models that InvokeAI performs well with. However, you can
-    add more weight files later on using the command-line client or
-    the Web UI. See [Installing Models](050_INSTALLING_MODELS.md) for
-    details.
-
    <figure markdown>
-   ![downloading-models-screenshot](../assets/installer-walkthrough/downloading-models.png)
+   ![initial-settings-screenshot](../assets/installer-walkthrough/settings-form.png)
    </figure>

-    If you have already downloaded the weights file(s) for another Stable
-    Diffusion distribution, you may skip this step (by selecting "skip" when
-    prompted) and configure InvokeAI to use the previously-downloaded files. The
-    process for this is described in [Installing Models](050_INSTALLING_MODELS.md).
-
-11. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
+10. **Post-install Configuration**: After installation completes, the
+    installer will launch the configuration form, which will guide you
+    through the first-time process of adjusting some of InvokeAI's
+    startup settings. To move around this form use ctrl-N for
+    <N>ext and ctrl-P for <P>revious, or use <tab>
+    and shift-<tab> to move forward and back. Once you are in a
+    multi-checkbox field use the up and down cursor keys to select the
+    item you want, and <space> to toggle it on and off. Within
+    a directory field, pressing <tab> will provide autocomplete
+    options.
+
+    Generally the defaults are fine, and you can come back to this screen at
+    any time to tweak your system. Here are the options you can adjust:
+
+    - ***Output directory for images***
+      This is the path to a directory in which InvokeAI will store all its
+      generated images.
+
+    - ***NSFW checker***
+      If checked, InvokeAI will test images for potential sexual content
+      and blur them out if found.
+
+    - ***HuggingFace Access Token***
+      InvokeAI has the ability to download embedded styles and subjects
+      from the HuggingFace Concept Library on-demand. However, some of
+      the concept library files are password protected. To make download
+      smoother, you can set up an account at huggingface.co, obtain an
+      access token, and paste it into this field. Note that you paste
+      into this screen using ctrl-shift-V.
+
+    - ***Free GPU memory after each generation***
+      This is useful for low-memory machines and helps minimize the
+      amount of GPU VRAM used by InvokeAI.
+
+    - ***Enable xformers support if available***
+      If the xformers library was successfully installed, this will activate
+      it to reduce memory consumption and increase rendering speed noticeably.
+      Note that xformers has the side effect of generating slightly different
+      images even when presented with the same seed and other settings.
+
+    - ***Force CPU to be used on GPU systems***
+      This will use the (slow) CPU rather than the accelerated GPU. This
+      can be used to generate images on systems that don't have a compatible
+      GPU.
+
+    - ***Precision***
+      This controls whether to use float32 or float16 arithmetic.
+      float16 uses less memory but is also slightly less accurate.
+      Ordinarily the right arithmetic is picked automatically ("auto"),
+      but you may have to use float32 to get images on certain systems
+      and graphics cards. The "autocast" option is deprecated and
+      shouldn't be used unless you are asked to by a member of the team.
+
+    - ***Number of models to cache in CPU memory***
+      This allows you to keep models in memory and switch rapidly among
+      them rather than having them load from disk each time. This slider
+      controls how many models to keep loaded at once. Each
+      model will use 2-4 GB of RAM, so use this cautiously.
+
+    - ***Directory containing embedding/textual inversion files***
+      This is the directory in which you can place custom embedding
+      files (.pt or .bin). During startup, this directory will be
+      scanned and InvokeAI will print out the text terms that
+      are available to trigger the embeddings.
+
+    At the bottom of the screen you will see a checkbox for accepting
+    the CreativeML Responsible AI License. You need to accept the license
+    in order to download Stable Diffusion models from the next screen.
+
+    _You can come back to the startup options form_ as many times as you like.
+    From the `invoke.sh` or `invoke.bat` launcher, select option (6) to relaunch
+    this script. On the command line, it is named `invokeai-configure`.
+
+11. **Downloading Models**: After you press `[NEXT]` on the screen, you will be taken
+    to another screen that prompts you to download a series of starter models. The ones
+    we recommend are preselected for you, but you are encouraged to use the checkboxes to
+    pick and choose.
+    You will probably wish to download `autoencoder-840000` for use with models that
+    were trained with an older version of the Stability VAE.
+
+    <figure markdown>
+    ![select-models-screenshot](../assets/installer-walkthrough/installing-models.png)
+    </figure>
+
+    Below the preselected list of starter models is a large text field which you can use
+    to specify a series of models to import. You can specify models in a variety of formats,
+    each separated by a space or newline. The formats accepted are:
+
+    - The path to a .ckpt or .safetensors file. On most systems, you can drag a file from
+      the file browser to the textfield to automatically paste the path. Be sure to remove
+      extraneous quotation marks and other things that come along for the ride.
+
+    - The path to a directory containing a combination of `.ckpt` and `.safetensors` files.
+      The directory will be scanned from top to bottom (including subfolders) and any
+      file that can be imported will be.
+
+    - A URL pointing to a `.ckpt` or `.safetensors` file. You can cut
+      and paste directly from a web page, or simply drag the link from the web page
+      or navigation bar. (You can also use ctrl-shift-V to paste into this field.)
+      The file will be downloaded and installed.
+
+    - The HuggingFace repository ID (repo_id) for a `diffusers` model. These IDs have
+      the format _author_name/model_name_, as in `andite/anything-v4.0`.
+
+    - The path to a local directory containing a `diffusers`
+      model. These directories always have the file `model_index.json`
+      at their top level.
+
+    _Select a directory for models to import_: You may select a local
+    directory for autoimporting at startup time. If you select this
+    option, the directory you choose will be scanned for new
+    .ckpt/.safetensors files each time InvokeAI starts up, and any new
+    files will be automatically imported and made available for your
+    use.
+
+    _Convert imported models into diffusers_: When legacy checkpoint
+    files are imported, you may select to use them unmodified (the
+    default) or to convert them into `diffusers` models. The latter
+    load much faster and have slightly better rendering performance,
+    but not all checkpoint files can be converted. Note that Stable Diffusion
+    Version 2.X files are **only** supported in `diffusers` format and will
+    be converted regardless.
+
+    _You can come back to the model install form_ as many times as you like.
+    From the `invoke.sh` or `invoke.bat` launcher, select option (5) to relaunch
+    this script. On the command line, it is named `invokeai-model-install`.
+
+12. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
    for the directory `invokeai` installed in the location you chose at the
    beginning of the install session. Look for a shell script named `invoke.sh`
    (Linux/Mac) or `invoke.bat` (Windows). Launch the script by double-clicking
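The five specifier formats accepted by the model-import text field (step 11 above) can be told apart mechanically. A minimal sketch, not InvokeAI's actual importer; `classify_model_spec` is a hypothetical helper:

```python
# A hedged sketch applying the five rules listed in step 11 above.
from pathlib import Path

def classify_model_spec(spec: str) -> str:
    if spec.startswith(("http://", "https://")):
        return "URL to download"                 # .ckpt/.safetensors URL
    p = Path(spec)
    if p.is_file() and p.suffix in {".ckpt", ".safetensors"}:
        return "checkpoint file"
    if p.is_dir() and (p / "model_index.json").exists():
        return "local diffusers model"           # model_index.json at top level
    if p.is_dir():
        return "folder to scan for checkpoints"  # scanned including subfolders
    return "HuggingFace repo_id"                 # e.g. andite/anything-v4.0

print(classify_model_spec("andite/anything-v4.0"))
```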
@@ -348,25 +461,11 @@ version (recommended), follow these steps:
1. Start the `invoke.sh`/`invoke.bat` launch script from within the
   `invokeai` root directory.

-2. Choose menu item (6) "Developer's Console". This will launch a new
-   command line.
-
-3. Type the following command:
-
-   ```bash
-   pip install InvokeAI --upgrade
-   ```
-4. Watch the installation run. Once it is complete, you may exit the
-   command line by typing `exit`, and then start InvokeAI from the
-   launch script as per usual.
-
-
-Alternatively, if you wish to get the most recent unreleased
-development version, perform the same steps to enter the developer's
-console, and then type:
-
-```bash
-pip install https://github.com/invoke-ai/InvokeAI/archive/refs/heads/main.zip
-```
-
+2. Choose menu item (10) "Update InvokeAI".
+
+3. This will launch a menu that gives you the option of:
+
+   1. Updating to the latest official release;
+   2. Updating to the bleeding-edge development version; or
+   3. Manually entering the tag or branch name of a version of
+      InvokeAI you wish to try out.
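For reference, the new "Update InvokeAI" menu item boils down to pip-installing a GitHub archive for the chosen tag or branch; see the new `ldm/invoke/config/invokeai_update.py` later in this diff. A hedged sketch, with `update_to` as a hypothetical name:

```python
# A sketch of the update mechanism, mirroring the pip command used by
# invokeai_update.py; update_to is a hypothetical helper.
import subprocess

def update_to(tag_or_branch: str) -> None:
    url = f"https://github.com/invoke-ai/InvokeAI/archive/{tag_or_branch}.zip"
    subprocess.run(["pip", "install", url, "--use-pep517"], check=True)

update_to("main")  # option 2: the bleeding-edge development version
```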
@@ -15,8 +15,9 @@ echo 5. download and install models
echo 6. change InvokeAI startup options
echo 7. re-run the configure script to fix a broken install
echo 8. open the developer console
-echo 9. command-line help
-set /P restore="Please enter 1-9: [2] "
+echo 9. update InvokeAI
+echo 10. command-line help
+set /P restore="Please enter 1-10: [2] "
if not defined restore set restore=2
IF /I "%restore%" == "1" (
    echo Starting the InvokeAI command-line..
@@ -51,7 +52,10 @@ IF /I "%restore%" == "1" (
    echo *************************
    echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
    call cmd /k
-) ELSE IF /I "%restore%" == "8" (
+) ELSE IF /I "%restore%" == "9" (
+    echo Running invokeai-update...
+    python .venv\Scripts\invokeai-update.exe %*
+) ELSE IF /I "%restore%" == "10" (
    echo Displaying command line help...
    python .venv\Scripts\invokeai.exe --help %*
    pause
@@ -34,9 +34,10 @@ if [ "$0" != "bash" ]; then
    echo "6. change InvokeAI startup options"
    echo "7. re-run the configure script to fix a broken install"
    echo "8. open the developer console"
-   echo "9. command-line help "
+   echo "9. update InvokeAI"
+   echo "10. command-line help "
    echo ""
-   read -p "Please enter 1-9: [2] " yn
+   read -p "Please enter 1-10: [2] " yn
    choice=${yn:='2'}
    case $choice in
        1)
@@ -65,11 +66,15 @@ if [ "$0" != "bash" ]; then
            exec invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only
            ;;
        8)
            echo "Developer Console:"
            file_name=$(basename "${BASH_SOURCE[0]}")
            bash --init-file "$file_name"
            ;;
        9)
+           echo "Update:"
+           exec invokeai-update
+           ;;
+       10)
            exec invokeai --help
            ;;
        *)
@@ -970,6 +970,7 @@ class Generate:

        seed_everything(random.randrange(0, np.iinfo(np.uint32).max))
        if self.embedding_path is not None:
+           print(f'>> Loading embeddings from {self.embedding_path}')
            for root, _, files in os.walk(self.embedding_path):
                for name in files:
                    ti_path = os.path.join(root, name)
@@ -977,7 +978,7 @@ class Generate:
                        ti_path, defer_injecting_tokens=True
                    )
                    print(
-                       f'>> Textual inversions available: {", ".join(self.model.textual_inversion_manager.get_all_trigger_strings())}'
+                       f'>> Textual inversion triggers: {", ".join(self.model.textual_inversion_manager.get_all_trigger_strings())}'
                    )

        self.model_name = model_name
@@ -62,6 +62,7 @@ def main():
    Globals.always_use_cpu = args.always_use_cpu
    Globals.internet_available = args.internet_available and check_internet()
    Globals.disable_xformers = not args.xformers
+   Globals.sequential_guidance = args.sequential_guidance
    Globals.ckpt_convert = args.ckpt_convert

    print(f">> Internet connectivity is {Globals.internet_available}")
@@ -672,7 +673,6 @@ def import_model(model_path: str, gen, opt, completer, convert=False) -> str:
    completer.update_models(gen.model_manager.list_models())
    print(f">> {model_name} successfully installed")

-
def _verify_load(model_name: str, gen) -> bool:
    print(">> Verifying that new model loads...")
    current_model = gen.model_name
@@ -705,7 +705,6 @@ def _get_model_name_and_desc(
    )
    return model_name, model_description

-
def convert_model(model_name_or_path: Union[Path, str], gen, opt, completer) -> str:
    model_name_or_path = model_name_or_path.replace("\\", "/")  # windows
    manager = gen.model_manager
@@ -1 +1 @@
-__version__='2.3.0'
+__version__='2.3.1+a0'
@@ -91,15 +91,15 @@ import pydoc
import re
import shlex
import sys
-import ldm.invoke
-import ldm.invoke.pngwriter
-
-from ldm.invoke.globals import Globals
-from ldm.invoke.prompt_parser import split_weighted_subprompts
from argparse import Namespace
from pathlib import Path
from typing import List

+import ldm.invoke
+import ldm.invoke.pngwriter
+from ldm.invoke.globals import Globals
+from ldm.invoke.prompt_parser import split_weighted_subprompts
+
APP_ID = ldm.invoke.__app_id__
APP_NAME = ldm.invoke.__app_name__
APP_VERSION = ldm.invoke.__version__
|
|||||||
action='store_true',
|
action='store_true',
|
||||||
help='Force free gpu memory before final decoding',
|
help='Force free gpu memory before final decoding',
|
||||||
)
|
)
|
||||||
|
model_group.add_argument(
|
||||||
|
'--sequential_guidance',
|
||||||
|
dest='sequential_guidance',
|
||||||
|
action='store_true',
|
||||||
|
help="Calculate guidance in serial instead of in parallel, lowering memory requirement "
|
||||||
|
"at the expense of speed",
|
||||||
|
)
|
||||||
model_group.add_argument(
|
model_group.add_argument(
|
||||||
'--xformers',
|
'--xformers',
|
||||||
action=argparse.BooleanOptionalAction,
|
action=argparse.BooleanOptionalAction,
|
||||||
|
@@ -743,11 +743,10 @@ def default_embedding_dir() -> Path:

# -------------------------------------
def write_default_options(program_opts: Namespace, initfile: Path):
-   opt = default_startup_options()
+   opt = default_startup_options(initfile)
    opt.hf_token = HfFolder.get_token()
    write_opts(opt, initfile)

-
# -------------------------------------
def main():
    parser = argparse.ArgumentParser(description="InvokeAI model downloader")
ldm/invoke/config/invokeai_update.py (new file, 102 lines)
@@ -0,0 +1,102 @@
+'''
+Minimalist updater script. Prompts user for the tag or branch to update to and runs
+pip install <path_to_git_source>.
+'''
+
+import platform
+import requests
+import subprocess
+from rich import box, print
+from rich.console import Console, group
+from rich.panel import Panel
+from rich.prompt import Prompt
+from rich.style import Style
+from rich.text import Text
+from rich.live import Live
+from rich.table import Table
+
+from ldm.invoke import __version__
+
+INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive"
+INVOKE_AI_REL="https://api.github.com/repos/invoke-ai/InvokeAI/releases"
+
+OS = platform.uname().system
+ARCH = platform.uname().machine
+
+ORANGE_ON_DARK_GREY = Style(bgcolor="grey23", color="orange1")
+
+if OS == "Windows":
+    # Windows terminals look better without a background colour
+    console = Console(style=Style(color="grey74"))
+else:
+    console = Console(style=Style(color="grey74", bgcolor="grey23"))
+
+def get_versions()->dict:
+    return requests.get(url=INVOKE_AI_REL).json()
+
+def welcome(versions: dict):
+
+    @group()
+    def text():
+        yield f'InvokeAI Version: [bold yellow]{__version__}'
+        yield ''
+        yield 'This script will update InvokeAI to the latest release, or to a development version of your choice.'
+        yield ''
+        yield '[bold yellow]Options:'
+        yield f'''[1] Update to the latest official release ([italic]{versions[0]['tag_name']}[/italic])
+[2] Update to the bleeding-edge development version ([italic]main[/italic])
+[3] Manually enter the tag or branch name you wish to update'''
+
+    console.rule()
+    console.print(
+        Panel(
+            title="[bold wheat1]InvokeAI Updater",
+            renderable=text(),
+            box=box.DOUBLE,
+            expand=True,
+            padding=(1, 2),
+            style=ORANGE_ON_DARK_GREY,
+            subtitle=f"[bold grey39]{OS}-{ARCH}",
+        )
+    )
+    # console.rule is used instead of console.line to maintain dark background
+    # on terminals where light background is the default
+    console.rule(characters=" ")
+
+def main():
+    versions = get_versions()
+    welcome(versions)
+
+    tag = None
+    choice = Prompt.ask(Text.from_markup(('[grey74 on grey23]Choice:')),choices=['1','2','3'],default='1')
+
+    if choice=='1':
+        tag = versions[0]['tag_name']
+    elif choice=='2':
+        tag = 'main'
+    elif choice=='3':
+        tag = Prompt.ask('[grey74 on grey23]Enter an InvokeAI tag or branch name')
+
+    console.print(Panel(f':crossed_fingers: Upgrading to [yellow]{tag}[/yellow]', box=box.MINIMAL, style=ORANGE_ON_DARK_GREY))
+
+    cmd = f'pip install {INVOKE_AI_SRC}/{tag}.zip --use-pep517'
+
+    progress = Table.grid(expand=True)
+    progress_panel = Panel(progress, box=box.MINIMAL, style=ORANGE_ON_DARK_GREY)
+
+    with subprocess.Popen(['bash', '-c', cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc:
+        progress.add_column()
+        with Live(progress_panel, console=console, vertical_overflow='visible'):
+            while proc.poll() is None:
+                for l in iter(proc.stdout.readline, b''):
+                    progress.add_row(l.decode().strip(), style=ORANGE_ON_DARK_GREY)
+    if proc.returncode == 0:
+        console.rule(f':heavy_check_mark: Upgrade successful')
+    else:
+        console.rule(f':exclamation: [bold red]Upgrade failed[/red bold]')
+
+if __name__ == "__main__":
+    try:
+        main()
+    except KeyboardInterrupt:
+        pass
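The script above is exposed as the `invokeai-update` console script via the pyproject.toml hunk at the end of this diff; it can also be imported. A small usage sketch under the module layout introduced above:

```python
# get_versions() returns the GitHub releases JSON for invoke-ai/InvokeAI,
# so the first element's tag_name is the latest official release.
from ldm.invoke.config.invokeai_update import get_versions

latest = get_versions()[0]['tag_name']
print(f"Latest InvokeAI release: {latest}")
```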
@@ -13,8 +13,8 @@ the attributes:

import os
import os.path as osp
-from pathlib import Path
from argparse import Namespace
+from pathlib import Path
from typing import Union

Globals = Namespace()
|
|||||||
# Whether to disable xformers
|
# Whether to disable xformers
|
||||||
Globals.disable_xformers = False
|
Globals.disable_xformers = False
|
||||||
|
|
||||||
|
# Low-memory tradeoff for guidance calculations.
|
||||||
|
Globals.sequential_guidance = False
|
||||||
|
|
||||||
# whether we are forcing full precision
|
# whether we are forcing full precision
|
||||||
Globals.full_precision = False
|
Globals.full_precision = False
|
||||||
|
|
||||||
|
@@ -441,6 +441,7 @@ class TextualInversionDataset(Dataset):
        self.image_paths = [
            os.path.join(self.data_root, file_path)
            for file_path in os.listdir(self.data_root)
+           if os.path.isfile(file_path) and file_path.endswith(('.png','.PNG','.jpg','.JPG','.jpeg','.JPEG','.gif','.GIF'))
        ]

        self.num_images = len(self.image_paths)
@@ -1,4 +1,3 @@
-import math
from contextlib import contextmanager
from dataclasses import dataclass
from math import ceil
@@ -6,13 +5,20 @@ from typing import Callable, Optional, Union, Any, Dict

import numpy as np
import torch
+
from diffusers.models.cross_attention import AttnProcessor
+from typing_extensions import TypeAlias
+
+from ldm.invoke.globals import Globals
from ldm.models.diffusion.cross_attention_control import Arguments, \
    restore_default_cross_attention, override_cross_attention, Context, get_cross_attention_modules, \
    CrossAttentionType, SwapCrossAttnContext
from ldm.models.diffusion.cross_attention_map_saving import AttentionMapSaver

+ModelForwardCallback: TypeAlias = Union[
+    # x, t, conditioning, Optional[cross-attention kwargs]
+    Callable[[torch.Tensor, torch.Tensor, torch.Tensor, Optional[dict[str, Any]]], torch.Tensor],
+    Callable[[torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor]
+]
+
@dataclass(frozen=True)
class PostprocessingSettings:
|
|||||||
* Hybrid conditioning (used for inpainting)
|
* Hybrid conditioning (used for inpainting)
|
||||||
'''
|
'''
|
||||||
debug_thresholding = False
|
debug_thresholding = False
|
||||||
last_percent_through = 0.0
|
sequential_guidance = False
|
||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class ExtraConditioningInfo:
|
class ExtraConditioningInfo:
|
||||||
@ -45,8 +51,7 @@ class InvokeAIDiffuserComponent:
|
|||||||
return self.cross_attention_control_args is not None
|
return self.cross_attention_control_args is not None
|
||||||
|
|
||||||
|
|
||||||
def __init__(self, model, model_forward_callback:
|
def __init__(self, model, model_forward_callback: ModelForwardCallback,
|
||||||
Callable[[torch.Tensor, torch.Tensor, torch.Tensor, Optional[dict[str,Any]]], torch.Tensor],
|
|
||||||
is_running_diffusers: bool=False,
|
is_running_diffusers: bool=False,
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
@ -58,7 +63,7 @@ class InvokeAIDiffuserComponent:
|
|||||||
self.is_running_diffusers = is_running_diffusers
|
self.is_running_diffusers = is_running_diffusers
|
||||||
self.model_forward_callback = model_forward_callback
|
self.model_forward_callback = model_forward_callback
|
||||||
self.cross_attention_control_context = None
|
self.cross_attention_control_context = None
|
||||||
self.last_percent_through = 0.0
|
self.sequential_guidance = Globals.sequential_guidance
|
||||||
|
|
||||||
@contextmanager
|
@contextmanager
|
||||||
def custom_attention_context(self,
|
def custom_attention_context(self,
|
||||||
@ -146,11 +151,20 @@ class InvokeAIDiffuserComponent:
|
|||||||
wants_hybrid_conditioning = isinstance(conditioning, dict)
|
wants_hybrid_conditioning = isinstance(conditioning, dict)
|
||||||
|
|
||||||
if wants_hybrid_conditioning:
|
if wants_hybrid_conditioning:
|
||||||
unconditioned_next_x, conditioned_next_x = self.apply_hybrid_conditioning(x, sigma, unconditioning, conditioning)
|
unconditioned_next_x, conditioned_next_x = self._apply_hybrid_conditioning(x, sigma, unconditioning,
|
||||||
|
conditioning)
|
||||||
elif wants_cross_attention_control:
|
elif wants_cross_attention_control:
|
||||||
unconditioned_next_x, conditioned_next_x = self.apply_cross_attention_controlled_conditioning(x, sigma, unconditioning, conditioning, cross_attention_control_types_to_do)
|
unconditioned_next_x, conditioned_next_x = self._apply_cross_attention_controlled_conditioning(x, sigma,
|
||||||
|
unconditioning,
|
||||||
|
conditioning,
|
||||||
|
cross_attention_control_types_to_do)
|
||||||
|
elif self.sequential_guidance:
|
||||||
|
unconditioned_next_x, conditioned_next_x = self._apply_standard_conditioning_sequentially(
|
||||||
|
x, sigma, unconditioning, conditioning)
|
||||||
|
|
||||||
else:
|
else:
|
||||||
unconditioned_next_x, conditioned_next_x = self.apply_standard_conditioning(x, sigma, unconditioning, conditioning)
|
unconditioned_next_x, conditioned_next_x = self._apply_standard_conditioning(
|
||||||
|
x, sigma, unconditioning, conditioning)
|
||||||
|
|
||||||
combined_next_x = self._combine(unconditioned_next_x, conditioned_next_x, unconditional_guidance_scale)
|
combined_next_x = self._combine(unconditioned_next_x, conditioned_next_x, unconditional_guidance_scale)
|
||||||
|
|
||||||
@ -185,7 +199,7 @@ class InvokeAIDiffuserComponent:
|
|||||||
|
|
||||||
# methods below are called from do_diffusion_step and should be considered private to this class.
|
# methods below are called from do_diffusion_step and should be considered private to this class.
|
||||||
|
|
||||||
def apply_standard_conditioning(self, x, sigma, unconditioning, conditioning):
|
def _apply_standard_conditioning(self, x, sigma, unconditioning, conditioning):
|
||||||
# fast batched path
|
# fast batched path
|
||||||
x_twice = torch.cat([x] * 2)
|
x_twice = torch.cat([x] * 2)
|
||||||
sigma_twice = torch.cat([sigma] * 2)
|
sigma_twice = torch.cat([sigma] * 2)
|
||||||
@ -198,7 +212,17 @@ class InvokeAIDiffuserComponent:
|
|||||||
return unconditioned_next_x, conditioned_next_x
|
return unconditioned_next_x, conditioned_next_x
|
||||||
|
|
||||||
|
|
||||||
def apply_hybrid_conditioning(self, x, sigma, unconditioning, conditioning):
|
def _apply_standard_conditioning_sequentially(self, x: torch.Tensor, sigma, unconditioning: torch.Tensor, conditioning: torch.Tensor):
|
||||||
|
# low-memory sequential path
|
||||||
|
unconditioned_next_x = self.model_forward_callback(x, sigma, unconditioning)
|
||||||
|
conditioned_next_x = self.model_forward_callback(x, sigma, conditioning)
|
||||||
|
if conditioned_next_x.device.type == 'mps':
|
||||||
|
# prevent a result filled with zeros. seems to be a torch bug.
|
||||||
|
conditioned_next_x = conditioned_next_x.clone()
|
||||||
|
return unconditioned_next_x, conditioned_next_x
|
||||||
|
|
||||||
|
|
||||||
|
def _apply_hybrid_conditioning(self, x, sigma, unconditioning, conditioning):
|
||||||
assert isinstance(conditioning, dict)
|
assert isinstance(conditioning, dict)
|
||||||
assert isinstance(unconditioning, dict)
|
assert isinstance(unconditioning, dict)
|
||||||
x_twice = torch.cat([x] * 2)
|
x_twice = torch.cat([x] * 2)
|
||||||
@ -216,18 +240,21 @@ class InvokeAIDiffuserComponent:
|
|||||||
return unconditioned_next_x, conditioned_next_x
|
return unconditioned_next_x, conditioned_next_x
|
||||||
|
|
||||||
|
|
||||||
def apply_cross_attention_controlled_conditioning(self,
|
def _apply_cross_attention_controlled_conditioning(self,
|
||||||
x: torch.Tensor,
|
x: torch.Tensor,
|
||||||
sigma,
|
sigma,
|
||||||
unconditioning,
|
unconditioning,
|
||||||
conditioning,
|
conditioning,
|
||||||
cross_attention_control_types_to_do):
|
cross_attention_control_types_to_do):
|
||||||
if self.is_running_diffusers:
|
if self.is_running_diffusers:
|
||||||
return self.apply_cross_attention_controlled_conditioning__diffusers(x, sigma, unconditioning, conditioning, cross_attention_control_types_to_do)
|
return self._apply_cross_attention_controlled_conditioning__diffusers(x, sigma, unconditioning,
|
||||||
|
conditioning,
|
||||||
|
cross_attention_control_types_to_do)
|
||||||
else:
|
else:
|
||||||
return self.apply_cross_attention_controlled_conditioning__compvis(x, sigma, unconditioning, conditioning, cross_attention_control_types_to_do)
|
return self._apply_cross_attention_controlled_conditioning__compvis(x, sigma, unconditioning, conditioning,
|
||||||
|
cross_attention_control_types_to_do)
|
||||||
|
|
||||||
def apply_cross_attention_controlled_conditioning__diffusers(self,
|
def _apply_cross_attention_controlled_conditioning__diffusers(self,
|
||||||
x: torch.Tensor,
|
x: torch.Tensor,
|
||||||
sigma,
|
sigma,
|
||||||
unconditioning,
|
unconditioning,
|
||||||
@ -250,7 +277,7 @@ class InvokeAIDiffuserComponent:
|
|||||||
return unconditioned_next_x, conditioned_next_x
|
return unconditioned_next_x, conditioned_next_x
|
||||||
|
|
||||||
|
|
||||||
def apply_cross_attention_controlled_conditioning__compvis(self, x:torch.Tensor, sigma, unconditioning, conditioning, cross_attention_control_types_to_do):
|
def _apply_cross_attention_controlled_conditioning__compvis(self, x:torch.Tensor, sigma, unconditioning, conditioning, cross_attention_control_types_to_do):
|
||||||
# print('pct', percent_through, ': doing cross attention control on', cross_attention_control_types_to_do)
|
# print('pct', percent_through, ': doing cross attention control on', cross_attention_control_types_to_do)
|
||||||
# slower non-batched path (20% slower on mac MPS)
|
# slower non-batched path (20% slower on mac MPS)
|
||||||
# We are only interested in using attention maps for conditioned_next_x, but batching them with generation of
|
# We are only interested in using attention maps for conditioned_next_x, but batching them with generation of
|
||||||
|
@@ -61,8 +61,13 @@ class TextualInversionManager:

    def load_textual_inversion(self, ckpt_path: Union[str,Path], defer_injecting_tokens: bool = False):
        ckpt_path = Path(ckpt_path)
+
+       if not ckpt_path.is_file():
+           return
+
        if str(ckpt_path).endswith(".DS_Store"):
            return
+
        try:
            scan_result = scan_file_path(str(ckpt_path))
            if scan_result.infected_files == 1:
|
|||||||
!= embedding_info['token_dim']
|
!= embedding_info['token_dim']
|
||||||
):
|
):
|
||||||
print(
|
print(
|
||||||
f"** Notice: {ckpt_path.parents[0].name}/{ckpt_path.name} was trained on a model with a different token dimension. It can't be used with this model."
|
f"** Notice: {ckpt_path.parents[0].name}/{ckpt_path.name} was trained on a model with an incompatible token dimension: {self.text_encoder.get_input_embeddings().weight.data[0].shape[0]} vs {embedding_info['token_dim']}."
|
||||||
)
|
)
|
||||||
return
|
return
|
||||||
|
|
||||||
@ -333,7 +338,6 @@ class TextualInversionManager:
|
|||||||
# .pt files found at https://cyberes.github.io/stable-diffusion-textual-inversion-models/
|
# .pt files found at https://cyberes.github.io/stable-diffusion-textual-inversion-models/
|
||||||
# They are actually .bin files
|
# They are actually .bin files
|
||||||
elif len(embedding_ckpt.keys()) == 1:
|
elif len(embedding_ckpt.keys()) == 1:
|
||||||
print(">> Detected .bin file masquerading as .pt file")
|
|
||||||
embedding_info = self._parse_embedding_bin(embedding_file)
|
embedding_info = self._parse_embedding_bin(embedding_file)
|
||||||
|
|
||||||
else:
|
else:
|
||||||
@ -372,9 +376,6 @@ class TextualInversionManager:
|
|||||||
if isinstance(
|
if isinstance(
|
||||||
list(embedding_ckpt["string_to_token"].values())[0], torch.Tensor
|
list(embedding_ckpt["string_to_token"].values())[0], torch.Tensor
|
||||||
):
|
):
|
||||||
print(
|
|
||||||
">> Detected .pt file variant 1"
|
|
||||||
) # example at https://github.com/invoke-ai/InvokeAI/issues/1829
|
|
||||||
for token in list(embedding_ckpt["string_to_token"].keys()):
|
for token in list(embedding_ckpt["string_to_token"].keys()):
|
||||||
embedding_info["name"] = (
|
embedding_info["name"] = (
|
||||||
token
|
token
|
||||||
@ -387,7 +388,7 @@ class TextualInversionManager:
|
|||||||
embedding_info["num_vectors_per_token"] = embedding_info[
|
embedding_info["num_vectors_per_token"] = embedding_info[
|
||||||
"embedding"
|
"embedding"
|
||||||
].shape[0]
|
].shape[0]
|
||||||
embedding_info["token_dim"] = embedding_info["embedding"].size()[0]
|
embedding_info["token_dim"] = embedding_info["embedding"].size()[1]
|
||||||
else:
|
else:
|
||||||
print(">> Invalid embedding format")
|
print(">> Invalid embedding format")
|
||||||
embedding_info = None
|
embedding_info = None
|
||||||
|
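The one-line fix above matters because a textual-inversion embedding tensor is laid out as `[num_vectors_per_token, token_dim]`, as the surrounding code shows: `num_vectors_per_token` is read from `.shape[0]`, so `token_dim` must come from dimension 1. An illustrative check (the 4x768 shape is an arbitrary example):

```python
# Illustrative only: a multi-vector embedding with 4 vectors of dimension 768.
import torch

embedding = torch.zeros(4, 768)
num_vectors_per_token = embedding.shape[0]  # 4, what size()[0] returns
token_dim = embedding.shape[1]              # 768, the actual token dimension
```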
@@ -109,6 +109,7 @@ dependencies = [
"invokeai-merge" = "ldm.invoke.merge_diffusers:main" # note name munging
"invokeai-ti" = "ldm.invoke.training.textual_inversion:main"
"invokeai-model-install" = "ldm.invoke.config.model_install:main"
+"invokeai-update" = "ldm.invoke.config.invokeai_update:main"

[project.urls]
"Homepage" = "https://invoke-ai.github.io/InvokeAI/"
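The new entry point means that, after installation, `invokeai-update` on the command line resolves to `main()` in the module added earlier in this diff. A minimal sketch of what the generated console script does:

```python
# Equivalent of the generated "invokeai-update" console script:
# import the mapped module:function and call it.
import sys
from ldm.invoke.config.invokeai_update import main

if __name__ == "__main__":
    sys.exit(main())
```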