Merge remote-tracking branch 'origin/main' into dev/diffusers

# Conflicts:
#	scripts/configure_invokeai.py
Author: Kevin Turner
Date: 2022-12-03 20:00:39 -08:00
Commit: e0495a7440
33 changed files with 934 additions and 840 deletions


@@ -38,18 +38,32 @@ This is a fork of
 [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
 the open source text-to-image generator. It provides a streamlined
 process with various new features and options to aid the image
-generation process. It runs on Windows, Mac and Linux machines, with
+generation process. It runs on Windows, macOS and Linux machines, with
 GPU cards with as little as 4 GB of RAM. It provides both a polished
 Web interface (see below), and an easy-to-use command-line interface.

-**Quick links**: [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
+**Quick links**: [[How to Install](#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
+
+_Note: InvokeAI is rapidly evolving. Please use the
+[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
+requests. Be sure to use the provided templates. They will help us diagnose issues faster._
+
+## Installation Quick-Start
+
+1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/tag/v2.2.3)
+2. Download the .zip file for your OS (Windows/macOS/Linux).
+3. Unzip the file.
+4. If you are on Windows, double-click on the `install.bat` script. On macOS, open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press return. On Linux, run `install.sh`.
+5. Wait a while, until it is done.
+6. The folder where you ran the installer from will now be filled with lots of files. If you are on Windows, double-click on the `invoke.bat` file. On macOS, open a Terminal window, drag `invoke.sh` from the folder into the Terminal, and press return. On Linux, run `invoke.sh`
+7. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.
+8. Type `banana sushi` in the box on the top left and click `Invoke`:

 <div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>

-_Note: This fork is rapidly evolving. Please use the
-[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
-requests. Be sure to use the provided templates. They will help aid diagnose issues faster._
+For full installation and upgrade instructions, please see:
+[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)

 ## Table of Contents
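The quick-start steps above leave the OS-to-asset mapping implicit. As a hedged sketch (the asset names follow the `InvokeAI-binary-$VERSION-<platform>.zip` pattern used by this commit's installer zip script; the exact names on the release page may differ, and `os` stands in for `$(uname -s)` so the example is deterministic):

```shell
# Illustrative only: map an OS name to the matching release zip.
os="Linux"
case "$os" in
    Linux)  asset="InvokeAI-binary-2.2.3-linux.zip" ;;
    Darwin) asset="InvokeAI-binary-2.2.3-mac.zip" ;;
    *)      asset="InvokeAI-binary-2.2.3-windows.zip" ;;
esac
echo "$asset"
```

On the pretend `Linux` value this prints `InvokeAI-binary-2.2.3-linux.zip`.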
@@ -69,10 +83,13 @@ This fork is supported across Linux, Windows and Macintosh. Linux
 users can use either an Nvidia-based card (with CUDA support) or an
 AMD card (using the ROCm driver). For full installation and upgrade
 instructions, please see:
-[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
+[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)

 ### Hardware Requirements

+InvokeAI is supported across Linux, Windows and macOS. Linux
+users can use either an Nvidia-based card (with CUDA support) or an
+AMD card (using the ROCm driver).
+
 #### System

 You wil need one of the following:
@@ -80,6 +97,10 @@ You wil need one of the following:
 - An NVIDIA-based graphics card with 4 GB or more VRAM memory.
 - An Apple computer with an M1 chip.

+We do not recommend the GTX 1650 or 1660 series video cards. They are
+unable to run in half-precision mode and do not have sufficient VRAM
+to render 512x512 images.
+
 #### Memory

 - At least 12 GB Main Memory RAM.
@@ -130,12 +151,12 @@ you can try starting `invoke.py` with the `--precision=float32` flag:
 ### Latest Changes

-- v2.0.1 (13 October 2022)
+- v2.0.1 (13 November 2022)
   - fix noisy images at high step count when using k* samplers
   - dream.py script now calls invoke.py module directly rather than
     via a new python process (which could break the environment)

-- v2.0.0 (9 October 2022)
+- v2.0.0 (9 November 2022)
   - `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains
     for backward compatibility.
@@ -144,7 +165,7 @@ you can try starting `invoke.py` with the `--precision=float32` flag:
   - img2img runs on all k* samplers
   - Support for <a href="https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#negative-and-unconditioned-prompts">negative prompts</a>
   - Support for CodeFormer face reconstruction
-  - Support for Textual Inversion on Macintoshes
+  - Support for Textual Inversion on macOS
   - Support in both WebGUI and CLI for <a href="https://invoke-ai.github.io/InvokeAI/features/POSTPROCESS/">post-processing of previously-generated images</a>
     using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas),
     and "embiggen" upscaling. See the `!fix` command.
@@ -153,7 +174,7 @@ you can try starting `invoke.py` with the `--precision=float32` flag:
     during image generation (see <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options">Thresholding and Perlin Noise Initialization</a>
   - Extensive metadata now written into PNG files, allowing reliable regeneration of images
     and tweaking of previous settings.
-  - Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
+  - Command-line completion in `invoke.py` now works on Windows, Linux and macOS platforms.
   - Improved <a href="https://invoke-ai.github.io/InvokeAI/features/CLI/">command-line completion behavior</a>.
     New commands added:
     - List command-line history with `!history`


@@ -206,9 +206,10 @@ class InvokeAIWebServer:
             FlaskUI(
                 app=self.app,
                 socketio=self.socketio,
-                server="flask_socketio",
+                start_server="flask-socketio",
                 width=1600,
                 height=1000,
+                idle_interval=10,
                 port=self.port
             ).run()
         except KeyboardInterrupt:


@@ -6,6 +6,8 @@ IFS=$'\n\t'
 echo "Be certain that you're in the 'installer' directory before continuing."
 read -p "Press any key to continue, or CTRL-C to exit..."

+VERSION='2.2.3'
+
 # make the installer zip for linux and mac
 rm -rf InvokeAI
 mkdir -p InvokeAI
@@ -13,8 +15,8 @@ cp install.sh.in InvokeAI/install.sh
 chmod a+x InvokeAI/install.sh
 cp readme.txt InvokeAI

-zip -r InvokeAI-binary-linux.zip InvokeAI
-zip -r InvokeAI-binary-mac.zip InvokeAI
+zip -r InvokeAI-binary-$VERSION-linux.zip InvokeAI
+zip -r InvokeAI-binary-$VERSION-mac.zip InvokeAI

 # make the installer zip for windows
 rm -rf InvokeAI
@@ -23,7 +25,7 @@ cp install.bat.in InvokeAI/install.bat
 cp readme.txt InvokeAI
 cp WinLongPathsEnabled.reg InvokeAI

-zip -r InvokeAI-binary-windows.zip InvokeAI
+zip -r InvokeAI-binary-$VERSION-windows.zip InvokeAI

 rm -rf InvokeAI


@@ -10,13 +10,15 @@
 @rem This enables a user to install this project without manually installing git or Python

+@rem change to the script's directory
+PUSHD "%~dp0"
+
 set "no_cache_dir=--no-cache-dir"
 if "%1" == "use-cache" (
     set "no_cache_dir="
 )

 echo ***** Installing InvokeAI.. *****
+echo "USING development BRANCH. REMEMBER TO CHANGE TO main BEFORE RELEASE"

 @rem Config
 set INSTALL_ENV_DIR=%cd%\installer_files\env
 @rem https://mamba.readthedocs.io/en/latest/installation.html


@@ -214,7 +214,7 @@ _err_exit $? _err_msg
 echo -e "\n***** Installed InvokeAI *****\n"

 cp binary_installer/invoke.sh.in ./invoke.sh
-chmod a+x ./invoke.sh
+chmod a+rx ./invoke.sh
 echo -e "\n***** Installed invoke launcher script ******\n"

 # more cleanup
@@ -229,7 +229,7 @@ deactivate
 echo -e "\n***** Finished downloading models *****\n"

 echo "All done! Run the command"
-echo " \"$scriptdir/invoke.sh\""
+echo " $scriptdir/invoke.sh"
 echo "to start InvokeAI."
 read -p "Press any key to exit..."
 exit
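The second hunk drops the escaped quotes, so the final message shows the bare path instead of a quoted one. A small sketch of the difference, with a hypothetical `scriptdir`:

```shell
scriptdir="/opt/invokeai"                    # hypothetical install location
old="$(echo " \"$scriptdir/invoke.sh\"")"    # before: literal quote characters in output
new="$(echo " $scriptdir/invoke.sh")"        # after: bare path
echo "$old"
echo "$new"
```

The old form prints ` "/opt/invokeai/invoke.sh"` (with quotes), the new form ` /opt/invokeai/invoke.sh`.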


@@ -1,5 +1,6 @@
 @echo off

+PUSHD "%~dp0"
 call .venv\Scripts\activate.bat

 echo Do you want to generate images using the
@@ -10,10 +11,10 @@ echo 3. open the developer console
 set /p choice="Please enter 1, 2 or 3: "
 if /i "%choice%" == "1" (
     echo Starting the InvokeAI command-line.
-    .venv\Scripts\python scripts\invoke.py
+    .venv\Scripts\python scripts\invoke.py %*
 ) else if /i "%choice%" == "2" (
     echo Starting the InvokeAI browser-based UI.
-    .venv\Scripts\python scripts\invoke.py --web
+    .venv\Scripts\python scripts\invoke.py --web %*
 ) else if /i "%choice%" == "3" (
     echo Developer Console
     echo Python command is:


@@ -4,6 +4,11 @@ set -eu
 . .venv/bin/activate

+# set required env var for torch on mac MPS
+if [ "$(uname -s)" == "Darwin" ]; then
+    export PYTORCH_ENABLE_MPS_FALLBACK=1
+fi
+
 echo "Do you want to generate images using the"
 echo "1. command-line"
 echo "2. browser-based UI"
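The new block sets the MPS fallback variable only when the launcher is running on macOS. A deterministic sketch of that check (`os` stands in for `$(uname -s)`; note that `==` inside `[ ]`, as in the diff, is a bashism, while `=` is the portable form):

```shell
# Sketch of the launcher's OS check, with a pretend uname value.
os="Darwin"
if [ "$os" = "Darwin" ]; then
    export PYTORCH_ENABLE_MPS_FALLBACK=1
fi
echo "PYTORCH_ENABLE_MPS_FALLBACK=${PYTORCH_ENABLE_MPS_FALLBACK:-unset}"
```

With the pretend `Darwin` value this prints `PYTORCH_ENABLE_MPS_FALLBACK=1`; on any other value it would print `unset` and torch would run without the CPU fallback for unsupported MPS ops.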
@@ -15,11 +20,11 @@ read choice
 case $choice in
     1)
     printf "\nStarting the InvokeAI command-line..\n";
-    .venv/bin/python scripts/invoke.py;
+    .venv/bin/python scripts/invoke.py $*;
     ;;
     2)
     printf "\nStarting the InvokeAI browser-based UI..\n";
-    .venv/bin/python scripts/invoke.py --web;
+    .venv/bin/python scripts/invoke.py --web $*;
     ;;
     3)
     printf "\nDeveloper Console:\n";
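Both launchers now forward their own arguments to `invoke.py` (`%*` in the batch file, `$*` here). Unquoted `$*` is fine for simple flags, but it re-splits arguments on whitespace; `"$@"` is the form that preserves each argument intact. A self-contained sketch of the difference:

```shell
# Forwarding launcher arguments: count what the callee actually receives.
count_args() { echo "$#"; }
forward_star() { count_args $*; }      # unquoted $* re-splits on whitespace
forward_at()   { count_args "$@"; }    # "$@" preserves argument boundaries
forward_star --web "two words"         # prints 3
forward_at   --web "two words"         # prints 2
```

For flag-only invocations like `--web` the two behave identically, which is presumably why `$*` was sufficient here.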

File diff suppressed because it is too large.


@@ -4,7 +4,7 @@
 #
 # pip-compile --allow-unsafe --generate-hashes --output-file=installer/py3.10-darwin-x86_64-cpu-reqs.txt installer/requirements.in
 #
---extra-index-url https://download.pytorch.org/whl/torch_stable.html
+--extra-index-url https://download.pytorch.org/whl/cu116
 --trusted-host https

 absl-py==1.3.0 \
@@ -987,7 +987,6 @@ numpy==1.23.4 \
     #   pandas
     #   pyarrow
     #   pydeck
-    #   pypatchmatch
     #   pytorch-lightning
     #   pywavelets
     #   qudida
@@ -1160,7 +1159,6 @@ pillow==9.3.0 \
     #   imageio
     #   k-diffusion
     #   matplotlib
-    #   pypatchmatch
     #   realesrgan
     #   scikit-image
     #   streamlit
@@ -1296,9 +1294,6 @@ pyparsing==3.0.9 \
     # via
     #   matplotlib
     #   packaging
-pypatchmatch @ https://github.com/invoke-ai/PyPatchMatch/archive/129863937a8ab37f6bbcec327c994c0f932abdbc.zip \
-    --hash=sha256:4ad6ec95379e7d122d494ff76633cc7cf9b71330d5efda147fceba81e3dc6cd2
-    # via -r installer/requirements.in
 pyreadline3==3.4.1 \
     --hash=sha256:6f3d1f7b8a31ba32b73917cefc1f28cc660562f39aea8646d30bd6eff21f7bae \
     --hash=sha256:b0efb6516fd4fb07b45949053826a62fa4cb353db5be2bbb4a7aa1fdd1e345fb
@@ -1831,27 +1826,27 @@ toolz==0.12.0 \
     --hash=sha256:2059bd4148deb1884bb0eb770a3cde70e7f954cfbbdc2285f1f2de01fd21eb6f \
     --hash=sha256:88c570861c440ee3f2f6037c4654613228ff40c93a6c25e0eba70d17282c6194
     # via altair
-torch==1.12.1 ; platform_system == "Darwin" \
-    --hash=sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8 \
-    --hash=sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e \
-    --hash=sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd \
-    --hash=sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2 \
-    --hash=sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541 \
-    --hash=sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52 \
-    --hash=sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e \
-    --hash=sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073 \
-    --hash=sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1 \
-    --hash=sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8 \
-    --hash=sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134 \
-    --hash=sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421 \
-    --hash=sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286 \
-    --hash=sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d \
-    --hash=sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf \
-    --hash=sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada \
-    --hash=sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2 \
-    --hash=sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d \
-    --hash=sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d \
-    --hash=sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a
+torch==1.12.0 ; platform_system == "Darwin" \
+    --hash=sha256:0399746f83b4541bcb5b219a18dbe8cade760aba1c660d2748a38c6dc338ebc7 \
+    --hash=sha256:0986685f2ec8b7c4d3593e8cfe96be85d462943f1a8f54112fc48d4d9fbbe903 \
+    --hash=sha256:13c7cca6b2ea3704d775444f02af53c5f072d145247e17b8cd7813ac57869f03 \
+    --hash=sha256:201abf43a99bb4980cc827dd4b38ac28f35e4dddac7832718be3d5479cafd2c1 \
+    --hash=sha256:2143d5fe192fd908b70b494349de5b1ac02854a8a902bd5f47d13d85b410e430 \
+    --hash=sha256:2568f011dddeb5990d8698cc375d237f14568ffa8489854e3b94113b4b6b7c8b \
+    --hash=sha256:3322d33a06e440d715bb214334bd41314c94632d9a2f07d22006bf21da3a2be4 \
+    --hash=sha256:349ea3ba0c0e789e0507876c023181f13b35307aebc2e771efd0e045b8e03e84 \
+    --hash=sha256:44a3804e9bb189574f5d02ccc2dc6e32e26a81b3e095463b7067b786048c6072 \
+    --hash=sha256:5ed69d5af232c5c3287d44cef998880dadcc9721cd020e9ae02f42e56b79c2e4 \
+    --hash=sha256:60d06ee2abfa85f10582d205404d52889d69bcbb71f7e211cfc37e3957ac19ca \
+    --hash=sha256:63341f96840a223f277e498d2737b39da30d9f57c7a1ef88857b920096317739 \
+    --hash=sha256:72207b8733523388c49d43ffcc4416d1d8cd64c40f7826332e714605ace9b1d2 \
+    --hash=sha256:7ddb167827170c4e3ff6a27157414a00b9fef93dea175da04caf92a0619b7aee \
+    --hash=sha256:844f1db41173b53fe40c44b3e04fcca23a6ce00ac328b7099f2800e611766845 \
+    --hash=sha256:a1325c9c28823af497cbf443369bddac9ac59f67f1e600f8ab9b754958e55b76 \
+    --hash=sha256:abbdc5483359b9495dc76e3bd7911ccd2ddc57706c117f8316832e31590af871 \
+    --hash=sha256:c0313438bc36448ffd209f5fb4e5f325b3af158cdf61c8829b8ddaf128c57816 \
+    --hash=sha256:e3e8348edca3e3cee5a67a2b452b85c57712efe1cc3ffdb87c128b3dde54534e \
+    --hash=sha256:fb47291596677570246d723ee6abbcbac07eeba89d8f83de31e3954f21f44879
     # via
     #   -r installer/requirements.in
     #   accelerate
@@ -1882,26 +1877,26 @@ torchmetrics==0.10.2 \
     --hash=sha256:43757d82266969906fc74b6e80766fcb2a0d52d6c3d09e3b7c98cf3b733fd20c \
     --hash=sha256:daa29d96bff5cff04d80eec5b9f5076993d6ac9c2d2163e88b6b31f8d38f7c25
     # via pytorch-lightning
-torchvision==0.13.1 ; platform_system == "Darwin" \
-    --hash=sha256:0298bae3b09ac361866088434008d82b99d6458fe8888c8df90720ef4b347d44 \
-    --hash=sha256:08f592ea61836ebeceb5c97f4d7a813b9d7dc651bbf7ce4401563ccfae6a21fc \
-    --hash=sha256:099874088df104d54d8008f2a28539ca0117b512daed8bf3c2bbfa2b7ccb187a \
-    --hash=sha256:0e77706cc90462653620e336bb90daf03d7bf1b88c3a9a3037df8d111823a56e \
-    --hash=sha256:19286a733c69dcbd417b86793df807bd227db5786ed787c17297741a9b0d0fc7 \
-    --hash=sha256:3567fb3def829229ec217c1e38f08c5128ff7fb65854cac17ebac358ff7aa309 \
-    --hash=sha256:4d8bf321c4380854ef04613935fdd415dce29d1088a7ff99e06e113f0efe9203 \
-    --hash=sha256:5e631241bee3661de64f83616656224af2e3512eb2580da7c08e08b8c965a8ac \
-    --hash=sha256:7552e80fa222252b8b217a951c85e172a710ea4cad0ae0c06fbb67addece7871 \
-    --hash=sha256:7cb789ceefe6dcd0dc8eeda37bfc45efb7cf34770eac9533861d51ca508eb5b3 \
-    --hash=sha256:83e9e2457f23110fd53b0177e1bc621518d6ea2108f570e853b768ce36b7c679 \
-    --hash=sha256:87c137f343197769a51333076e66bfcd576301d2cd8614b06657187c71b06c4f \
-    --hash=sha256:899eec0b9f3b99b96d6f85b9aa58c002db41c672437677b553015b9135b3be7e \
-    --hash=sha256:8e4d02e4d8a203e0c09c10dfb478214c224d080d31efc0dbf36d9c4051f7f3c6 \
-    --hash=sha256:b167934a5943242da7b1e59318f911d2d253feeca0d13ad5d832b58eed943401 \
-    --hash=sha256:c5ed609c8bc88c575226400b2232e0309094477c82af38952e0373edef0003fd \
-    --hash=sha256:e9a563894f9fa40692e24d1aa58c3ef040450017cfed3598ff9637f404f3fe3b \
-    --hash=sha256:ef5fe3ec1848123cd0ec74c07658192b3147dcd38e507308c790d5943e87b88c \
-    --hash=sha256:f230a1a40ed70d51e463ce43df243ec520902f8725de2502e485efc5eea9d864
+torchvision==0.13.0 ; platform_system == "Darwin" \
+    --hash=sha256:01e9e7b2e7724e66561e8d98f900985d80191e977c5c0b3f33ed31800ba0210c \
+    --hash=sha256:0e28740bd5695076f7c449af650fc474d6566722d446461c2ceebf9c9599b37f \
+    --hash=sha256:1b703701f0b99f307ad925b1abda2b3d5bdbf30643ff02102b6aeeb8840ae278 \
+    --hash=sha256:1e2049f1207631d42d743205f663f1d2235796565be3f18b0339d479626faf30 \
+    --hash=sha256:253eb0c67bf88cef4a79ec69058c3e94f9fde28b9e3699ad1afc0b3ed50f8075 \
+    --hash=sha256:42d95ab197d090efc5669fec02fbc603d05c859e50ca2c60180d1a113aa9b3e2 \
+    --hash=sha256:5c31e9b3004142dbfdf32adc4cf2d4fd709b820833e9786f839ae3a91ff65ef0 \
+    --hash=sha256:61d5093a50b7923a4e5bf9e0271001c29e01abec2348b7dd93370a0a9d15836c \
+    --hash=sha256:667cac55afb13cda7d362466e7eba3119e529b210e55507d231bead09aca5e1f \
+    --hash=sha256:6c4c35428c758adc485ff8f239b5ed68c1b6c26efa261a52e431cab0f7f22aec \
+    --hash=sha256:83a4d9d50787d1e886c94486b63b15978391f6cf1892fce6a93132c09b14e128 \
+    --hash=sha256:a20662c11dc14fd4eff102ceb946a7ee80b9f98303bb52435cc903f2c4c1fe10 \
+    --hash=sha256:acb72a40e5dc0cd454d28514dbdd589a5057afd9bb5c785b87a54718b999bfa1 \
+    --hash=sha256:ad458146aca15f652f9b0c227bebd5403602c7341f15f68f20ec119fa8e8f4a5 \
+    --hash=sha256:ada295dbfe55017b02acfab960a997387f5addbadd28ee5e575e24f692992ce4 \
+    --hash=sha256:b620a43df4131ad09f5761c415a016a9ea95aaf8ec8c91d030fb59bad591094a \
+    --hash=sha256:b7a2c9aebc7ef265777fe7e82577364288d98cf6b8cf0a63bb2621df78a7af1a \
+    --hash=sha256:c2278a189663087bb8e65915062aa7a25b8f8e5a3cfaa5879fe277e23e4bbf40 \
+    --hash=sha256:df16abf31e7a5fce8db1f781bf1e4f20c8bc730c7c3f657e946cc5820c04e465
     # via
     #   -r installer/requirements.in
     #   basicsr


@@ -1,10 +1,11 @@
 #
-# This file is autogenerated by pip-compile with python 3.9
-# To update, run:
+# This file is autogenerated by pip-compile with Python 3.9
+# by the following command:
 #
-#    pip-compile --allow-unsafe --generate-hashes --output-file=installer/py3.10-linux-x86_64-cuda-reqs.txt installer/requirements.in
+#    pip-compile --allow-unsafe --generate-hashes --output-file=binary_installer/py3.10-linux-x86_64-cuda-reqs.txt binary_installer/requirements.in
 #
 --extra-index-url https://download.pytorch.org/whl/torch_stable.html
+--extra-index-url https://download.pytorch.org/whl/cu116
 --trusted-host https

 absl-py==1.3.0 \
@@ -17,7 +18,7 @@ accelerate==0.14.0 \
     --hash=sha256:31c5bcc40564ef849b5bc1c4424a43ccaf9e26413b7df89c2e36bf81f070fd44 \
     --hash=sha256:b15d562c0889d0cf441b01faa025dfc29b163d061b6cc7d489c2c83b0a55ffab
     # via
-    #   -r installer/requirements.in
+    #   -r binary_installer/requirements.in
     #   k-diffusion
 addict==2.4.0 \
     --hash=sha256:249bb56bbfd3cdc2a004ea0ff4c2b6ddc84d53bc2194761636eb314d5cfa5dfc \
@@ -119,7 +120,7 @@ aiosignal==1.2.0 \
 albumentations==1.3.0 \
     --hash=sha256:294165d87d03bc8323e484927f0a5c1a3c64b0e7b9c32a979582a6c93c363bdf \
     --hash=sha256:be1af36832c8893314f2a5550e8ac19801e04770734c1b70fa3c996b41f37bed
-    # via -r installer/requirements.in
+    # via -r binary_installer/requirements.in
 altair==4.2.0 \
     --hash=sha256:0c724848ae53410c13fa28be2b3b9a9dcb7b5caa1a70f7f217bd663bb419935a \
     --hash=sha256:d87d9372e63b48cd96b2a6415f0cf9457f50162ab79dc7a31cd7e024dd840026
@@ -150,6 +151,10 @@ blinker==1.5 \
     --hash=sha256:1eb563df6fdbc39eeddc177d953203f99f097e9bf0e2b8f9f3cf18b6ca425e36 \
     --hash=sha256:923e5e2f69c155f2cc42dafbbd70e16e3fde24d2d4aa2ab72fbe386238892462
     # via streamlit
+boltons==21.0.0 \
+    --hash=sha256:65e70a79a731a7fe6e98592ecfb5ccf2115873d01dbc576079874629e5c90f13 \
+    --hash=sha256:b9bb7b58b2b420bbe11a6025fdef6d3e5edc9f76a42fb467afe7ca212ef9948b
+    # via torchsde
 cachetools==5.2.0 \
     --hash=sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757 \
     --hash=sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db
@@ -183,11 +188,11 @@ click==8.1.3 \
 clip @ https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip \
     --hash=sha256:b5842c25da441d6c581b53a5c60e0c2127ebafe0f746f8e15561a006c6c3be6a
     # via
-    #   -r installer/requirements.in
+    #   -r binary_installer/requirements.in
     #   clipseg
 clipseg @ https://github.com/invoke-ai/clipseg/archive/1f754751c85d7d4255fa681f4491ff5711c1c288.zip \
     --hash=sha256:14f43ed42f90be3fe57f06de483cb8be0f67f87a6f62a011339d45a39f4b4189
-    # via -r installer/requirements.in
+    # via -r binary_installer/requirements.in
 commonmark==0.9.1 \
     --hash=sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60 \
     --hash=sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9
@@ -274,7 +279,7 @@ decorator==5.1.1 \
 diffusers==0.7.2 \
     --hash=sha256:4a5f8b3a5fbd936bba7d459611cb35ec62875030367be32b232f9e19543e25a9 \
     --hash=sha256:fb814ffd150cc6f470380b8c6a521181a77beb2f44134d2aad2e4cd8aa2ced0e
-    # via -r installer/requirements.in
+    # via -r binary_installer/requirements.in
 dnspython==2.2.1 \
     --hash=sha256:0f7569a4a6ff151958b64304071d370daa3243d15941a7beedf0c9fe5105603e \
     --hash=sha256:a851e51367fb93e9e1361732c1d60dab63eff98712e503ea7d92e6eccb109b4f
@@ -294,7 +299,7 @@ entrypoints==0.4 \
 eventlet==0.33.1 \
     --hash=sha256:a085922698e5029f820cf311a648ac324d73cec0e4792877609d978a4b5bbf31 \
     --hash=sha256:afbe17f06a58491e9aebd7a4a03e70b0b63fd4cf76d8307bae07f280479b1515
-    # via -r installer/requirements.in
+    # via -r binary_installer/requirements.in
 facexlib==0.2.5 \
     --hash=sha256:31e20cc4ed5d63562d380e4564bae14ac0d5d1899a079bad87621e13564567e4 \
     --hash=sha256:cc7ceb56c5424319c47223cf75eef6828c34c66082707c6eb35b95d39779f02d
@@ -320,15 +325,15 @@ flask==2.2.2 \
 flask-cors==3.0.10 \
     --hash=sha256:74efc975af1194fc7891ff5cd85b0f7478be4f7f59fe158102e91abb72bb4438 \
     --hash=sha256:b60839393f3b84a0f3746f6cdca56c1ad7426aa738b70d6c61375857823181de
-    # via -r installer/requirements.in
+    # via -r binary_installer/requirements.in
 flask-socketio==5.3.1 \
     --hash=sha256:fd0ed0fc1341671d92d5f5b2f5503916deb7aa7e2940e6636cfa2c087c828bf9 \
     --hash=sha256:ff0c721f20bff1e2cfba77948727a8db48f187e89a72fe50c34478ce6efb3353
-    # via -r installer/requirements.in
+    # via -r binary_installer/requirements.in
 flaskwebgui==0.3.7 \
     --hash=sha256:4a69955308eaa8bb256ba04a994dc8f58a48dcd6f9599694ab1bcd9f43d88a5d \
     --hash=sha256:535974ce2672dcc74787c254de24cceed4101be75d96952dae82014dd57f061e
-    # via -r installer/requirements.in
+    # via -r binary_installer/requirements.in
 fonttools==4.38.0 \
     --hash=sha256:2bb244009f9bf3fa100fc3ead6aeb99febe5985fa20afbfbaa2f8946c2fbdaf1 \
     --hash=sha256:820466f43c8be8c3009aef8b87e785014133508f0de64ec469e4efb643ae54fb
@@ -412,11 +417,11 @@ future==0.18.2 \
 getpass-asterisk==1.0.1 \
     --hash=sha256:20d45cafda0066d761961e0919728526baf7bb5151fbf48a7d5ea4034127d857 \
     --hash=sha256:7cc357a924cf62fa4e15b73cb4e5e30685c9084e464ffdc3fd9000a2b54ea9e9
-    # via -r installer/requirements.in
+    # via -r binary_installer/requirements.in
-gfpgan @ https://github.com/TencentARC/GFPGAN/archive/2eac2033893ca7f427f4035d80fe95b92649ac56.zip \
-    --hash=sha256:79e6d71c8f1df7c7ccb0ac6b9a2ccb615ad5cde818c8b6f285a8711c05aebf85
+gfpgan @ https://github.com/invoke-ai/GFPGAN/archive/c796277a1cf77954e5fc0b288d7062d162894248.zip ; platform_system == "Linux" or platform_system == "Darwin" \
+    --hash=sha256:4155907b8b7db3686324554df7007eedd245cdf8656c21da9d9a3f44bef2fcaa
     # via
-    #   -r installer/requirements.in
+    #   -r binary_installer/requirements.in
     #   realesrgan
 gitdb==4.0.9 \
     --hash=sha256:8033ad4e853066ba6ca92050b9df2f89301b8fc8bf7e9324d412a63f8bf1a8fd \
@@ -577,7 +582,7 @@ imageio-ffmpeg==0.4.7 \
     --hash=sha256:7a08838f97f363e37ca41821b864fd3fdc99ab1fe2421040c78eb5f56a9e723e \
     --hash=sha256:8e724d12dfe83e2a6eb39619e820243ca96c81c47c2648e66e05f7ee24e14312 \
     --hash=sha256:fc60686ef03c2d0f842901b206223c30051a6a120384458761390104470846fd
-    # via -r installer/requirements.in
+    # via -r binary_installer/requirements.in
 importlib-metadata==5.0.0 \
     --hash=sha256:da31db32b304314d044d3c12c79bd59e307889b287ad12ff387b3500835fc2ab \
     --hash=sha256:ddb0e35065e8938f867ed4928d0ae5bf2a53b7773871bfe6bcc7e4fcdc7dea43
@ -610,9 +615,9 @@ jsonschema==4.17.0 \
# via # via
# altair # altair
# jsonmerge # jsonmerge
k-diffusion @ https://github.com/invoke-ai/k-diffusion/archive/7f16b2c33411f26b3eae78d10648d625cb0c1095.zip \ k-diffusion @ https://github.com/Birch-san/k-diffusion/archive/363386981fee88620709cf8f6f2eea167bd6cd74.zip \
--hash=sha256:c3f2c84036aa98c3abf4552fafab04df5ca472aa639982795e05bb1db43ce5e4 --hash=sha256:8eac5cdc08736e6d61908a1b2948f2b2f62691b01dc1aab978bddb3451af0d66
# via -r installer/requirements.in # via -r binary_installer/requirements.in
kiwisolver==1.4.4 \ kiwisolver==1.4.4 \
--hash=sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b \ --hash=sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b \
--hash=sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166 \ --hash=sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166 \
@ -1005,6 +1010,7 @@ numpy==1.23.4 \
# tifffile # tifffile
# torch-fidelity # torch-fidelity
# torchmetrics # torchmetrics
# torchsde
# torchvision # torchvision
# transformers # transformers
oauthlib==3.2.2 \ oauthlib==3.2.2 \
@ -1091,7 +1097,7 @@ pathtools==0.1.2 \
picklescan==0.0.5 \ picklescan==0.0.5 \
--hash=sha256:368cf1b9a075bc1b6460ad82b694f260532b836c82f99d13846cd36e1bbe7f9a \ --hash=sha256:368cf1b9a075bc1b6460ad82b694f260532b836c82f99d13846cd36e1bbe7f9a \
--hash=sha256:57153eca04d5df5009f2cdd595aef261b8a6f27e03046a1c84f672aa6869c592 --hash=sha256:57153eca04d5df5009f2cdd595aef261b8a6f27e03046a1c84f672aa6869c592
# via -r installer/requirements.in # via -r binary_installer/requirements.in
pillow==9.3.0 \ pillow==9.3.0 \
--hash=sha256:03150abd92771742d4a8cd6f2fa6246d847dcd2e332a18d0c15cc75bf6703040 \ --hash=sha256:03150abd92771742d4a8cd6f2fa6246d847dcd2e332a18d0c15cc75bf6703040 \
--hash=sha256:073adb2ae23431d3b9bcbcff3fe698b62ed47211d0716b067385538a1b0f28b8 \ --hash=sha256:073adb2ae23431d3b9bcbcff3fe698b62ed47211d0716b067385538a1b0f28b8 \
@ -1300,11 +1306,11 @@ pyparsing==3.0.9 \
# packaging # packaging
pypatchmatch @ https://github.com/invoke-ai/PyPatchMatch/archive/129863937a8ab37f6bbcec327c994c0f932abdbc.zip \ pypatchmatch @ https://github.com/invoke-ai/PyPatchMatch/archive/129863937a8ab37f6bbcec327c994c0f932abdbc.zip \
--hash=sha256:4ad6ec95379e7d122d494ff76633cc7cf9b71330d5efda147fceba81e3dc6cd2 --hash=sha256:4ad6ec95379e7d122d494ff76633cc7cf9b71330d5efda147fceba81e3dc6cd2
# via -r installer/requirements.in # via -r binary_installer/requirements.in
pyreadline3==3.4.1 \ pyreadline3==3.4.1 \
--hash=sha256:6f3d1f7b8a31ba32b73917cefc1f28cc660562f39aea8646d30bd6eff21f7bae \ --hash=sha256:6f3d1f7b8a31ba32b73917cefc1f28cc660562f39aea8646d30bd6eff21f7bae \
--hash=sha256:b0efb6516fd4fb07b45949053826a62fa4cb353db5be2bbb4a7aa1fdd1e345fb --hash=sha256:b0efb6516fd4fb07b45949053826a62fa4cb353db5be2bbb4a7aa1fdd1e345fb
# via -r installer/requirements.in # via -r binary_installer/requirements.in
pyrsistent==0.19.2 \ pyrsistent==0.19.2 \
--hash=sha256:055ab45d5911d7cae397dc418808d8802fb95262751872c841c170b0dbf51eed \ --hash=sha256:055ab45d5911d7cae397dc418808d8802fb95262751872c841c170b0dbf51eed \
--hash=sha256:111156137b2e71f3a9936baf27cb322e8024dac3dc54ec7fb9f0bcf3249e68bb \ --hash=sha256:111156137b2e71f3a9936baf27cb322e8024dac3dc54ec7fb9f0bcf3249e68bb \
@ -1441,7 +1447,7 @@ qudida==0.0.4 \
realesrgan==0.3.0 \ realesrgan==0.3.0 \
--hash=sha256:0d36da96ab9f447071606e91f502ccdfb08f80cc82ee4f8caf720c7745ccec7e \ --hash=sha256:0d36da96ab9f447071606e91f502ccdfb08f80cc82ee4f8caf720c7745ccec7e \
--hash=sha256:59336c16c30dd5130eff350dd27424acb9b7281d18a6810130e265606c9a6088 --hash=sha256:59336c16c30dd5130eff350dd27424acb9b7281d18a6810130e265606c9a6088
# via -r installer/requirements.in # via -r binary_installer/requirements.in
regex==2022.10.31 \ regex==2022.10.31 \
--hash=sha256:052b670fafbe30966bbe5d025e90b2a491f85dfe5b2583a163b5e60a85a321ad \ --hash=sha256:052b670fafbe30966bbe5d025e90b2a491f85dfe5b2583a163b5e60a85a321ad \
--hash=sha256:0653d012b3bf45f194e5e6a41df9258811ac8fc395579fa82958a8b76286bea4 \ --hash=sha256:0653d012b3bf45f194e5e6a41df9258811ac8fc395579fa82958a8b76286bea4 \
@ -1656,6 +1662,7 @@ scipy==1.9.3 \
# scikit-learn # scikit-learn
# torch-fidelity # torch-fidelity
# torchdiffeq # torchdiffeq
# torchsde
semver==2.13.0 \ semver==2.13.0 \
--hash=sha256:ced8b23dceb22134307c1b8abfa523da14198793d9787ac838e70e29e77458d4 \ --hash=sha256:ced8b23dceb22134307c1b8abfa523da14198793d9787ac838e70e29e77458d4 \
--hash=sha256:fa0fe2722ee1c3f57eac478820c3a5ae2f624af8264cbdf9000c980ff7f75e3f --hash=sha256:fa0fe2722ee1c3f57eac478820c3a5ae2f624af8264cbdf9000c980ff7f75e3f
@ -1663,7 +1670,7 @@ semver==2.13.0 \
send2trash==1.8.0 \ send2trash==1.8.0 \
--hash=sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d \ --hash=sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d \
--hash=sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08 --hash=sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08
# via -r installer/requirements.in # via -r binary_installer/requirements.in
sentry-sdk==1.10.1 \ sentry-sdk==1.10.1 \
--hash=sha256:06c0fa9ccfdc80d7e3b5d2021978d6eb9351fa49db9b5847cf4d1f2a473414ad \ --hash=sha256:06c0fa9ccfdc80d7e3b5d2021978d6eb9351fa49db9b5847cf4d1f2a473414ad \
--hash=sha256:105faf7bd7b7fa25653404619ee261527266b14103fe1389e0ce077bd23a9691 --hash=sha256:105faf7bd7b7fa25653404619ee261527266b14103fe1389e0ce077bd23a9691
@ -1754,11 +1761,11 @@ smmap==5.0.0 \
streamlit==1.14.0 \ streamlit==1.14.0 \
--hash=sha256:62556d873567e1b3427bcd118a57ee6946619f363bd6bba38df2d1f8225ecba0 \ --hash=sha256:62556d873567e1b3427bcd118a57ee6946619f363bd6bba38df2d1f8225ecba0 \
--hash=sha256:e078b8143d150ba721bdb9194218e311c5fe1d6d4156473a2dea6cc848a6c9fc --hash=sha256:e078b8143d150ba721bdb9194218e311c5fe1d6d4156473a2dea6cc848a6c9fc
# via -r installer/requirements.in # via -r binary_installer/requirements.in
taming-transformers-rom1504==0.0.6 \ taming-transformers-rom1504==0.0.6 \
--hash=sha256:051b5804c58caa247bcd51d17ddb525b4d5f892a29d42dc460f40e3e9e34e5d8 \ --hash=sha256:051b5804c58caa247bcd51d17ddb525b4d5f892a29d42dc460f40e3e9e34e5d8 \
--hash=sha256:73fe5fc1108accee4236ee6976e0987ab236afad0af06cb9f037641a908d2c32 --hash=sha256:73fe5fc1108accee4236ee6976e0987ab236afad0af06cb9f037641a908d2c32
# via -r installer/requirements.in # via -r binary_installer/requirements.in
tb-nightly==2.11.0a20221106 \ tb-nightly==2.11.0a20221106 \
--hash=sha256:8940457ee42db92f01da8bcdbbea1a476735eda559dde5976f5728919960af4a --hash=sha256:8940457ee42db92f01da8bcdbbea1a476735eda559dde5976f5728919960af4a
# via # via
@ -1783,7 +1790,7 @@ tensorboard-plugin-wit==1.8.1 \
# tensorboard # tensorboard
test-tube==0.7.5 \ test-tube==0.7.5 \
--hash=sha256:1379c33eb8cde3e9b36610f87da0f16c2e06496b1cfebac473df4e7be2faa124 --hash=sha256:1379c33eb8cde3e9b36610f87da0f16c2e06496b1cfebac473df4e7be2faa124
# via -r installer/requirements.in # via -r binary_installer/requirements.in
threadpoolctl==3.1.0 \ threadpoolctl==3.1.0 \
--hash=sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b \ --hash=sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b \
--hash=sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380 --hash=sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380
@ -1843,7 +1850,7 @@ torch==1.12.0+cu116 ; platform_system == "Linux" or platform_system == "Windows"
--hash=sha256:aa43d7b54b86f723f17c5c44df1078c59a6149fc4d42fbef08aafab9d61451c9 \ --hash=sha256:aa43d7b54b86f723f17c5c44df1078c59a6149fc4d42fbef08aafab9d61451c9 \
--hash=sha256:f772be831447dd01ebd26cbedf619e668d1b269d69bf6b4ff46b1378362bff26 --hash=sha256:f772be831447dd01ebd26cbedf619e668d1b269d69bf6b4ff46b1378362bff26
# via # via
# -r installer/requirements.in # -r binary_installer/requirements.in
# accelerate # accelerate
# basicsr # basicsr
# clean-fid # clean-fid
@ -1859,11 +1866,12 @@ torch==1.12.0+cu116 ; platform_system == "Linux" or platform_system == "Windows"
# torch-fidelity # torch-fidelity
# torchdiffeq # torchdiffeq
# torchmetrics # torchmetrics
# torchsde
# torchvision # torchvision
torch-fidelity==0.3.0 \ torch-fidelity==0.3.0 \
--hash=sha256:3d3e33db98919759cc4f3f24cb27e1e74bdc7c905d90a780630e4e1c18492b66 \ --hash=sha256:3d3e33db98919759cc4f3f24cb27e1e74bdc7c905d90a780630e4e1c18492b66 \
--hash=sha256:d01284825595feb7dc3eae3dc9a0d8ced02be764813a3483f109bc142b52a1d3 --hash=sha256:d01284825595feb7dc3eae3dc9a0d8ced02be764813a3483f109bc142b52a1d3
# via -r installer/requirements.in # via -r binary_installer/requirements.in
torchdiffeq==0.2.3 \ torchdiffeq==0.2.3 \
--hash=sha256:b5b01ec1294a2d8d5f77e567bf17c5de1237c0573cb94deefa88326f0e18c338 \ --hash=sha256:b5b01ec1294a2d8d5f77e567bf17c5de1237c0573cb94deefa88326f0e18c338 \
--hash=sha256:fe75f434b9090ac0c27702e02bed21472b0f87035be6581f51edc5d4013ea31a --hash=sha256:fe75f434b9090ac0c27702e02bed21472b0f87035be6581f51edc5d4013ea31a
@ -1872,6 +1880,10 @@ torchmetrics==0.10.2 \
--hash=sha256:43757d82266969906fc74b6e80766fcb2a0d52d6c3d09e3b7c98cf3b733fd20c \ --hash=sha256:43757d82266969906fc74b6e80766fcb2a0d52d6c3d09e3b7c98cf3b733fd20c \
--hash=sha256:daa29d96bff5cff04d80eec5b9f5076993d6ac9c2d2163e88b6b31f8d38f7c25 --hash=sha256:daa29d96bff5cff04d80eec5b9f5076993d6ac9c2d2163e88b6b31f8d38f7c25
# via pytorch-lightning # via pytorch-lightning
torchsde==0.2.5 \
--hash=sha256:222be9e15610d37a4b5a71cfa0c442178f9fd9ca02f6522a3e11c370b3d0906b \
--hash=sha256:4c34373a94a357bdf60bbfee00c850f3563d634491555820b900c9a4f7eff300
# via k-diffusion
torchvision==0.13.0+cu116 ; platform_system == "Linux" or platform_system == "Windows" \ torchvision==0.13.0+cu116 ; platform_system == "Linux" or platform_system == "Windows" \
--hash=sha256:1696feadf1921c8fa1549bad774221293298288ebedaa14e44bc3e57e964a369 \ --hash=sha256:1696feadf1921c8fa1549bad774221293298288ebedaa14e44bc3e57e964a369 \
--hash=sha256:572544b108eaf12638f3dca0f496a453c4b8d8256bcc8333d5355df641c0380c \ --hash=sha256:572544b108eaf12638f3dca0f496a453c4b8d8256bcc8333d5355df641c0380c \
@ -1882,7 +1894,7 @@ torchvision==0.13.0+cu116 ; platform_system == "Linux" or platform_system == "Wi
--hash=sha256:cb6bf0117b8f4b601baeae54e8a6bb5c4942b054835ba997f438ddcb7adcfb90 \ --hash=sha256:cb6bf0117b8f4b601baeae54e8a6bb5c4942b054835ba997f438ddcb7adcfb90 \
--hash=sha256:d1a3c124645e3460b3e50b54eb89a2575a5036bfa618f15dc4f5d635c716069d --hash=sha256:d1a3c124645e3460b3e50b54eb89a2575a5036bfa618f15dc4f5d635c716069d
# via # via
# -r installer/requirements.in # -r binary_installer/requirements.in
# basicsr # basicsr
# clean-fid # clean-fid
# clip # clip
@ -1921,10 +1933,13 @@ tqdm==4.64.1 \
# taming-transformers-rom1504 # taming-transformers-rom1504
# torch-fidelity # torch-fidelity
# transformers # transformers
trampoline==0.1.2 \
--hash=sha256:36cc9a4ff9811843d177fc0e0740efbd7da39eadfe6e50c9e2937cbc06d899d9
# via torchsde
transformers==4.24.0 \ transformers==4.24.0 \
--hash=sha256:486f353a8e594002e48be0e2aba723d96eda839e63bfe274702a4b5eda85559b \ --hash=sha256:486f353a8e594002e48be0e2aba723d96eda839e63bfe274702a4b5eda85559b \
--hash=sha256:b7ab50039ef9bf817eff14ab974f306fd20a72350bdc9df3a858fd009419322e --hash=sha256:b7ab50039ef9bf817eff14ab974f306fd20a72350bdc9df3a858fd009419322e
# via -r installer/requirements.in # via -r binary_installer/requirements.in
typing-extensions==4.4.0 \ typing-extensions==4.4.0 \
--hash=sha256:1511434bb92bf8dd198c12b1cc812e800d4181cfcb867674e0f8279cc93087aa \ --hash=sha256:1511434bb92bf8dd198c12b1cc812e800d4181cfcb867674e0f8279cc93087aa \
--hash=sha256:16fa4864408f655d35ec496218b85f79b3437c829e93320c7c9215ccfd92489e --hash=sha256:16fa4864408f655d35ec496218b85f79b3437c829e93320c7c9215ccfd92489e
@@ -1,5 +1,6 @@
--prefer-binary
--extra-index-url https://download.pytorch.org/whl/torch_stable.html
--extra-index-url https://download.pytorch.org/whl/cu116
--trusted-host https://download.pytorch.org
accelerate~=0.14
albumentations
@@ -26,6 +27,7 @@ transformers
picklescan
https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
https://github.com/invoke-ai/clipseg/archive/1f754751c85d7d4255fa681f4491ff5711c1c288.zip
https://github.com/invoke-ai/GFPGAN/archive/3f5d2397361199bc4a91c08bb7d80f04d7805615.zip ; platform_system=='Windows'
https://github.com/invoke-ai/GFPGAN/archive/c796277a1cf77954e5fc0b288d7062d162894248.zip ; platform_system=='Linux' or platform_system=='Darwin'
https://github.com/Birch-san/k-diffusion/archive/363386981fee88620709cf8f6f2eea167bd6cd74.zip
https://github.com/invoke-ai/PyPatchMatch/archive/129863937a8ab37f6bbcec327c994c0f932abdbc.zip
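The `; platform_system=='…'` suffixes on the two GFPGAN lines are PEP 508 environment markers: pip evaluates each marker against the installing machine and skips requirements whose marker is false, so every platform resolves exactly one GFPGAN source. A sketch of how such a marker evaluates, using the third-party `packaging` library (assumed to be installed; pip itself vendors an equivalent):

```python
from packaging.markers import Marker

# The marker from the Linux/macOS GFPGAN line above.
marker = Marker("platform_system == 'Linux' or platform_system == 'Darwin'")

# pip evaluates markers against the current interpreter's environment;
# passing an explicit environment dict lets us test other platforms.
print(marker.evaluate({"platform_system": "Linux"}))    # True
print(marker.evaluate({"platform_system": "Windows"}))  # False
```

Splitting one dependency across two marker-guarded lines like this is the standard way to pin different forks or builds per platform in a single requirements file.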
@@ -130,20 +130,34 @@ file should contain the startup options as you would type them on the
command line (`--steps=10 --grid`), one argument per line, or a
mixture of both using any of the accepted command switch formats:

!!! example "my unmodified initialization file"

    ```bash title="~/.invokeai" linenums="1"
    # InvokeAI initialization file
    # This is the InvokeAI initialization file, which contains command-line default values.
    # Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
    # or renaming it and then running configure_invokeai.py again.
    # The --root option below points to the folder in which InvokeAI stores its models, configs and outputs.
    --root="/Users/mauwii/invokeai"
    # the --outdir option controls the default location of image files.
    --outdir="/Users/mauwii/invokeai/outputs"
    # You may place other frequently-used startup commands here, one or more per line.
    # Examples:
    # --web --host=0.0.0.0
    # --steps=20
    # -Ak_euler_a -C10.0
    ```

!!! note

    The initialization file only accepts the command line arguments.
    There are additional arguments that you can provide on the `invoke>` command
    line (such as `-n` or `--iterations`) that cannot be entered into this file.
    Also be alert for empty blank lines at the end of the file, which will cause
    an arguments error at startup time.
## List of prompt arguments

@@ -195,15 +209,17 @@ Here are the invoke> command that apply to txt2img:
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |

!!! note

    The width and height of the image must be multiples of 64. You can
    provide different values, but they will be rounded down to the nearest multiple
    of 64.

!!! example "This is an example of img2img"

    ```bash
    invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
    ```

This will modify the indicated vacation photograph by making it more like the
prompt. Results will vary greatly depending on what is in the image. We also ask
@@ -253,7 +269,7 @@ description of the part of the image to replace. For example, if you have an
image of a breakfast plate with a bagel, toast and scrambled eggs, you can
selectively mask the bagel and replace it with a piece of cake this way:

```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel
```

@@ -265,7 +281,7 @@ are getting too much or too little masking you can adjust the threshold down (to
get more mask), or up (to get less). In this example, by passing `-tm` a higher
value, we are insisting on a more stringent classification.

```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6
```
@@ -275,16 +291,16 @@ You can load and use hundreds of community-contributed Textual
Inversion models just by typing the appropriate trigger phrase. Please
see [Concepts Library](CONCEPTS.md) for more details.

## Other Commands

The CLI offers a number of commands that begin with "!".

### Postprocessing images

To postprocess a file using face restoration or upscaling, use the `!fix`
command.

#### `!fix`

This command runs a post-processor on a previously-generated image. It takes a
PNG filename or path and applies your choice of the `-U`, `-G`, or `--embiggen`
@@ -311,19 +327,19 @@ Some examples:
[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
```

#### `!mask`

This command takes an image, a text prompt, and uses the `clipseg` algorithm to
automatically generate a mask of the area that matches the text prompt. It is
useful for debugging the text masking process prior to inpainting with the
`--text_mask` argument. See [INPAINTING.md](INPAINTING.md) for details.

### Model selection and importation

The CLI allows you to add new models on the fly, as well as to switch among them
rapidly without leaving the script.

#### `!models`

This prints out a list of the models defined in `config/models.yaml`. The active
model is bold-faced.

@@ -336,7 +352,7 @@ laion400m not loaded <no description>
waifu-diffusion not loaded Waifu Diffusion v1.3
</pre>

#### `!switch <model>`

This quickly switches from one model to another without leaving the CLI script.
`invoke.py` uses a memory caching system; once a model has been loaded,
@@ -361,7 +377,7 @@ invoke> !switch waifu-diffusion
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
>> Model loaded in 18.24s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage:2.17G
>> Setting Sampler to k_lms
@@ -381,7 +397,7 @@ laion400m not loaded <no description>
waifu-diffusion cached Waifu Diffusion v1.3
</pre>

#### `!import_model <path/to/model/weights>`

This command imports a new model weights file into InvokeAI, makes it available
for image generation within the script, and writes out the configuration for the
@@ -428,10 +444,10 @@ OK to import [n]? <b>y</b>
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
invoke>
</pre>

#### `!edit_model <name_of_model>`

The `!edit_model` command can be used to modify a model that is already defined
in `config/models.yaml`. Call it with the short name of the model you wish to
@@ -468,12 +484,12 @@ text... Outputs: [2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix
"outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512
-H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25 ```
### History processing

The CLI provides a series of convenient commands for reviewing previous actions,
retrieving them, modifying them, and re-running them.

#### `!history`

The invoke script keeps track of all the commands you issue during a session,
allowing you to re-run them. On Mac and Linux systems, it also writes the
@@ -485,20 +501,22 @@ during the session (Windows), or the most recent 1000 commands (Mac|Linux). You
can then repeat a command by using the command `!NNN`, where "NNN" is the
history line number. For example:

!!! example ""

    ```bash
    invoke> !history
    ...
    [14] happy woman sitting under tree wearing broad hat and flowing garment
    [15] beautiful woman sitting under tree wearing broad hat and flowing garment
    [18] beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6
    [20] watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
    [21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
    ...
    invoke> !20
    invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
    ```

#### `!fetch`

This command retrieves the generation parameters from a previously generated
image and either loads them into the command line (Linux|Mac), or prints them
@@ -508,33 +526,36 @@ a folder with image png files, and wildcard \*.png to retrieve the dream command
used to generate the images, and save them to a file commands.txt for further
processing.

!!! example "load the generation command for a single png file"

    ```bash
    invoke> !fetch 0000015.8929913.png
    # the script returns the next line, ready for editing and running:
    invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
    ```

!!! example "fetch the generation commands from a batch of files and store them into `selected.txt`"

    ```bash
    invoke> !fetch outputs\selected-imgs\*.png selected.txt
    ```

#### `!replay`

This command replays a text file generated by `!fetch` or created manually.

!!! example

    ```bash
    invoke> !replay outputs\selected-imgs\selected.txt
    ```

!!! note

    These commands may behave unexpectedly if given a PNG file that was
    not generated by InvokeAI.

#### `!search <search string>`

This is similar to `!history` but it only returns lines that contain
`search string`. For example:

@@ -544,7 +565,7 @@ invoke> !search surreal
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```

#### `!clear`

This clears the search history from memory and disk. Be advised that this
operation is irreversible and does not issue any warnings!
@ -1,130 +1,110 @@
--- ---
title: The Hugging Face Concepts Library and Importing Textual Inversion files title: Concepts Library
--- ---
# :material-file-document: Concepts Library # :material-library-shelves: The Hugging Face Concepts Library and Importing Textual Inversion files
## Using Textual Inversion Files ## Using Textual Inversion Files
Textual inversion (TI) files are small models that customize the output of Textual inversion (TI) files are small models that customize the output of
Stable Diffusion image generation. They can augment SD with Stable Diffusion image generation. They can augment SD with specialized subjects
specialized subjects and artistic styles. They are also known as and artistic styles. They are also known as "embeds" in the machine learning
"embeds" in the machine learning world. world.
Each TI file introduces one or more vocabulary terms to the SD Each TI file introduces one or more vocabulary terms to the SD model. These are
model. These are known in InvokeAI as "triggers." Triggers are often, known in InvokeAI as "triggers." Triggers are often, but not always, denoted
but not always, denoted using angle brackets as in using angle brackets as in "&lt;trigger-phrase&gt;". The two most common type of
"&lt;trigger-phrase&gt;". The two most common type of TI files that you'll TI files that you'll encounter are `.pt` and `.bin` files, which are produced by
encounter are `.pt` and `.bin` files, which are produced by different different TI training packages. InvokeAI supports both formats, but its
TI training packages. InvokeAI supports both formats, but its [built-in [built-in TI training system](TEXTUAL_INVERSION.md) produces `.pt`.
TI training system](TEXTUAL_INVERSION.md) produces `.pt`.
The [Hugging Face company](https://huggingface.co/sd-concepts-library) The [Hugging Face company](https://huggingface.co/sd-concepts-library) has
has amassed a large ligrary of &gt;800 community-contributed TI files amassed a large ligrary of &gt;800 community-contributed TI files covering a
covering a broad range of subjects and styles. InvokeAI has built-in broad range of subjects and styles. InvokeAI has built-in support for this
support for this library which downloads and merges TI files library which downloads and merges TI files automatically upon request. You can
automatically upon request. You can also install your own or others' also install your own or others' TI files by placing them in a designated
TI files by placing them in a designated directory. directory.
### An Example ### An Example
Here are a few examples to illustrate how it works. All these images Here are a few examples to illustrate how it works. All these images were
were generated using the command-line client and the Stable Diffusion generated using the command-line client and the Stable Diffusion 1.5 model:
1.5 model:
| Japanese gardener | Japanese gardener &lt;ghibli-face&gt; | Japanese gardener &lt;hoi4-leaders&gt; | Japanese gardener &lt;cartoona-animals&gt; |
| :--------------------------------: | :-----------------------------------: | :------------------------------------: | :----------------------------------------: |
| ![](../assets/concepts/image1.png) | ![](../assets/concepts/image2.png) | ![](../assets/concepts/image3.png) | ![](../assets/concepts/image4.png) |
You can also combine styles and concepts:
<figure markdown>
![](../assets/concepts/image5.png)
<figcaption>A portrait of &lt;alf&gt; in &lt;cartoona-animal&gt; style</figcaption>
</figure>
## Using a Hugging Face Concept
Hugging Face TI concepts are downloaded and installed automatically as you
require them. This requires your machine to be connected to the Internet. To
find out what each concept is for, you can browse the
[Hugging Face concepts library](https://huggingface.co/sd-concepts-library) and
look at examples of what each concept produces.
When you have an idea of a concept you wish to try, go to the command-line
client (CLI) and type a "&lt;" character and the beginning of the Hugging Face
concept name you wish to load. Press the Tab key, and the CLI will show you all
matching concepts. You can also type "&lt;" and Tab to get a listing of all ~800
concepts, but be prepared to scroll up to see them all! If there is more than
one match, you can continue to type and Tab until the concept is completed.
For example, if you type "&lt;x" and Tab, you'll be prompted with the
completions:
```
<xatu2> <xatu> <xbh> <xi> <xidiversity> <xioboma> <xuna> <xyz>
```
Now type "id" and press Tab. It will be autocompleted to "&lt;xidiversity&gt;"
because this is a unique match.
Finish your prompt and generate as usual. You may include multiple concept terms
in the prompt.
If you have never used this concept before, you will see a message that the TI
model is being downloaded and installed. After this, the concept will be saved
locally (in the `models/sd-concepts-library` directory) for future use.
Several steps happen during downloading and installation, including a scan of
the file for malicious code. Should any errors occur, you will be warned and the
concept will fail to load. Generation will then continue treating the trigger
term as a normal string of characters (e.g. as literal "&lt;ghibli-face&gt;").
Currently auto-installation of concepts is a feature only available on the
command-line client. Support for the WebUI is a work in progress.
## Installing your Own TI Files
You may install any number of `.pt` and `.bin` files simply by copying them into
the `embeddings` directory of the InvokeAI runtime directory (usually `invokeai`
in your home directory). You may create subdirectories in order to organize the
files in any way you wish. Be careful not to overwrite one file with another.
For example, TI files generated by the Hugging Face toolkit share the name
`learned_embedding.bin`. You can use subdirectories to keep them distinct.
At startup time, InvokeAI will scan the `embeddings` directory and load any TI
files it finds there. At startup you will see a message similar to this one:
```bash
>> Current embedding manager terms: *, <HOI4-Leader>, <princess-knight>
```
Note the `*` trigger term. This is a placeholder term that many early TI
tutorials taught people to use rather than a more descriptive term.
Unfortunately, if you have multiple TI files that all use this term, only the
first one loaded will be triggered by use of the term.
To avoid this problem, you can use the `merge_embeddings.py` script to merge two
or more TI files together. If it encounters a collision of terms, the script
will prompt you to select new terms that do not collide. See
[Textual Inversion](TEXTUAL_INVERSION.md) for details.
## Further Reading

View File

@ -12,21 +12,19 @@ stable diffusion to build the prompt on top of the image you provide, preserving
the original's basic shape and layout. To use it, provide the `--init_img`
option as shown here:
!!! example ""

    ```commandline
    tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
    ```
<figure markdown>

| original image | generated image |
| :------------: | :-------------: |
| ![original-image](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png){ width=320 } | ![generated-image](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png){ width=320 } |

</figure>
The `--init_img` (`-I`) option gives the path to the seed picture. `--strength` The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
(`-f`) controls how much the original will be modified, ranging from `0.0` (keep (`-f`) controls how much the original will be modified, ranging from `0.0` (keep
@ -88,13 +86,15 @@ from a prompt. If the step count is 10, then the "latent space" (Stable
Diffusion's internal representation of the image) for the prompt "fire" with
seed `1592514025` develops something like this:
!!! example ""

    ```bash
    invoke> "fire" -s10 -W384 -H384 -S1592514025
    ```

<figure markdown>
![latent steps](../assets/img2img/000019.steps.png){ width=720 }
</figure>
Put simply: starting from a frame of fuzz/static, SD finds details in each frame
that it thinks look like "fire" and brings them a little bit more into focus,
@ -109,25 +109,23 @@ into the sequence at the appropriate point, with just the right amount of noise.
### A concrete example
!!! example "I want SD to draw a fire based on this hand-drawn image"

    ![drawing of a fireplace](../assets/img2img/fire-drawing.png){ align=left }
Let's only do 10 steps, to make it easier to see what's happening. If strength
is `0.7`, this is what the internal steps the algorithm has to take will look
like:
<figure markdown>
![gravity32](../assets/img2img/000032.steps.gravity.png)
</figure>
With strength `0.4`, the steps look more like this:
<figure markdown>
![gravity30](../assets/img2img/000030.steps.gravity.png)
</figure>
Notice how much more fuzzy the starting image is for strength `0.7` compared to
`0.4`, and notice also how much longer the sequence is with `0.7`:
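The difference between the two sequences is just arithmetic: the sampler noises the drawing up to roughly `strength × steps` and then denoises from there, so strength `0.7` of 10 steps runs about 7 denoising steps while `0.4` runs about 4. A sketch of that relationship (rounding details vary between samplers, so take this as the general img2img convention rather than InvokeAI's exact code):

```python
def img2img_steps(total_steps: int, strength: float) -> int:
    """Approximate number of denoising steps run for a given strength."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0.0, 1.0]")
    return round(total_steps * strength)

print(img2img_steps(10, 0.7))  # 7 steps: fuzzier start, longer sequence
print(img2img_steps(10, 0.4))  # 4 steps: stays closer to the drawing
print(img2img_steps(10, 1.0))  # 10 steps: equivalent to plain txt2img
```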

View File

@ -39,10 +39,6 @@ If you do not run this script in advance, the GFPGAN module will attempt to
download the model files the first time you try to perform facial
reconstruction.
### Upscaling

`-U : <upscaling_factor> <upscaling_strength>`
@ -119,7 +115,7 @@ You can use `-ft` prompt argument to swap between CodeFormer and the default
GFPGAN. The above-mentioned `-G` prompt argument will allow you to control the
strength of the restoration effect.
### CodeFormer Usage
The following command will perform face restoration with CodeFormer instead of
the default GFPGAN.
@ -160,7 +156,7 @@ A new file named `000044.2945021133.fixed.png` will be created in the output
directory. Note that the `!fix` command does not replace the original file,
unlike the behavior at generate time.
## How to disable
If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the invoke.py command line with the `--no_restore` and

docs/features/index.md Normal file
View File

@ -0,0 +1,5 @@
---
title: Overview
---
Here you can find the documentation for different features.

View File

@ -100,6 +100,10 @@ You will need one of the following:
- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.
We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not come with sufficient VRAM
to render 512x512 images.
### :fontawesome-solid-memory: Memory

- At least 12 GB Main Memory RAM.

View File

@ -1,4 +1,8 @@
---
title: build binary installers
---

# :simple-buildkite: How to build "binary" installers (InvokeAI-mac/windows/linux_on_*.zip)
## 1. Ensure `installers/requirements.in` is correct

View File

@ -162,6 +162,12 @@ the command-line client's `!import_model` command.
Type a bit of the path name and hit ++tab++ in order to get a choice of
possible completions.
!!! tip "on Windows, you can drag model files onto the command-line"

    Once you have typed in `!import_model `, you can drag the model `.ckpt` file
    onto the command-line to insert the model path. This way, you don't need to
    type it or copy/paste.
4. Follow the wizard's instructions to complete installation as shown in the
   example here:

View File

@ -35,7 +35,7 @@ recommended model weights files.
## Steps to Install

1. Download the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest) of
   InvokeAI's installer for your platform. Look for a file named `InvokeAI-binary-<your platform>.zip`

2. Place the downloaded package someplace where you have plenty of HDD space,

View File

@ -10,7 +10,6 @@ The source installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, MacOS or Windows. It will leave you with a version that runs a stable
version of InvokeAI with the option to upgrade to experimental versions later.
Before you begin, make sure that you meet the
[hardware requirements](index.md#Hardware_Requirements) and have the appropriate
@ -27,12 +26,12 @@ Though there are multiple steps, there really is only one click involved to kick
off the process.
1. The source installer is distributed in ZIP files. Go to the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
   look for a series of files named:

   - [invokeAI-src-installer-2.2.3-mac.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-mac.zip)
   - [invokeAI-src-installer-2.2.3-windows.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-windows.zip)
   - [invokeAI-src-installer-2.2.3-linux.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-linux.zip)

   Download the one that is appropriate for your operating system.
@ -51,18 +50,30 @@ off the process.
   inflating: invokeAI\readme.txt
   ```
3. If you are a macOS user, you may need to install the Xcode command line tools.
These are a set of tools that are needed to run certain applications in a Terminal,
including InvokeAI. This package is provided directly by Apple.
To install, open a terminal window and run `xcode-select --install`. You will get
a macOS system popup guiding you through the install. If you already have them
installed, you will instead see some output in the Terminal advising you that the
tools are already installed.
More information can be found here:
https://www.freecodecamp.org/news/install-xcode-command-line-tools/
4. If you are using a desktop GUI, double-click the installer file. It will be
named `install.bat` on Windows systems and `install.sh` on Linux and
Macintosh systems.
5. Alternatively, from the command line, run the shell script or .bat file:
   ```cmd
   C:\Documents\Linco> cd invokeAI
   C:\Documents\Linco\invokeAI> install.bat
   ```
6. Sit back and let the install script work. It will install various binary
requirements including Conda, Git and Python, then download the current
InvokeAI code and install it along with its dependencies.
@ -75,7 +86,7 @@ off the process.
and nothing is happening, you can interrupt the script with ^C. You may restart
it and it will pick up where it left off.
7. After installation completes, the installer will launch a script called
`configure_invokeai.py`, which will guide you through the first-time process of
selecting one or more Stable Diffusion model weights files, downloading and
configuring them.
@ -91,7 +102,7 @@ off the process.
prompted) and configure InvokeAI to use the previously-downloaded files. The
process for this is described in [Installing Models](INSTALLING_MODELS.md).
8. The script will now exit and you'll be ready to generate some images. The
invokeAI directory will contain numerous files. Look for a shell script
named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows). Launch the script
by double-clicking it or typing its name at the command-line:

View File

@ -5,32 +5,7 @@ title: Overview
We offer several ways to install InvokeAI, each one suited to your
experience and preferences.
1. [InvokeAI source code installer](INSTALL_SOURCE.md)
This is a script that will install Python, the Anaconda ("conda")
package manager, all of InvokeAI's essential third-party
libraries and InvokeAI itself. It includes access to a "developer
@ -42,12 +17,20 @@ experience and preferences.
script. This method is recommended for individuals who wish to
stay on the cutting edge of InvokeAI development and are not
afraid of occasional breakage.
To get started, go to the bottom of the
[Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
and download the .zip file for your platform. Unzip the file.
If you are on a Windows system, double-click on the `install.bat`
script. On a Mac or Linux system, navigate to the file `install.sh`
from within the terminal application, and run the script.
Sit back and watch the script run.
**Important Caveats**
- This script is a bit cranky and occasionally hangs or times out,
forcing you to cancel and restart the script (it will pick up where
it left off).
2. [Manual Installation](INSTALL_MANUAL.md)

View File

@ -79,7 +79,7 @@ title: Manual Installation, Linux
and obtaining an access token for downloading. It will then download and
install the weights files for you.
Please look [here](../INSTALL_MANUAL.md) for a manual process for doing
the same thing.
7. Start generating images!
@ -112,7 +112,7 @@ title: Manual Installation, Linux
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../../features/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
8. Subsequently, to relaunch the script, be sure to run "conda activate

View File

@ -150,7 +150,7 @@ will do our best to help.
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../../features/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
---

View File

@ -7,7 +7,7 @@ title: Manual Installation, Windows
## **Notebook install (semi-automated)**

We have a
[Jupyter notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
with cell-by-cell installation steps. It will download the code in this repo as
one of the steps, so instead of cloning this repo, simply download the notebook
from the link above and load it up in VSCode (with the appropriate extensions
@ -75,7 +75,7 @@ Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehan
obtaining an access token for downloading. It will then download and install the
weights files for you.

Please look [here](../INSTALL_MANUAL.md) for a manual process for doing the
same thing.
8. Start generating images!
@ -108,7 +108,7 @@ Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehan
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../../features/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
9. Subsequently, to relaunch the script, first activate the Anaconda

View File

@ -15,16 +15,16 @@ We thank them for all of their time and hard work.
## **Current core team**

* @lstein (Lincoln Stein) - Co-maintainer
* @blessedcoolant - Co-maintainer
* @hipsterusername (Kent Keirsey) - Product Manager
* @psychedelicious - Web Team Leader
* @Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard
* @damian0815 - Attention Systems and Gameplay Engineer
* @mauwii (Matthias Wild) - Continuous integration and product maintenance engineer
* @Netsvetaev (Artur Netsvetaev) - UI/UX Developer
* @tildebyte - general gadfly and resident (self-appointed) know-it-all
* @keturn - Lead for Diffusers port

## **Contributions by**

View File

@ -5,7 +5,9 @@ otherwise have to be passed through long and complex call chains.
It defines a Namespace object named "Globals" that contains
the attributes:

- root - the root directory under which "models" and "outputs" can be found
- initfile - path to the initialization file
- try_patchmatch - option to globally disable loading of 'patchmatch' module
'''
import os

View File

@ -15,7 +15,7 @@ import sys
import traceback
import warnings
from pathlib import Path
from typing import Dict, Union
from urllib import request

import requests
@ -70,9 +70,9 @@ this program and resume later.\n'''
)

#--------------------------------------------
def postscript(errors: None):
    if not any(errors):
        message='''\n** Model Installation Successful **\nYou're all set! You may now launch InvokeAI using one of these two commands:
Web version:
    python scripts/invoke.py --web  (connect to http://localhost:9090)
Command-line version:
@ -85,7 +85,14 @@ automated installation script, execute "invoke.sh" (Linux/Mac) or
Have fun!
'''
else:
message=f"\n** There were errors during installation. It is possible some of the models were not fully downloaded.\n"
for err in errors:
message += f"\t - {err}\n"
message += "Please check the logs above and correct any issues."
print(message)
#--------------------------------------------- #---------------------------------------------
def yes_or_no(prompt:str, default_yes=True): def yes_or_no(prompt:str, default_yes=True):
@ -220,8 +227,7 @@ This involves a few easy steps.
access_token = HfFolder.get_token()
if access_token is not None:
    print('found')
else:
    print('not found')
print('''
4. Thank you! The last step is to enter your HuggingFace access token so that
@ -239,6 +245,7 @@ This involves a few easy steps.
Token: ''' Token: '''
) )
access_token = getpass_asterisk.getpass_asterisk() access_token = getpass_asterisk.getpass_asterisk()
HfFolder.save_token(access_token)
return access_token return access_token
 #---------------------------------------------
@@ -594,17 +601,27 @@ def download_safety_checker():
     print('...success',file=sys.stderr)

 #-------------------------------------
-def download_weights(opt:dict):
+def download_weights(opt:dict) -> Union[str, None]:
+    # Authenticate to Huggingface using environment variables.
+    # If successful, authentication will persist for either interactive or non-interactive use.
+    # Default env var expected by HuggingFace is HUGGING_FACE_HUB_TOKEN.
+    if not (access_token := HfFolder.get_token()):
+        # If unable to find an existing token or expected environment, try the non-canonical environment variable (widely used in the community and supported as per docs)
+        if (access_token := os.getenv("HUGGINGFACE_TOKEN")):
+            # set the environment variable here instead of simply calling huggingface_hub.login(token), to maintain consistent behaviour.
+            # when calling the .login() method, the token is cached in the user's home directory. When the env var is used, the token is NOT cached.
+            os.environ['HUGGING_FACE_HUB_TOKEN'] = access_token
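The walrus-operator fallback added above can be exercised in isolation. In this sketch, `cached_token()` is a stub standing in for `huggingface_hub.HfFolder.get_token()`, and a plain dict stands in for `os.environ`, so the lookup logic runs without the library installed; the function name and parameters are illustrative:

```python
import os

def find_token(env=os.environ, cached_token=lambda: None):
    """Sketch of the token-lookup fallback: prefer the cached token,
    otherwise fall back to the community HUGGINGFACE_TOKEN variable and
    mirror it into the canonical HUGGING_FACE_HUB_TOKEN for later calls."""
    if not (token := cached_token()):
        # no cached token: try the non-canonical environment variable
        if token := env.get("HUGGINGFACE_TOKEN"):
            # exporting rather than caching keeps env-var-driven runs stateless
            env["HUGGING_FACE_HUB_TOKEN"] = token
    return token
```

Note the behavioural difference this preserves: a token supplied via the environment is visible for the rest of the process but never written to the user's home directory.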
     if opt.yes_to_all:
         models = recommended_datasets()
-        access_token = HfFolder.get_token()
         if len(models)>0 and access_token is not None:
             successfully_downloaded = download_weight_datasets(models, access_token)
             update_config_file(successfully_downloaded,opt)
             return
         else:
             print('** Cannot download models because no Hugging Face access token could be found. Please re-run without --yes')
-            return
+            return "could not download model weights from Huggingface due to missing or invalid access token"
     else:
         choice = user_wants_to_download_weights()
@@ -620,10 +637,13 @@ def download_weights(opt:dict):
             return
     print('** LICENSE AGREEMENT FOR WEIGHT FILES **')
+    # We are either already authenticated, or will be asked to provide the token interactively
     access_token = authenticate()
     print('\n** DOWNLOADING WEIGHTS **')
     successfully_downloaded = download_weight_datasets(models, access_token)
     update_config_file(successfully_downloaded,opt)
+    if len(successfully_downloaded) < len(models):
+        return "some of the model weights downloads were not successful"
 #-------------------------------------
 def get_root(root:str=None)->str:
@@ -818,9 +838,12 @@ def main():
            or not os.path.exists(os.path.join(Globals.root,'configs/stable-diffusion/v1-inference.yaml')):
             initialize_rootdir(Globals.root,opt.yes_to_all)

+        # Optimistically try to download all required assets. If any errors occur, add them and proceed anyway.
+        errors=set()
         if opt.interactive:
             print('** DOWNLOADING DIFFUSION WEIGHTS **')
-            download_weights(opt)
+            errors.add(download_weights(opt))
         else:
             config_path = Path(Globals.root, opt.config_file or Default_config_file)
             if config_path.exists():
@@ -835,7 +858,7 @@ def main():
         download_codeformer()
         download_clipseg()
         download_safety_checker()
-        postscript()
+        postscript(errors=errors)
     except KeyboardInterrupt:
         print('\nGoodbye! Come back soon.')
     except Exception as e:

View File

@@ -2,6 +2,8 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"

+VERSION='2.2.3'
+
 # make the installer zip for linux and mac
 rm -rf invokeAI
 mkdir -p invokeAI
@@ -9,8 +11,8 @@ cp install.sh.in invokeAI/install.sh
 chmod a+x invokeAI/install.sh
 cp readme.txt invokeAI

-zip -r invokeAI-src-installer-linux.zip invokeAI
-zip -r invokeAI-src-installer-mac.zip invokeAI
+zip -r invokeAI-src-installer-$VERSION-linux.zip invokeAI
+zip -r invokeAI-src-installer-$VERSION-mac.zip invokeAI

 # make the installer zip for windows
 rm -rf invokeAI
@@ -19,7 +21,7 @@ cp install.bat.in invokeAI/install.bat
 cp readme.txt invokeAI
 cp WinLongPathsEnabled.reg invokeAI

-zip -r invokeAI-src-installer-windows.zip invokeAI
+zip -r invokeAI-src-installer-$VERSION-windows.zip invokeAI

 rm -rf invokeAI
 echo "The installer zips are ready to be distributed.."

View File

@@ -9,6 +9,9 @@
 @rem This enables a user to install this project without manually installing conda and git.

+@rem change to the script's directory
+PUSHD "%~dp0"
+
 echo "InvokeAI source installer..."
 echo ""
 echo "Some of the installation steps take a long time to run. Please be patient."

View File

@@ -116,13 +116,22 @@ status=$?
 if test $status -ne 0
 then
-    echo "Something went wrong while installing Python libraries and cannot continue."
-    echo "See https://invoke-ai.github.io/InvokeAI/INSTALL_SOURCE#troubleshooting for troubleshooting"
-    echo "tips, or visit https://invoke-ai.github.io/InvokeAI/#installation for alternative"
-    echo "installation methods"
+    if [ "$OS_NAME" == "osx" ]; then
+        echo "Python failed to install the environment. You may need to install"
+        echo "the Xcode command line tools to proceed. See step number 3 of"
+        echo "https://invoke-ai.github.io/InvokeAI/INSTALL_SOURCE#walk_through for"
+        echo "installation instructions and then run this script again."
+    else
+        echo "Something went wrong while installing Python libraries and cannot continue."
+        echo "See https://invoke-ai.github.io/InvokeAI/INSTALL_SOURCE#troubleshooting for troubleshooting"
+        echo "tips, or visit https://invoke-ai.github.io/InvokeAI/#installation for alternative"
+        echo "installation methods"
+    fi
 else
-    ln -sf ./source_installer/invoke.sh.in ./invoke.sh
-    ln -sf ./source_installer/update.sh.in ./update.sh
+    cp ./source_installer/invoke.sh.in ./invoke.sh
+    cp ./source_installer/update.sh.in ./update.sh
+    chmod a+rx ./source_installer/invoke.sh.in
+    chmod a+rx ./source_installer/update.sh.in

     conda activate invokeai

     # configure

View File

@ -1,5 +1,6 @@
@echo off @echo off
PUSHD "%~dp0"
set INSTALL_ENV_DIR=%cd%\installer_files\env set INSTALL_ENV_DIR=%cd%\installer_files\env
set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH% set PATH=%INSTALL_ENV_DIR%;%INSTALL_ENV_DIR%\Library\bin;%INSTALL_ENV_DIR%\Scripts;%INSTALL_ENV_DIR%\Library\usr\bin;%PATH%
@ -12,10 +13,10 @@ echo 3. open the developer console
set /P restore="Please enter 1, 2 or 3: " set /P restore="Please enter 1, 2 or 3: "
IF /I "%restore%" == "1" ( IF /I "%restore%" == "1" (
echo Starting the InvokeAI command-line.. echo Starting the InvokeAI command-line..
python scripts\invoke.py python scripts\invoke.py %*
) ELSE IF /I "%restore%" == "2" ( ) ELSE IF /I "%restore%" == "2" (
echo Starting the InvokeAI browser-based UI.. echo Starting the InvokeAI browser-based UI..
python scripts\invoke.py --web python scripts\invoke.py --web %*
) ELSE IF /I "%restore%" == "3" ( ) ELSE IF /I "%restore%" == "3" (
echo Developer Console echo Developer Console
call where python call where python

View File

@@ -10,6 +10,11 @@ source "$CONDA_BASEPATH/etc/profile.d/conda.sh" # otherwise conda complains abou
 conda activate invokeai

+# set required env var for torch on mac MPS
+if [ "$(uname -s)" == "Darwin" ]; then
+    export PYTORCH_ENABLE_MPS_FALLBACK=1
+fi
+
 if [ "$0" != "bash" ]; then
     echo "Do you want to generate images using the"
     echo "1. command-line"
@@ -17,8 +22,8 @@ if [ "$0" != "bash" ]; then
     echo "3. open the developer console"
     read -p "Please enter 1, 2, or 3: " yn
     case $yn in
-        1 ) printf "\nStarting the InvokeAI command-line..\n"; python scripts/invoke.py;;
-        2 ) printf "\nStarting the InvokeAI browser-based UI..\n"; python scripts/invoke.py --web;;
+        1 ) printf "\nStarting the InvokeAI command-line..\n"; python scripts/invoke.py $*;;
+        2 ) printf "\nStarting the InvokeAI browser-based UI..\n"; python scripts/invoke.py --web $*;;
         3 ) printf "\nDeveloper Console:\n"; file_name=$(basename "${BASH_SOURCE[0]}"); bash --init-file "$file_name";;
         * ) echo "Invalid selection"; exit;;
     esac