-
+
# InvokeAI: A Stable Diffusion Toolkit
@@ -28,6 +28,7 @@
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
+
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and an interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
@@ -38,8 +39,11 @@ _Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
+
# Getting Started with InvokeAI
@@ -81,6 +85,7 @@ instructions, please see:
InvokeAI is supported across Linux, Windows, and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).
+
#### System
You will need one of the following:
@@ -104,19 +109,24 @@ to render 512x512 images.
Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)
-### *Web Server & UI*
+### *Web Server & UI*
+
InvokeAI offers a locally hosted Web Server & React Frontend with an industry-leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
### *Unified Canvas*
+
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Advanced Prompt Syntax*
+
InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.
### *Command Line Interface*
+
For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.
### Other features
+
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
@@ -126,6 +136,7 @@ For users utilizing a terminal-based environment, or who want to take advantage
- *Model Manager & Support*
### Coming Soon
+
- *Node-Based Architecture & UI*
- And more...
@@ -149,7 +160,7 @@ To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the
If you are unfamiliar with how
to contribute to GitHub projects, here is a
-[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
+[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
diff --git a/docs/installation/020_INSTALL_MANUAL.md b/docs/installation/020_INSTALL_MANUAL.md
index ead1eb6536..1fad83121f 100644
--- a/docs/installation/020_INSTALL_MANUAL.md
+++ b/docs/installation/020_INSTALL_MANUAL.md
@@ -119,10 +119,8 @@ manager, please follow these steps:
6. Run PIP
- Be sure that the `invokeai` environment is active before doing this:
-
```bash
- pip install --prefer-binary -r requirements.txt
+ pip --python invokeai install --use-pep517 .
```
7. Set up the runtime directory
@@ -137,7 +135,7 @@ manager, please follow these steps:
default to `invokeai` in your home directory.
```bash
- configure_invokeai.py --root_dir ~/Programs/invokeai
+ configure_invokeai --root_dir ~/Programs/invokeai
```
The `configure_invokeai` script will interactively guide you through the
@@ -452,7 +450,7 @@ time. Note that this method only works with the PIP method.
step.
3. Run one additional step while you are in the source code repository
- directory `pip install .` (note the dot at the end).
+ directory `pip install --use-pep517 .` (note the dot at the end).
4. That's all! Now, whenever you activate the virtual environment,
`invoke.py` will know where to look for the runtime directory without
diff --git a/docs/installation/060_INSTALL_PATCHMATCH.md b/docs/installation/060_INSTALL_PATCHMATCH.md
index 592e1f56c3..989be24731 100644
--- a/docs/installation/060_INSTALL_PATCHMATCH.md
+++ b/docs/installation/060_INSTALL_PATCHMATCH.md
@@ -18,7 +18,13 @@ Windows systems with no extra intervention.
## Macintosh
-PyPatchMatch is not currently supported, but the team is working on it.
+You need to have opencv installed so that pypatchmatch can be built:
+
+```bash
+brew install opencv
+```
+
+Once opencv has been successfully installed, pypatchmatch will be built automatically the next time you start `invoke`.
## Linux
@@ -39,23 +45,16 @@ Prior to installing PyPatchMatch, you need to take the following steps:
sudo apt install python3-opencv libopencv-dev
```
-3. Fix the naming of the `opencv` package configuration file:
-
- ```sh
- cd /usr/lib/x86_64-linux-gnu/pkgconfig/
- ln -sf opencv4.pc opencv.pc
- ```
-
-4. Activate the environment you use for invokeai, either with `conda` or with a
+3. Activate the environment you use for invokeai, either with `conda` or with a
virtual environment.
-5. Install pypatchmatch:
+4. Install pypatchmatch:
```sh
- pip install "git+https://github.com/invoke-ai/PyPatchMatch@0.1.3#egg=pypatchmatch"
+ pip install pypatchmatch
```
-6. Confirm that pypatchmatch is installed. At the command-line prompt enter
+5. Confirm that pypatchmatch is installed. At the command-line prompt enter
`python`, and then at the `>>>` line type
   `from patchmatch import patch_match`: It should look like the following:
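Beyond the interactive check above, a short script can probe for pypatchmatch without crashing when the package is missing. This is a hedged sketch: the `patchmatch_available` helper is ours, not part of pypatchmatch.

```python
# Hedged sketch: probe for pypatchmatch without letting a missing
# package abort the caller. The helper name is hypothetical.
import importlib.util

def patchmatch_available() -> bool:
    """Return True if the `patchmatch` module can be found on sys.path."""
    return importlib.util.find_spec("patchmatch") is not None

if patchmatch_available():
    # Importing triggers the one-time native build described above.
    from patchmatch import patch_match  # noqa: F401
    print("pypatchmatch is ready")
else:
    print("pypatchmatch not installed; outpainting will use a fallback")
```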
diff --git a/docs/installation/INSTALL_MANUAL.md b/docs/installation/INSTALL_MANUAL.md
index 4d62df9415..f45463c535 100644
--- a/docs/installation/INSTALL_MANUAL.md
+++ b/docs/installation/INSTALL_MANUAL.md
@@ -254,65 +254,10 @@ steps:
source invokeai/bin/activate
```
-4. Pick the correct `requirements*.txt` file for your hardware and operating
- system.
-
- We have created a series of environment files suited for different operating
- systems and GPU hardware. They are located in the
- `environments-and-requirements` directory:
-
-
-
- | filename | OS |
- | :---------------------------------: | :-------------------------------------------------------------: |
- | requirements-lin-amd.txt | Linux with an AMD (ROCm) GPU |
- | requirements-lin-arm64.txt | Linux running on arm64 systems |
- | requirements-lin-cuda.txt | Linux with an NVIDIA (CUDA) GPU |
- | requirements-mac-mps-cpu.txt | Macintoshes with MPS acceleration |
- | requirements-lin-win-colab-cuda.txt | Windows with an NVIDA (CUDA) GPU (supports Google Colab too) |
-
-
-
- Select the appropriate requirements file, and make a link to it from
- `requirements.txt` in the top-level InvokeAI directory. The command to do
- this from the top-level directory is:
-
- !!! example ""
-
- === "Macintosh and Linux"
-
- !!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes."
-
- ```bash
- ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
- ```
-
- === "Windows"
-
- !!! info "on Windows, admin privileges are required to make links, so we use the copy command instead"
-
- ```cmd
- copy environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
- ```
-
- !!! warning
-
- Please do not link or copy `environments-and-requirements/requirements-base.txt`.
- This is a base requirements file that does not have the platform-specific
- libraries. Also, be sure to link or copy the platform-specific file to
- a top-level file named `requirements.txt` as shown here. Running pip on
- a requirements file in a subdirectory will not work as expected.
-
- When this is done, confirm that a file named `requirements.txt` has been
- created in the InvokeAI root directory and that it points to the correct
- file in `environments-and-requirements`.
-
-5. Run PIP
-
- Be sure that the `invokeai` environment is active before doing this:
+4. Run PIP
```bash
- pip install --prefer-binary -r requirements.txt
+ pip --python invokeai install --use-pep517 .
```
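A caveat on the command above: the `pip --python` option was (to the best of our knowledge) introduced in pip 22.3, so older pips will reject it. A hedged sketch of a local version check follows; the helper names are ours, not part of pip.

```python
# Hedged sketch: confirm the local pip is new enough for the
# `--python` flag used above (believed to have shipped in pip 22.3).
import subprocess
import sys

def parse_pip_version(version_line: str) -> tuple:
    """Extract (major, minor) from `pip --version` output,
    e.g. 'pip 22.3.1 from /usr/lib/... (python 3.10)' -> (22, 3)."""
    ver = version_line.split()[1]
    return tuple(int(part) for part in ver.split(".")[:2])

def pip_supports_python_flag() -> bool:
    """Run `pip --version` in the current interpreter and compare."""
    out = subprocess.run(
        [sys.executable, "-m", "pip", "--version"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_pip_version(out) >= (22, 3)
```

If the check fails, `python -m pip install --upgrade pip` inside the virtual environment brings pip up to date.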
---
diff --git a/ldm/invoke/CLI.py b/ldm/invoke/CLI.py
index 58df4bdc92..48d52e5f19 100644
--- a/ldm/invoke/CLI.py
+++ b/ldm/invoke/CLI.py
@@ -485,7 +485,7 @@ def do_command(command:str, gen, opt:Args, completer) -> tuple:
optimize_model(path[1], gen, opt, completer)
completer.add_history(command)
operation = None
-
+
elif command.startswith('!optimize'):
path = shlex.split(command)
@@ -570,7 +570,7 @@ def import_model(model_path:str, gen, opt, completer):
(3) a huggingface repository id
'''
model_name = None
-
+
if model_path.startswith(('http:','https:','ftp:')):
model_name = import_ckpt_model(model_path, gen, opt, completer)
elif os.path.exists(model_path) and model_path.endswith(('.ckpt','.safetensors')) and os.path.isfile(model_path):
@@ -584,12 +584,12 @@ def import_model(model_path:str, gen, opt, completer):
if not model_name:
return
-
+
if not _verify_load(model_name, gen):
print('** model failed to load. Discarding configuration entry')
gen.model_manager.del_model(model_name)
return
-
+
if input('Make this the default model? [n] ') in ('y','Y'):
gen.model_manager.set_default_model(model_name)
@@ -690,7 +690,7 @@ def optimize_model(model_name_or_path:str, gen, opt, completer):
else:
print(f'** {model_name_or_path} is neither an existing model nor the path to a .ckpt file')
return
-
+
if not ckpt_path.is_absolute():
ckpt_path = Path(Globals.root,ckpt_path)
@@ -698,7 +698,7 @@ def optimize_model(model_name_or_path:str, gen, opt, completer):
if diffuser_path.exists():
print(f'** {model_name_or_path} is already optimized. Will not overwrite. If this is an error, please remove the directory {diffuser_path} and try again.')
return
-
+
new_config = gen.model_manager.convert_and_import(
ckpt_path,
diffuser_path,
@@ -747,7 +747,7 @@ def edit_model(model_name:str, gen, opt, completer):
continue
completer.set_line(info[attribute])
info[attribute] = input(f'{attribute}: ') or info[attribute]
-
+
if new_name != model_name:
manager.del_model(model_name)
@@ -1099,7 +1099,7 @@ def report_model_error(opt:Namespace, e:Exception):
if yes_to_all is not None:
sys.argv.append(yes_to_all)
- import configure_invokeai
+ import ldm.invoke.configure_invokeai as configure_invokeai
configure_invokeai.main()
print('** InvokeAI will now restart')
sys.argv = previous_args
diff --git a/scripts/configure_invokeai.py b/ldm/invoke/configure_invokeai.py
similarity index 100%
rename from scripts/configure_invokeai.py
rename to ldm/invoke/configure_invokeai.py
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 0000000000..8359f14e6d
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,130 @@
+[build-system]
+requires = ["setuptools~=65.5", "pip~=22.3", "wheel"]
+build-backend = "setuptools.build_meta"
+
+[project]
+name = "InvokeAI"
+description = "An implementation of Stable Diffusion which provides various new features and options to aid the image generation process"
+requires-python = ">=3.9, <3.11"
+readme = { content-type = "text/markdown", file = "README.md" }
+keywords = ["stable-diffusion", "AI"]
+dynamic = ["version"]
+license = { file = "LICENSE" }
+authors = [{ name = "The InvokeAI Project", email = "lincoln.stein@gmail.com" }]
+classifiers = [
+ 'Development Status :: 4 - Beta',
+ 'Environment :: GPU',
+ 'Environment :: GPU :: NVIDIA CUDA',
+ 'Environment :: MacOS X',
+ 'Intended Audience :: End Users/Desktop',
+ 'Intended Audience :: Developers',
+ 'License :: OSI Approved :: MIT License',
+ 'Operating System :: POSIX :: Linux',
+ 'Operating System :: MacOS',
+ 'Operating System :: Microsoft :: Windows',
+ 'Programming Language :: Python :: 3 :: Only',
+ 'Programming Language :: Python :: 3.9',
+ 'Programming Language :: Python :: 3.10',
+ 'Topic :: Artistic Software',
+ 'Topic :: Internet :: WWW/HTTP :: WSGI :: Application',
+ 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',
+ 'Topic :: Multimedia :: Graphics',
+ 'Topic :: Scientific/Engineering :: Artificial Intelligence',
+ 'Topic :: Scientific/Engineering :: Image Processing',
+]
+dependencies = [
+ "accelerate",
+ "albumentations",
+ "clip_anytorch", # replacing "clip @ https://github.com/openai/CLIP/archive/eaa22acb90a5876642d0507623e859909230a52d.zip",
+ "clipseg @ https://github.com/invoke-ai/clipseg/archive/relaxed-python-requirement.zip", # is this still necessary with diffusers?
+ "datasets",
+ "diffusers[torch]~=0.11",
+ "dnspython==2.2.1",
+ "einops",
+ "eventlet",
+ "facexlib",
+ "flask==2.1.3",
+ "flask_cors==3.0.10",
+ "flask_socketio==5.3.0",
+ "flaskwebgui==1.0.3",
+ "getpass_asterisk",
+ "gfpgan==1.3.8",
+ "huggingface-hub>=0.11.1",
+ "imageio",
+ "imageio-ffmpeg",
+ "k-diffusion", # replacing "k-diffusion @ https://github.com/Birch-san/k-diffusion/archive/refs/heads/mps.zip",
+ "kornia",
+ "npyscreen",
+ "numpy~=1.23",
+ "omegaconf",
+ "opencv-python",
+ "picklescan",
+ "pillow",
+ "pudb",
+ "pypatchmatch",
+ "pyreadline3",
+ "pytorch-lightning==1.7.7",
+ "realesrgan",
+ "requests==2.25.1",
+ "safetensors",
+ "scikit-image>=0.19",
+ "send2trash",
+ "streamlit",
+ "taming-transformers-rom1504",
+ "test-tube>=0.7.5",
+ "torch>=1.13.1",
+ "torch-fidelity",
+ "torchvision>=0.14.1",
+ "torchmetrics",
+ "transformers~=4.25",
+ "windows-curses; sys_platform=='win32'",
+]
+
+[project.optional-dependencies]
+"dist" = ["pip-tools", "pipdeptree", "twine"]
+"docs" = [
+ "mkdocs-material<9.0",
+ "mkdocs-git-revision-date-localized-plugin",
+ "mkdocs-redirects==1.2.0",
+]
+test = ["pytest>6.0.0", "pytest-cov"]
+
+[project.scripts]
+"configure_invokeai" = "ldm.invoke.configure_invokeai:main"
+"dream" = "ldm.invoke.CLI:main"
+"invoke" = "ldm.invoke.CLI:main"
+"legacy_api" = "scripts:legacy_api.main"
+"load_models" = "ldm.invoke.configure_invokeai:main"
+"merge_embeddings" = "scripts:merge_embeddings.main"
+"preload_models" = "ldm.invoke.configure_invokeai:main"
+
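Each entry in the scripts table uses the `module:attribute` convention: the launcher that pip generates imports the module left of the colon and calls the attribute right of it. A rough sketch of that resolution logic, for illustration only:

```python
# Sketch of console-script resolution: 'pkg.module:attr' is split at
# the colon, the module is imported, and the (possibly dotted)
# attribute is looked up and returned as a callable.
import importlib

def load_entry_point(spec: str):
    """Resolve 'package.module:attr' into the callable it names."""
    module_name, _, attr_path = spec.partition(":")
    obj = importlib.import_module(module_name)
    for attr in attr_path.split("."):
        obj = getattr(obj, attr)
    return obj

# For example, load_entry_point("ldm.invoke.configure_invokeai:main")()
# would run the configuration script, assuming the package is installed.
```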
+[project.urls]
+"Homepage" = "https://invoke-ai.github.io/InvokeAI/"
+"Documentation" = "https://invoke-ai.github.io/InvokeAI/"
+"Source" = "https://github.com/invoke-ai/InvokeAI/"
+"Bug Reports" = "https://github.com/invoke-ai/InvokeAI/issues"
+"Discord" = "https://discord.gg/ZmtBAhwWhy"
+
+[tool.setuptools.dynamic]
+version = { attr = "ldm.invoke.__version__" }
+
+[tool.setuptools.packages.find]
+"where" = ["."]
+"include" = ["assets", "backend*", "configs*", "frontend.dist*", "ldm*"]
+
+[tool.setuptools.package-data]
+"assets" = ["caution.png"]
+"backend" = ["**.png"]
+"configs" = ["*.example", "**/*.yaml", "*.txt"]
+"frontend.dist" = ["**"]
+
+[tool.setuptools.exclude-package-data]
+configs = ["models.yaml"]
+
+[tool.pytest.ini_options]
+minversion = "6.0"
+addopts = "-p pytest_cov --junitxml=junit/test-results.xml --cov=./ldm/ --cov=./backend --cov-branch"
+python_files = ["test_*.py"]
+pythonpath = [".venv/lib/python3.9", ".venv/lib/python3.10"]
+testpaths = ["tests"]
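One assumption worth calling out in the new `pyproject.toml`: `version = { attr = "ldm.invoke.__version__" }` only resolves if `ldm/invoke/__init__.py` exposes a `__version__` string. A minimal sketch of the expected shape (the version number here is illustrative, not the project's actual version):

```python
# ldm/invoke/__init__.py (sketch): the canonical version string that
# setuptools resolves through attr = "ldm.invoke.__version__".
__version__ = "2.3.0"  # hypothetical value for illustration

def version_tuple(version: str = __version__) -> tuple:
    """Split a PEP 440-style release string into comparable integers."""
    return tuple(int(part) for part in version.split("."))
```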
diff --git a/scripts/load_models.py b/scripts/load_models.py
index e64d0b821d..89ae5d7588 100755
--- a/scripts/load_models.py
+++ b/scripts/load_models.py
@@ -5,7 +5,7 @@
# two machines must share a common .cache directory.
import warnings
-import configure_invokeai
+import ldm.invoke.configure_invokeai as configure_invokeai
if __name__ == '__main__':
configure_invokeai.main()
diff --git a/scripts/preload_models.py b/scripts/preload_models.py
index e64d0b821d..89ae5d7588 100755
--- a/scripts/preload_models.py
+++ b/scripts/preload_models.py
@@ -5,7 +5,7 @@
# two machines must share a common .cache directory.
import warnings
-import configure_invokeai
+import ldm.invoke.configure_invokeai as configure_invokeai
if __name__ == '__main__':
configure_invokeai.main()
diff --git a/setup.py b/setup.py
deleted file mode 100644
index c9809c0c92..0000000000
--- a/setup.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import sys
-import os
-import re
-from setuptools import setup, find_packages
-
-def list_files(directory):
- listing = list()
- for root, dirs, files in os.walk(directory,topdown=False):
- pair = (root,[os.path.join(root,f) for f in files])
- listing.append(pair)
- return listing
-
-
-def get_version()->str:
- from ldm.invoke import __version__ as version
- return version
-
-# The canonical version number is stored in the file ldm/invoke/_version.py
-VERSION = get_version()
-DESCRIPTION = ('An implementation of Stable Diffusion which provides various new features'
- ' and options to aid the image generation process')
-LONG_DESCRIPTION = ('This version of Stable Diffusion features a slick WebGUI, an'
- ' interactive command-line script that combines text2img and img2img'
- ' functionality in a "dream bot" style interface, and multiple features'
- ' and other enhancements.')
-HOMEPAGE = 'https://github.com/invoke-ai/InvokeAI'
-FRONTEND_FILES = list_files('frontend/dist')
-FRONTEND_FILES.append(('assets',['assets/caution.png']))
-print(FRONTEND_FILES)
-
-REQUIREMENTS=[
- 'accelerate',
- 'albumentations',
- 'diffusers',
- 'eventlet',
- 'flask_cors',
- 'flask_socketio',
- 'flaskwebgui',
- 'getpass_asterisk',
- 'imageio-ffmpeg',
- 'pyreadline3',
- 'realesrgan',
- 'send2trash',
- 'streamlit',
- 'taming-transformers-rom1504',
- 'test-tube',
- 'torch-fidelity',
- 'torch',
- 'torchvision',
- 'transformers',
- 'picklescan',
- 'clip',
- 'clipseg',
- 'gfpgan',
- 'k-diffusion',
- 'pypatchmatch',
-]
-
-setup(
- name='InvokeAI',
- version=VERSION,
- description=DESCRIPTION,
- long_description=LONG_DESCRIPTION,
- author='The InvokeAI Project',
- author_email='lincoln.stein@gmail.com',
- url=HOMEPAGE,
- license='MIT',
- packages=find_packages(exclude=['tests.*']),
- install_requires=REQUIREMENTS,
- dependency_links=['https://download.pytorch.org/whl/torch_stable.html'],
- python_requires='>=3.9, <4',
- classifiers=[
- 'Development Status :: 4 - Beta',
- 'Environment :: GPU',
- 'Environment :: GPU :: NVIDIA CUDA',
- 'Environment :: MacOS X',
- 'Intended Audience :: End Users/Desktop',
- 'Intended Audience :: Developers',
- 'License :: OSI Approved :: MIT License',
- 'Operating System :: POSIX :: Linux',
- 'Operating System :: MacOS',
- 'Operating System :: Microsoft :: Windows',
- 'Programming Language :: Python :: 3 :: Only,'
- 'Programming Language :: Python :: 3.8',
- 'Programming Language :: Python :: 3.9',
- 'Programming Language :: Python :: 3.10',
- 'Topic :: Artistic Software',
- 'Topic :: Internet :: WWW/HTTP :: WSGI :: Application',
- 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',
- 'Topic :: Multimedia :: Graphics',
- 'Topic :: Scientific/Engineering :: Artificial Intelligence',
- 'Topic :: Scientific/Engineering :: Image Processing',
- ],
- scripts = ['scripts/invoke.py','scripts/configure_invokeai.py', 'scripts/sd-metadata.py',
- 'scripts/preload_models.py', 'scripts/images2prompt.py',
- 'scripts/textual_inversion_fe.py','scripts/textual_inversion.py',
- 'scripts/merge_models_fe.py', 'scripts/merge_models.py',
- ],
- data_files=FRONTEND_FILES,
-)