InvokeAI/pyproject.toml
Lincoln Stein 4536e4a8b6
Model Manager Refactor: Install remote models and store their tags and other metadata (#5361)
* add basic functionality for model metadata fetching from hf and civitai

* add storage

* start unit tests

* add unit tests and documentation

* add missing dependency for pytests

* remove redundant fetch; add modified/published dates; updated docs

* add code to select diffusers files based on the variant type

* implement Civitai installs

* make huggingface parallel downloading work

* add unit tests for model installation manager

- Fixed race condition on selection of download destination path
- Add fixtures common to several model_manager_2 unit tests
- Added dummy model files for testing diffusers and safetensors downloading/probing
- Refactored code for selecting proper variant from list of huggingface repo files
- Regrouped ordering of methods in model_install_default.py

* improve Civitai model downloading

- Provide a better error message when Civitai requires an access token (it doesn't return a 403 Forbidden, but instead
  redirects to the HTML of an authorization page -- arrgh); see the detection sketch below
- Handle the case of Civitai providing a primary download link plus additional links for VAEs, config files, etc.
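
The login-page redirect described above can only be recognized from the response itself. A minimal detection sketch, assuming a hypothetical helper name and the standard `requests` API (the real logic lives in the Civitai fetch/install code and may differ):

```python
import requests


class UnauthorizedException(Exception):
    """Raised when Civitai answers with a login page instead of the file."""


def fetch_civitai_file(url: str, token: str | None = None) -> bytes:
    # Hypothetical sketch: gated Civitai downloads do not return 403 Forbidden;
    # they redirect to an HTML authorization page, so sniff the Content-Type.
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    resp = requests.get(url, headers=headers, allow_redirects=True, timeout=30)
    resp.raise_for_status()
    if resp.headers.get("Content-Type", "").startswith("text/html"):
        raise UnauthorizedException(
            "Civitai redirected to a login page; this model requires an access token."
        )
    return resp.content
```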

* add routes for retrieving metadata and tags

* code tidying and documentation

* fix ruff errors

* add file needed to maintain test root directory in repo for unit tests

* fix self->cls in classmethod

* add pydantic plugin for mypy

* use TestSession instead of requests.Session to prevent any internet activity (see the sketch below)

- Improve logging
- Fix error message formatting
- Fix logging again
- Fix forward vs. reverse slash issue in Windows install tests
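
A minimal sketch of the no-network pattern, assuming the `TestAdapter`/`TestSession` helpers from the `requests_testadapter` package listed in the `test` extra (the real fixtures live in the test suite and may stub different URLs):

```python
from requests_testadapter import TestAdapter, TestSession

# TestSession is used here on the assumption that requests to URLs without a
# mounted TestAdapter fail rather than reaching the real network -- the
# property the bullet above relies on.
session = TestSession()
session.mount(
    "https://civitai.com/api/v1/models/1234",
    TestAdapter(b'{"id": 1234, "name": "stub model"}', status=200),
)

response = session.get("https://civitai.com/api/v1/models/1234")
assert response.status_code == 200
```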

* Several fixes of problems detected during PR review:

- Implement cancel_model_install_job and get_model_install_job routes
  to allow for better control of model download and install.
- Fix thread deadlock that occurred after cancelling an install.
- Remove unneeded pytest_plugins section from tests/conftest.py
- Remove unused _in_terminal_state() from model_install_default.
- Remove outdated documentation from several spots.
- Add workaround for Civitai API results which don't return correct
  URL for the default model.

* fix docs and tests to match get_job_by_source() rather than get_job()

* Update invokeai/backend/model_manager/metadata/fetch/huggingface.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Call CivitaiMetadata.model_validate_json() directly

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Second round of revisions suggested by @ryanjdick:

- Fix type mismatch in `list_all_metadata()` route.
- Do not have a default value for the model install job id
- Remove static class variable declarations from non-Pydantic classes
- Change the `id` field to `model_id` for the sqlite3 `model_tags` table.
- Change AFTER DELETE triggers to ON DELETE CASCADE for the metadata and tags tables (see the sketch below).
- Make the `id` field of the `model_metadata` table a primary key to achieve uniqueness.
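
A minimal sketch of the cascade behaviour described above, with hypothetical column sets (the real schema lives in the model records/metadata store migrations):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON;")  # must be enabled per connection in SQLite

conn.executescript(
    """
    CREATE TABLE models (id TEXT PRIMARY KEY, name TEXT NOT NULL);

    -- `id` is both the primary key and a foreign key: metadata rows are unique
    -- per model and disappear automatically when the model row is deleted.
    CREATE TABLE model_metadata (
        id TEXT PRIMARY KEY REFERENCES models (id) ON DELETE CASCADE,
        metadata_json TEXT NOT NULL
    );

    CREATE TABLE model_tags (
        model_id TEXT NOT NULL REFERENCES models (id) ON DELETE CASCADE,
        tag_text TEXT NOT NULL
    );
    """
)

conn.execute("INSERT INTO models VALUES ('abc123', 'some-model')")
conn.execute("INSERT INTO model_metadata VALUES ('abc123', '{}')")
conn.execute("INSERT INTO model_tags VALUES ('abc123', 'anime')")
conn.execute("DELETE FROM models WHERE id = 'abc123'")

# The cascade removes the dependent rows without any AFTER DELETE trigger.
assert conn.execute("SELECT COUNT(*) FROM model_tags").fetchone()[0] == 0
assert conn.execute("SELECT COUNT(*) FROM model_metadata").fetchone()[0] == 0
```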

* Code cleanup suggested in PR review:

- Narrowed the declaration of the `parts` attribute of the download progress event
- Removed auto-conversion of str to Url in Url-containing sources
- Fixed handling of `InvalidModelConfigException`
- Made unknown sources raise `NotImplementedError` rather than `Exception`
- Improved status reporting on cached HuggingFace access tokens

* Multiple fixes:

- `job.total_size` returns a valid size for locally installed models
- new route `list_models` returns a paged summary of model name,
  description, tags and other essential info (see the sketch below)
- fix a few type errors
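
A hedged sketch of what such a paged summary might look like, using the pinned Pydantic (class and field names here are illustrative, not the actual route schema):

```python
from pydantic import BaseModel, Field


class ModelSummary(BaseModel):
    """Illustrative per-model summary: name, description, tags, etc."""

    key: str
    name: str
    description: str = ""
    tags: set[str] = Field(default_factory=set)


class PagedModelSummaries(BaseModel):
    """Illustrative page wrapper for a hypothetical `list_models` response."""

    page: int
    pages: int
    per_page: int
    total: int
    items: list[ModelSummary]
```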

* consolidated all invokeai root pytest fixtures into a single location

* Update invokeai/backend/model_manager/metadata/metadata_store.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Small tweaks in response to review comments:

- Remove flake8 configuration from pyproject.toml
- Use `id` rather than `modelId` for huggingface `ModelInfo` object
- Use `last_modified` rather than `LastModified` for huggingface `ModelInfo` object
- Add `sha256` field to file metadata downloaded from huggingface
- Add an `Invoker` argument to the model installer `start()` and `stop()` routines,
  but make it optional to facilitate use of the service outside the API (see the sketch below)
- Remove the redundant `PRAGMA foreign_keys` from the metadata store initialization code.
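
A minimal sketch of the optional-`Invoker` shape described above (the class bodies are stand-ins, not the real service code):

```python
from typing import Optional


class Invoker:
    """Stand-in for invokeai.app.services.invoker.Invoker."""


class ModelInstallService:
    def start(self, invoker: Optional[Invoker] = None) -> None:
        # Optional so the installer can also be driven outside the API,
        # e.g. from scripts or unit tests.
        self._invoker = invoker
        self._start_installer_thread()

    def stop(self, invoker: Optional[Invoker] = None) -> None:
        self._stop_installer_thread()

    def _start_installer_thread(self) -> None: ...

    def _stop_installer_thread(self) -> None: ...
```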

* Additional tweaks and minor bug fixes

- Fix calculation of aggregate diffusers model size to count only the
  size of files, not files + directories (which gives different unit test
  results on different filesystems); see the sketch below.
- Refactor _get_metadata() and _get_download_urls() to have distinct code paths
  for Civitai, HuggingFace and URL sources.
- Forward the `inplace` flag from the source to the job and added unit test for this.
- Attach cached model metadata to the job rather than to the model install service.
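
A minimal sketch of the file-only size calculation (the helper name is hypothetical):

```python
from pathlib import Path


def directory_size(model_path: Path) -> int:
    """Sum the sizes of regular files only; directory entries are skipped
    because their reported size differs between filesystems."""
    return sum(p.stat().st_size for p in model_path.rglob("*") if p.is_file())
```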

* fix unit test that was breaking on Windows due to CR/LF changing the size of test JSON files

* fix ruff formatting

* a few last minor fixes before merging:

- Turn job `error` and `error_type` into properties derived from the exception (see the sketch below).
- Add a TODO comment explaining why temporary directory destruction is handled
  manually rather than with tempfile.TemporaryDirectory().
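
A minimal sketch of deriving `error` and `error_type` from the stored exception (the job class shape is illustrative):

```python
import traceback
from typing import Optional


class ModelInstallJob:
    """Illustrative shape, not the real job class."""

    def __init__(self) -> None:
        self._exception: Optional[Exception] = None

    def set_error(self, exc: Exception) -> None:
        self._exception = exc

    @property
    def error_type(self) -> Optional[str]:
        return self._exception.__class__.__name__ if self._exception else None

    @property
    def error(self) -> Optional[str]:
        if self._exception is None:
            return None
        # Python >= 3.10: format_exception accepts the exception instance alone.
        return "".join(traceback.format_exception(self._exception))
```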

* add unit tests for reporting HTTP download errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-14 19:54:53 +00:00


[build-system]
requires = ["setuptools~=65.5", "pip~=22.3", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "InvokeAI"
description = "An implementation of Stable Diffusion which provides various new features and options to aid the image generation process"
requires-python = ">=3.10, <3.12"
readme = { content-type = "text/markdown", file = "README.md" }
keywords = ["stable-diffusion", "AI"]
dynamic = ["version"]
license = { file = "LICENSE" }
authors = [{ name = "The InvokeAI Project", email = "lincoln.stein@gmail.com" }]
classifiers = [
'Development Status :: 4 - Beta',
'Environment :: GPU',
'Environment :: GPU :: NVIDIA CUDA',
'Environment :: MacOS X',
'Intended Audience :: End Users/Desktop',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: POSIX :: Linux',
'Operating System :: MacOS',
'Operating System :: Microsoft :: Windows',
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: 3.10',
'Topic :: Artistic Software',
'Topic :: Internet :: WWW/HTTP :: WSGI :: Application',
'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',
'Topic :: Multimedia :: Graphics',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
'Topic :: Scientific/Engineering :: Image Processing',
]
dependencies = [
# Core generation dependencies, pinned for reproducible builds.
"accelerate==0.25.0",
"basicsr==1.4.2",
"clip_anytorch==2.5.2", # replacing "clip @ https://github.com/openai/CLIP/archive/eaa22acb90a5876642d0507623e859909230a52d.zip",
"compel==2.0.2",
"controlnet-aux==0.0.7",
"diffusers[torch]==0.25.0",
"invisible-watermark==0.2.0", # needed to install SDXL base and refiner using their repo_ids
"mediapipe==0.10.7", # needed for "mediapipeface" controlnet model
"numpy==1.26.3", # >1.24.0 is needed to use the 'strict' argument to np.testing.assert_array_equal()
"onnx==1.15.0",
"onnxruntime==1.16.3",
"opencv-python==4.9.0.80",
"pytorch-lightning==2.1.3",
"safetensors==0.4.1",
"timm==0.6.13", # needed to override timm latest in controlnet_aux, see https://github.com/isl-org/ZoeDepth/issues/26
"torch==2.1.2",
"torchmetrics==0.11.4",
"torchsde==0.2.6",
"torchvision==0.16.2",
"transformers==4.36.2",
# Core application dependencies, pinned for reproducible builds.
"fastapi-events==0.10.0",
"fastapi==0.108.0",
"huggingface-hub==0.20.2",
"pydantic-settings==2.1.0",
"pydantic==2.5.3",
"python-socketio==5.11.0",
"uvicorn[standard]==0.25.0",
# Auxiliary dependencies, pinned only if necessary.
"albumentations",
"click",
"datasets",
"dnspython~=2.4.0",
"dynamicprompts",
"easing-functions",
"einops",
"facexlib",
"imohash",
"matplotlib", # needed for plotting of Penner easing functions
"npyscreen",
"omegaconf",
"picklescan",
"pillow",
"prompt-toolkit",
"pympler~=1.0.1",
"pypatchmatch",
'pyperclip',
"pyreadline3",
"python-multipart",
"requests~=2.28.2",
"rich~=13.3",
"scikit-image~=0.21.0",
"semver~=3.0.1",
"send2trash",
"test-tube~=0.7.5",
"windows-curses; sys_platform=='win32'",
]
[project.optional-dependencies]
"xformers" = [
# Core generation dependencies, pinned for reproducible builds.
"xformers==0.0.23.post1; sys_platform!='darwin'",
# Auxiliary dependencies, pinned only if necessary.
"triton; sys_platform=='linux'",
]
"onnx" = ["onnxruntime"]
"onnx-cuda" = ["onnxruntime-gpu"]
"onnx-directml" = ["onnxruntime-directml"]
"dist" = ["pip-tools", "pipdeptree", "twine"]
"docs" = [
"mkdocs-material<9.0",
"mkdocs-git-revision-date-localized-plugin",
"mkdocs-redirects==1.2.0",
]
"dev" = ["jurigged", "pudb"]
"test" = [
"ruff==0.1.11",
"ruff-lsp",
"mypy",
"pre-commit",
"pytest>6.0.0",
"pytest-cov",
"pytest-datadir",
"requests_testadapter",
]
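# Example usage (not part of the build metadata): the extras above are selected
# at install time with pip's extras syntax, e.g.
#   pip install ".[xformers,dev,test]"   # from a source checkout
#   pip install "InvokeAI[xformers]"     # from a built/published package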
[project.scripts]
# legacy entrypoints; provided for backwards compatibility
"configure_invokeai.py" = "invokeai.frontend.install.invokeai_configure:invokeai_configure"
"textual_inversion.py" = "invokeai.frontend.training:invokeai_textual_inversion"
# shortcut commands to start web ui
# "invokeai --web" will launch the web interface
# "invokeai" = "invokeai.frontend.legacy_launch_invokeai:main"
# new shortcut to launch web interface
"invokeai-web" = "invokeai.app.api_app:invoke_api"
# full commands
"invokeai-configure" = "invokeai.frontend.install.invokeai_configure:invokeai_configure"
"invokeai-merge" = "invokeai.frontend.merge:invokeai_merge_diffusers"
"invokeai-ti" = "invokeai.frontend.training:invokeai_textual_inversion"
"invokeai-model-install" = "invokeai.frontend.install.model_install:main"
"invokeai-migrate3" = "invokeai.backend.install.migrate_to_3:main"
"invokeai-update" = "invokeai.frontend.install.invokeai_update:main"
"invokeai-metadata" = "invokeai.backend.image_util.invoke_metadata:main"
"invokeai-node-web" = "invokeai.app.api_app:invoke_api"
"invokeai-import-images" = "invokeai.frontend.install.import_images:main"
"invokeai-db-maintenance" = "invokeai.backend.util.db_maintenance:main"
[project.urls]
"Homepage" = "https://invoke-ai.github.io/InvokeAI/"
"Documentation" = "https://invoke-ai.github.io/InvokeAI/"
"Source" = "https://github.com/invoke-ai/InvokeAI/"
"Bug Reports" = "https://github.com/invoke-ai/InvokeAI/issues"
"Discord" = "https://discord.gg/ZmtBAhwWhy"
[tool.setuptools.dynamic]
version = { attr = "invokeai.version.__version__" }
[tool.setuptools.packages.find]
"where" = ["."]
"include" = [
"invokeai.assets.fonts*",
"invokeai.version*",
"invokeai.generator*",
"invokeai.backend*",
"invokeai.frontend*",
"invokeai.frontend.web.dist*",
"invokeai.frontend.web.static*",
"invokeai.configs*",
"invokeai.app*",
]
[tool.setuptools.package-data]
"invokeai.app.assets" = ["**/*.png"]
"invokeai.app.services.workflow_records.default_workflows" = ["*.json"]
"invokeai.assets.fonts" = ["**/*.ttf"]
"invokeai.backend" = ["**.png"]
"invokeai.configs" = ["*.example", "**/*.yaml", "*.txt"]
"invokeai.frontend.web.dist" = ["**"]
"invokeai.frontend.web.static" = ["**"]
"invokeai.app.invocations" = ["**"]
#=== Begin: PyTest and Coverage
[tool.pytest.ini_options]
addopts = "--cov-report term --cov-report html --cov-report xml --strict-markers -m \"not slow\""
markers = [
"slow: Marks tests as slow. Disabled by default. To run all tests, use -m \"\". To run only slow tests, use -m \"slow\".",
]
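# Example (not part of the configuration): a test opts into the slow suite by
# carrying the marker declared above, e.g.
#   @pytest.mark.slow
#   def test_download_real_model(): ...
# With --strict-markers, any marker not declared here is an error, and the
# default addopts (-m "not slow") keep such tests out of ordinary runs.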
[tool.coverage.run]
branch = true
source = ["invokeai"]
omit = ["*tests*", "*migrations*", ".venv/*", "*.env"]
[tool.coverage.report]
show_missing = true
fail_under = 85 # let's set something sensible on Day 1 ...
[tool.coverage.json]
output = "coverage/coverage.json"
pretty_print = true
[tool.coverage.html]
directory = "coverage/html"
[tool.coverage.xml]
output = "coverage/index.xml"
#=== End: PyTest and Coverage
#=== Begin: Ruff
[tool.ruff]
line-length = 120
ignore = [
"E501", # https://docs.astral.sh/ruff/rules/line-too-long/
"C901", # https://docs.astral.sh/ruff/rules/complex-structure/
"B008", # https://docs.astral.sh/ruff/rules/function-call-in-default-argument/
"B904", # https://docs.astral.sh/ruff/rules/raise-without-from-inside-except/
]
select = ["B", "C", "E", "F", "W", "I"]
exclude = [
".git",
"__pycache__",
"build",
"dist",
"invokeai/frontend/web/node_modules/",
".venv*",
]
#=== End: Ruff
#=== Begin: MyPy
# global mypy config
[tool.mypy]
ignore_missing_imports = true # ignores missing types in third-party libraries
strict = true
plugins = "pydantic.mypy"
exclude = ["tests/*"]
# overrides for specific modules
[[tool.mypy.overrides]]
follow_imports = "skip" # skips type checking of the modules listed below
module = [
"invokeai.app.api.routers.models",
"invokeai.app.invocations.compel",
"invokeai.app.invocations.latent",
"invokeai.app.services.invocation_stats.invocation_stats_default",
"invokeai.app.services.model_manager.model_manager_base",
"invokeai.app.services.model_manager.model_manager_default",
"invokeai.app.services.model_records.model_records_sql",
"invokeai.app.util.controlnet_utils",
"invokeai.backend.image_util.txt2mask",
"invokeai.backend.image_util.safety_checker",
"invokeai.backend.image_util.patchmatch",
"invokeai.backend.image_util.invisible_watermark",
"invokeai.backend.install.model_install_backend",
"invokeai.backend.ip_adapter.ip_adapter",
"invokeai.backend.ip_adapter.resampler",
"invokeai.backend.ip_adapter.unet_patcher",
"invokeai.backend.model_management.convert_ckpt_to_diffusers",
"invokeai.backend.model_management.lora",
"invokeai.backend.model_management.model_cache",
"invokeai.backend.model_management.model_manager",
"invokeai.backend.model_management.model_merge",
"invokeai.backend.model_management.model_probe",
"invokeai.backend.model_management.model_search",
"invokeai.backend.model_management.models.*", # this is needed to ignore the module's `__init__.py`
"invokeai.backend.model_management.models.base",
"invokeai.backend.model_management.models.controlnet",
"invokeai.backend.model_management.models.ip_adapter",
"invokeai.backend.model_management.models.lora",
"invokeai.backend.model_management.models.sdxl",
"invokeai.backend.model_management.models.stable_diffusion",
"invokeai.backend.model_management.models.vae",
"invokeai.backend.model_management.seamless",
"invokeai.backend.model_management.util",
"invokeai.backend.stable_diffusion.diffusers_pipeline",
"invokeai.backend.stable_diffusion.diffusion.cross_attention_control",
"invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion",
"invokeai.backend.util.hotfixes",
"invokeai.backend.util.mps_fixes",
"invokeai.backend.util.util",
"invokeai.frontend.install.model_install",
]
#=== End: MyPy