## Summary

This PR adds three model-manager-related methods to the `InvocationContext` uniform API. They are accessible via `context.models.*`:

1. **`load_local_model(model_path: Path, loader: Optional[Callable[[Path], AnyModel]] = None) -> LoadedModelWithoutConfig`**

   *Load the model located at the indicated path.*

   This loads a local model (a `.safetensors` or `.ckpt` file, or a diffusers directory) into the model manager RAM cache and returns its `LoadedModelWithoutConfig`. If the optional `loader` argument is provided, it is invoked to load the model into memory; otherwise the method calls `safetensors.torch.load_file()`, `torch.load()` (with a pickle scan), or `from_pretrained()`, as appropriate to the path type. Be aware that `LoadedModelWithoutConfig` differs from `LoadedModel` in having no `config` attribute.

   Here is an example of usage:

   ```
   def invoke(self, context: InvocationContext) -> ImageOutput:
       model_path = Path('/opt/models/RealESRGAN_x4plus.pth')
       loadnet = context.models.load_local_model(model_path)
       with loadnet as loadnet_model:
           upscaler = RealESRGAN(loadnet=loadnet_model, ...)
   ```

---

2. **`load_remote_model(source: str | AnyHttpUrl, loader: Optional[Callable[[Path], AnyModel]] = None) -> LoadedModelWithoutConfig`**

   *Load the model located at the indicated URL or repo_id.*

   This is similar to `load_local_model()`, but it accepts either a HuggingFace repo_id (as a string) or a URL. The model's file(s) are downloaded to `models/.download_cache` and then loaded, returning a `LoadedModelWithoutConfig`.

   ```
   def invoke(self, context: InvocationContext) -> ImageOutput:
       model_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
       loadnet = context.models.load_remote_model(model_url)
       with loadnet as loadnet_model:
           upscaler = RealESRGAN(loadnet=loadnet_model, ...)
   ```

---

3. **`download_and_cache_model(source: str | AnyHttpUrl, access_token: Optional[str] = None, timeout: Optional[int] = 0) -> Path`**

   *Download the model file located at `source` to the models cache and return its Path.*

   This checks `models/.download_cache` for the desired model file and downloads it from the indicated source if it is not already present. The local Path to the downloaded file is then returned.

---

## Other Changes

This PR performs a migration that renames `models/.cache` to `models/.convert_cache` and moves previously downloaded ESRGAN, openpose, DepthAnything, and Lama inpaint models from the `models/core` directory into `models/.download_cache`.

There are a number of legacy model files in `models/core`, such as GFPGAN, which are no longer used. This PR deletes them and tidies up the `models/core` directory.

## Related Issues / Discussions

I have systematically replaced all calls to `download_with_progress_bar()`. This function is no longer used elsewhere and has been removed.

## QA Instructions

I have added unit tests for the three new calls. You can verify that the `download_and_cache_model()` call is working by running the upscaler within the web app. On the first try, you will see the model file being downloaded into the `models/.download_cache` directory. On subsequent tries, the model will either load from RAM (if it hasn't been displaced) or be loaded from the filesystem.

## Merge Plan

Squash merge when approved.

## Checklist

- [X] _The PR has a short but descriptive title, suitable for a changelog_
- [X] _Tests added / updated (if applicable)_
- [X] _Documentation added / updated (if applicable)_
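For reviewers unfamiliar with the caching flow: the download-once, reuse-thereafter behavior of `download_and_cache_model()` can be sketched as a small standalone helper. This is an illustrative stand-in, not the InvokeAI implementation; the function name and the cache-keying scheme below are hypothetical.

```python
import hashlib
from pathlib import Path
from urllib.request import urlretrieve


def download_and_cache(source: str, cache_dir: Path) -> Path:
    """Fetch `source` into `cache_dir` once; later calls return the cached Path."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    # Key the cache entry by a hash of the source URL plus its original
    # filename, so distinct sources never collide on a shared filename.
    name = hashlib.sha256(source.encode()).hexdigest()[:16] + "-" + source.rsplit("/", 1)[-1]
    target = cache_dir / name
    if not target.exists():  # cache hit: skip the network entirely
        urlretrieve(source, target)
    return target
```

The real method additionally takes `access_token` and `timeout` arguments and stores files under `models/.download_cache`, as described above.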
# Invoke - Professional Creative AI Tools for Visual Media
To learn more about Invoke, or implement our Business solutions, visit invoke.com
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.
Installation and Updates - Documentation and Tutorials - Bug Reports - Contributing
## Quick Start

1. Download and unzip the installer from the bottom of the latest release.
2. Run the installer script.
   - Windows: Double-click on the `install.bat` script.
   - macOS: Open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press enter.
   - Linux: Run `install.sh`.
3. When prompted, enter a location for the install and select your GPU type.
4. Once the install finishes, find the directory you selected during install. The default location is `C:\Users\Username\invokeai` for Windows or `~/invokeai` for Linux/macOS.
5. Run the launcher script (`invoke.bat` for Windows, `invoke.sh` for macOS and Linux) the same way you ran the installer script in step 2.
6. Select option 1 to start the application. Once it starts up, open your browser and go to http://localhost:9090.
7. Open the model manager tab to install a starter model, and then you'll be ready to generate.
More detail, including hardware requirements and manual install instructions, is available in the installation documentation.
## Troubleshooting, FAQ and Support
Please review our FAQ for solutions to common installation problems and other issues.
For more help, please join our Discord.
## Features
Full details on features can be found in our documentation.
### Web Server & UI
Invoke runs a locally hosted web server & React UI with an industry-leading user experience.
### Unified Canvas
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/out-painting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### Workflows & Nodes

Invoke offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use cases.
### Board & Gallery Management

Invoke features an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.
### Other features
- Support for both ckpt and diffusers models
- SD1.5, SD2.0, and SDXL support
- Upscaling Tools
- Embedding Manager & Support
- Model Manager & Support
- Workflow creation & management
- Node-Based Architecture
## Contributing
Anyone who wishes to contribute to this project - whether documentation, features, bug fixes, code cleanup, testing, or code reviews - is very much encouraged to do so.
Get started with contributing by reading our contribution documentation, joining the #dev-chat or the GitHub discussion board.
We hope you enjoy using Invoke as much as we enjoy creating it, and we hope you will elect to become part of our community.
## Thanks
Invoke is a combined effort of passionate and talented people from across the world. We thank them for their time, hard work and effort.
Original portions of the software are Copyright © 2024 by respective contributors.