Mirror of https://github.com/invoke-ai/InvokeAI
Synced 2024-08-30 20:32:17 +00:00 (af239fa122)

20 Commits
Author | SHA1 | Message
---|---|---
Lincoln Stein | d582cf2961 | default launcher to choice [1] not [2]
Lincoln Stein | 69ef1e1e56 | speculative change to upgrade script
Lincoln Stein | 218eb8522f | tweak launcher option wording
Lincoln Stein | 117536324c | the "restore" env variable in .bat launcher confuses pydantic
Lincoln Stein | d2dc1ed26f | make InvokeAI package installable

This commit makes InvokeAI 3.0 installable via PyPI and the installer script. Main changes:

1. Move static web pages into `invokeai/frontend/web` and modify the API to look for them there. This allows pip to copy the files into the distribution directory, so the user no longer has to be in the repo root to launch (see the sketch after this entry).
2. Update invoke.sh and invoke.bat to launch the new web application properly. This also changes the wording for launching the CLI from "generate images" to "explore the InvokeAI node system," since I would not recommend using the CLI to generate images routinely.
3. Fix a bug in the checkpoint converter script that was identified during testing.
4. Better error reporting when the checkpoint converter fails.
5. Rebuild the front end.
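A minimal sketch of the idea in item 1, assuming a FastAPI app and a `dist` subdirectory of `invokeai.frontend.web` holding the built assets (the subdirectory name is an assumption): resolve the static directory relative to the installed package rather than the current working directory, so the server no longer depends on the repo root.

```python
# Illustrative sketch, not InvokeAI's actual code: serve the bundled web
# assets from wherever pip installed the package.
from pathlib import Path

from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

import invokeai.frontend.web as web_pkg  # package path named in the commit message

app = FastAPI()

# "dist" is an assumed name for the built-frontend folder inside the package.
static_dir = Path(web_pkg.__file__).parent / "dist"
app.mount("/", StaticFiles(directory=static_dir, html=True), name="web")
```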
Lincoln Stein | d4fb16825e | move static into invokeai.frontend.web directory for dist install
Lincoln Stein | 9b157b6532 | fix several issues with Windows installs

1. Resize the installer window to give more room for the configure and download forms.
2. Replace '\' with '/' in directory names so the user can drag and drop folders into the dialogue boxes that accept directories (see the sketch after this entry).
3. Make a similar change in the CLI for the !import_model and !convert_model commands.
4. Better error reporting when a model download fails due to network errors.
5. Put the launcher scripts into a loop so that the menu reappears after invokeai, the merge script, etc. exits. The user can quit with "Q".
6. Do not try to download the fp16 version of sd-ft-mse-vae, since it doesn't exist.
7. Clean up status reporting when installing models.
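A hedged sketch of the backslash-to-slash normalization described in item 2; `normalize_dropped_path` is an illustrative helper, not the installer's actual function.

```python
# Illustrative only: normalize a Windows path that was dragged into a
# directory prompt so that '\' becomes '/'.
from pathlib import PureWindowsPath

def normalize_dropped_path(raw: str) -> str:
    cleaned = raw.strip().strip('"')            # Windows wraps dragged paths in quotes
    return PureWindowsPath(cleaned).as_posix()  # C:\Users\me\models -> C:/Users/me/models

print(normalize_dropped_path(r'"C:\Users\me\invokeai\models"'))
```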
Lincoln Stein | fff41a7349 | merged with main
Lincoln Stein | 9d8236c59d | tested and working on Ubuntu

You can now achieve several effects:

- `invokeai-configure`: use a console-based UI to initialize invokeai.init, download support models, and choose and download SD models.
- `invokeai-configure --yes`: without activating the GUI, populate invokeai.init with default values, download support models, and download the "recommended" SD models.
- `invokeai-configure --default_only`: as above, but only download the default SD model (currently SD-1.5).
- `invokeai-model-install`: select and install models. This can be used to download arbitrary models from the Internet, install HuggingFace models using their repo_id, or watch a directory for models to load at startup time.
- `invokeai-model-install --yes`: import the recommended SD models without a GUI.
- `invokeai-model-install --default_only`: as above, but only import the default model.
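A hedged argparse sketch of how the `--yes` and `--default_only` flags listed above could be wired; the flag names come from the entry, everything else is illustrative.

```python
# Illustrative flag handling only; the real invokeai-configure does much more.
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(prog="invokeai-configure")
    parser.add_argument("--yes", action="store_true",
                        help="skip the console UI and accept the recommended defaults")
    parser.add_argument("--default_only", action="store_true",
                        help="download only the default SD model")
    args = parser.parse_args()

    if args.default_only:
        print("downloading the default SD model only")
    elif args.yes:
        print("downloading the recommended SD models without the console UI")
    else:
        print("launching the console-based UI")

if __name__ == "__main__":
    main()
```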
Lincoln Stein | 7545e38655 | frontend design done; functionality not hooked up yet
Lincoln Stein | 714fff39ba | add new console frontend to initial model selection, and other improvements

1. The invokeai-configure script has been refactored. The work of selecting and downloading initial models at install time is now done by a script named invokeai-initial-models (module name ldm.invoke.config.initial_model_select). The calling arguments for invokeai-configure have not changed, so nothing should break. After initializing the root directory, the script calls invokeai-initial-models to let the user select the starting models to install.
2. invokeai-initial-models puts up a console GUI with checkboxes to indicate which models to install. It respects the --default_only and --yes arguments so that CI will continue to work.
3. The user can now edit the VAE assigned to diffusers models in the CLI.
4. Fixed a bug that caused a crash during model loading when the VAE is set to None rather than being empty (see the sketch after this entry).
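A minimal sketch of the defensive check implied by item 4, assuming a dict-like model stanza; this is illustrative, not the actual fix.

```python
# Illustrative only: treat a VAE entry of None the same as a missing or empty
# entry so that model loading does not crash.
from typing import Optional

def resolve_vae(model_config: dict) -> Optional[str]:
    vae = model_config.get("vae")
    if not vae:  # covers both None and "" without raising
        return None
    return str(vae)

print(resolve_vae({"vae": None}))                         # None, no crash
print(resolve_vae({"vae": "stabilityai/sd-vae-ft-mse"}))  # the configured VAE
```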
Lincoln Stein | 24e32f6ae2 | add 'update' action to launcher script

- Adds an update action to the launcher script.
- This action calls a new Python script, `invokeai-update`, which prompts the user to update to the latest release version, the main development version, or an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.
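A hedged sketch of the update flow this entry describes: prompt for a tag or branch and hand it to `pip`. The GitHub archive URL pattern is an assumption about how the requested version is resolved.

```python
# Illustrative sketch, not the real invokeai-update script.
import subprocess
import sys

def update_to(ref: str) -> None:
    # Install the requested release tag, branch, or commit from the GitHub archive.
    url = f"https://github.com/invoke-ai/InvokeAI/archive/{ref}.zip"
    subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", url])

if __name__ == "__main__":
    ref = input("Release tag, branch, or commit to install [main]: ").strip() or "main"
    update_to(ref)
```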
Lincoln Stein | c6d13e679f | make menu options in invoke.bat the same as options in invoke.sh

- This makes the launcher options menu on Windows look and act the same as the Linux/Mac launcher; the Windows menu previously lacked the command-line help option and didn't list item (6) as an option.
Lincoln Stein | c50b64ec1d | correct default menu entry in install.bat file
Eugene Brodsky | 6cbdd88fe2 | (installer) correctly call invokeai entrypoints in .bat launch script
Eugene Brodsky | 34e0d7aaa8 | (config) rename all mentions of scripts/configure_invokeai.py to the new invokeai-configure command
Lincoln Stein | 2817f8a428 | update launcher shell scripts for new script names & paths
Lincoln Stein | 48deb3e49d | add model merging documentation and launcher script menu entries
Kevin Turner | 6fdbc1978d | use 🧨diffusers model (#1583)

* initial commit of DiffusionPipeline class
* spike: proof of concept using diffusers for txt2img
* doc: type hints for Generator
* refactor(model_cache): factor out load_ckpt
* model_cache: add ability to load a diffusers model pipeline, and update associated things in Generate & Generator to not instantly fail when that happens
* model_cache: fix model default image dimensions
* txt2img: support switching diffusers schedulers
* diffusers: let the scheduler do its scaling of the initial latents. Remove the IPNDM scheduler; it is not behaving.
* web server: update image_progress callback for diffusers data
* diffusers: restore prompt weighting feature
* diffusers: fix set-sampler error following model switch
* diffusers: use InvokeAIDiffuserComponent for conditioning
* cross_attention_control: stub (no-op) implementations for diffusers
* model_cache: let offload_model work with DiffusionPipeline, sorta.
* models.yaml.example: add diffusers-format model, set as default
* test-invoke-conda: use diffusers-format model; put huggingface-token where the library can use it
* environment-mac: upgrade to diffusers 0.7 (from 0.6). This was already done for Linux; Mac must have been lost in the merge.
* preload_models: explicitly load diffusers models, in non-interactive mode too, as long as you're logged in.
* fix(model_cache): don't check `model.config` in diffusers format (clean-up from recent merge)
* diffusers integration: support img2img
* dev: upgrade to diffusers 0.8 (from 0.7.1). We get to remove some code by using methods that were factored out in the base class.
* refactor: remove backported img2img.get_timesteps now that we can use it directly from diffusers 0.8.1
* ci: use diffusers model
* dev: upgrade to diffusers 0.9 (from 0.8.1)
* lint: correct annotations for Python 3.9.
* lint: correct AttributeError.name reference for Python 3.9.
* CI: prefer diffusers-1.4 because it no longer requires a token. The RunwayML models still do.
* build: there's yet another place to update requirements?
* configure: try to download models even without a token. Models in the CompVis and stabilityai repos no longer require one (but runwayml still does).
* configure: add troubleshooting info for config-not-found
* fix(configure): prepend root to config path
* fix(configure): remove second `default: true` from models example
* CI: simplify test-on-push logic now that we don't need secrets. The "test on push but only in forks" logic was only necessary when tests didn't work for PRs-from-forks.
* create an embedding_manager for diffusers
* internal: avoid importing diffusers DummyObject (see https://github.com/huggingface/diffusers/issues/1479)
* fix "config attributes…not expected" diffusers warnings.
* fix deprecated scheduler construction
* work around an apparent MPS torch bug that causes conditioning to have no effect
* 🚧 post-rebase repair
* preliminary support for outpainting (no masking yet)
* monkey-patch diffusers.attention and use Invoke lowvram code
* add always_use_cpu arg to bypass MPS
* add cross-attention control support to diffusers (fails on MPS). For unknown reasons MPS produces garbage output with .swap(); use the --always_use_cpu arg to invoke.py for now to test this code on MPS.
* diffusers support for the inpainting model
* fix debug_image to not crash with non-RGB images.
* inpainting for the normal model [WIP]. This seems to be performing well until the LAST STEP, at which point it dissolves to confetti.
* fix off-by-one bug in cross-attention-control (#1774). Prompt token sequences begin with a "beginning-of-sequence" marker `<bos>` and end with a repeated "end-of-sequence" marker `<eos>`, making a default prompt length of `<bos>` + 75 prompt tokens + `<eos>`. The .swap() code was failing to take the column for `<bos>` at index 0 into account; the changes here do that, and also add extra handling for a single `<eos>` (which may be redundant but is included for completeness). Based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is that over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop: a change to token n-1 has flow-on effects on how the [1x4x64x64] latent tensor is modified by all the tokens after it, and, as the next step is processed, by all the tokens before it as well. Intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. So even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it nevertheless still had some non-negligible effect, in roughly the right direction, often enough that it seemed like it was working properly. (See the sketch after this entry for the `<bos>` offset.)
* refactor common CrossAttention stuff into a mixin so that the old ldm code can still work if necessary
* inpainting for the normal model. I think it works this time.
* diffusers: reset num_vectors_per_token sync with
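To make the `<bos>` off-by-one in the bullet above concrete, here is a hedged illustration using the standard CLIP tokenizer (the model name is chosen only to make the example runnable): column 0 of the 77-token sequence is `<bos>`, so prompt word i lives at token index i + 1.

```python
# Illustration only: show that index 0 of a CLIP prompt sequence is <bos>,
# the column the original .swap() code failed to account for.
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
ids = tok("a cat on a mat", padding="max_length", max_length=77,
          truncation=True).input_ids

assert ids[0] == tok.bos_token_id            # <bos> occupies column 0
print(tok.convert_ids_to_tokens(ids[1:6]))   # the five prompt tokens start at index 1
assert ids[6] == tok.eos_token_id            # first <eos>; padding repeats it to length 77
```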
Lincoln Stein | 0439b51a26 | Simple Installer for Unified Directory Structure, Initial Implementation (#1819)

* partially working simple installer
* works on linux
* fix linux requirements files
* read root environment variable in right place
* fix cat invokeai.init in test workflows
* fix classical cp error in test-invoke-pip.yml
* respect --root argument now
* untested bat installers added
* windows install.bat now working; fix logic to find frontend files
* rename simple_install to "installer":
  1. simple_install => 'installer'
  2. source and binary install directories are removed
* enable update scripts to update requirements
  - Also pin requirements to known working commits.
  - This may be a breaking change; exercise with caution.
  - No functional testing performed yet!
* update docs and installation requirements. NOTE: This may be a breaking commit! Due to the way the installer works, I have to push to a public branch in order to do full end-to-end testing.
  - Updated installation docs, removing the binary and source installers and substituting the "simple" unified installer.
  - Pin requirements for the "http:" downloads to known working commits.
  - Removed as much as possible the invoke-ai forks of others' repos.
* fix directory path for installer
* correct requirement/environment errors
* exclude zip files in .gitignore
* possible fix for dockerbuild
* ready for torture testing
  - Final Windows bat file tweaks.
  - Copy environments-and-requirements to the runtime directory so that the `update.sh` script can run. This is not ideal, since we lose control over the requirements. Better for the update script to pull the proper updated requirements script from the repository.
* allow update.sh/update.bat to install arbitrary InvokeAI versions
  - Can pass the zip file path of any InvokeAI release, branch, commit or tag, and the installer will try to install it.
  - Updated documentation.
  - Added Linux Python install hints.
* use binary installer's :err_exit function
* use diffusers 0.10.0
* added logic for CPPFLAGS on mac
* improve windows install documentation
  - Added information on a couple of gotchas I experienced during Windows installation, including DLL loading errors when the Visual Studio C++ Redistributable was not present.
* tagged to pull from 2.2.4-rc1
  - Also fix the error of the shell window closing immediately if a suitable Python is not found.

Co-authored-by: mauwii <Mauwii@outlook.de>
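A hedged sketch of the root-directory resolution touched on above ("respect --root argument now", "read root environment variable in right place"); the INVOKEAI_ROOT variable name and the fallback location are assumptions made for illustration.

```python
# Illustrative resolution order only: CLI --root, then an environment
# variable, then a default under the user's home directory.
import os
from pathlib import Path
from typing import Optional

def resolve_root(cli_root: Optional[str] = None) -> Path:
    if cli_root:
        return Path(cli_root).expanduser()
    env_root = os.environ.get("INVOKEAI_ROOT")  # variable name is an assumption
    if env_root:
        return Path(env_root).expanduser()
    return Path.home() / "invokeai"

print(resolve_root())
```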