* partially working simple installer
* works on linux
* fix linux requirements files
* read root environment variable in right place
* fix cat invokeai.init in test workflows
* fix classic cp error in test-invoke-pip.yml
* respect --root argument now
* untested bat installers added
* windows install.bat now working
- fix logic to find frontend files
* rename simple_install to "installer"
1. simple_install => 'installer'
2. source and binary install directories are removed
* enable update scripts to update requirements
- Also pin requirements to known working commits.
- This may be a breaking change; exercise with caution
- No functional testing performed yet!
* update docs and installation requirements
NOTE: This may be a breaking commit! Due to the way the installer
works, I have to push to a public branch in order to do full end-to-end
testing.
- Updated installation docs, removing binary and source installers and
substituting the "simple" unified installer.
- Pin requirements for the "http:" downloads to known working commits.
- Removed as much as possible the invoke-ai forks of others' repos.
* fix directory path for installer
* correct requirement/environment errors
* exclude zip files in .gitignore
* possible fix for dockerbuild
* ready for torture testing
- final Windows bat file tweaks
- copy environments-and-requirements to the runtime directory so that
the `update.sh` script can run.
This is not ideal, since we lose control over the
requirements. It would be better for the update script to pull the
updated requirements file from the repository.
* allow update.sh/update.bat to install arbitrary InvokeAI versions
- Can pass the zip file path to any InvokeAI release, branch, commit or tag,
and the installer will try to install it.
- Updated documentation
- Added Linux Python install hints.
* use binary installer's :err_exit function
* use diffusers 0.10.0
* added logic for CPPFLAGS on mac
* improve windows install documentation
- added information on a couple of gotchas I experienced during
windows installation, including DLL loading errors experienced
when Visual Studio C++ Redistributable was not present.
* tagged to pull from 2.2.4-rc1
- also fix an error in which the shell window closed immediately if a
suitable python was not found
Co-authored-by: mauwii <Mauwii@outlook.de>
* attention maps saving to /tmp
* tidy up diffusers branch backporting of cross attention refactoring
* base64-encoding the attention maps image for generationResult
* cleanup/refactor conditioning.py
* attention maps and tokens being sent to web UI
* attention maps: restrict count to actual token count and improve robustness
* add argument type hint to image_to_dataURL function
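The base64 data-URL step can be sketched as below. This is a minimal illustration, assuming the encoding works on raw image bytes; the actual `image_to_dataURL` in the codebase takes an image object, but the base64-to-data-URL conversion is the same.

```python
import base64


def image_to_dataURL(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data URL suitable for
    embedding in a JSON payload such as generationResult.

    Sketch only: the real function operates on a PIL image and
    serializes it to PNG first; here we assume the bytes are ready.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"
```

The web UI can then use the returned string directly as an `<img src>` value without a separate HTTP fetch.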
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: damian <git@damianstewart.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
* add windows to test runners
* disable fail-fast for debugging
* re-enable login shell for conda workflow
also fix expression to exclude windows from run tests
* enable fail-fast again
* fix condition, pin runner versions
* remove feature branch from push trigger
since already being triggered now via PR
* use gfpgan from pypi for windows
curious whether this would fix the installation here as well,
since it worked for #1802
* unpin basicsr for windows
* for curiosity enabling testing for windows as well
* disable pip cache
since windows now failed with a memory error,
but was working before the cache was added
* use matrix.github-env
* set platform specific root and outdir
* disable tests for windows since memory error
The windows installation apparently uses more space than linux,
and the windows runners have less swap memory
In the event where no `init_mask` is given and `invert_mask` is set to True, the script will raise the following error:
```bash
AttributeError: 'NoneType' object has no attribute 'mode'
```
The new implementation only runs mask inversion when an `init_mask` is actually provided.
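A minimal sketch of the guard described above; the function and mask API names are illustrative, not the repository's actual identifiers:

```python
def prepare_mask(init_mask, invert_mask):
    """Return the (possibly inverted) mask, or None when no mask was given.

    Previously, invert_mask=True with init_mask=None raised
    AttributeError: 'NoneType' object has no attribute 'mode',
    because inversion was attempted on a missing mask image.
    """
    if init_mask is None:
        return None  # nothing to invert; skip inversion entirely
    if invert_mask:
        return init_mask.invert()  # assumed mask-image API; illustrative only
    return init_mask
```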
* update docker build (cloud) action with additional metadata, new labels
* (docker) also add aarch64 cloud build and remove arch suffix
* (docker) architecture suffix is needed for now
* (docker) don't build aarch64 for now
* add docker build optimized for size; do not copy models to image
useful for cloud deployments. attempts to utilize docker layer
caching as effectively as possible. also some quick tools to help with
building
* add workflow to build cloud img in ci
* push cloud image in addition to building
* (ci) also tag docker images with git SHA
* (docker) rework Makefile for easy cache population and local use
* support the new conda-less install; further optimize docker build
* (ci) clean up the build-cloud-img action
* improve the Makefile for local use
* move execution of invoke script from entrypoint to cmd, allows overriding the cmd if needed (e.g. in Runpod)
* remove unnecessary copyright statements
* (docs) add a section on running InvokeAI in the cloud using Docker
* (docker) add patchmatch to the cloud image; improve build caching; simplify Makefile
* (docker) fix pip requirements path to use binary_installer directory
prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos> - to make a default prompt length of <bos> + 75 prompt tokens + <eos>. the .swap() code was failing to take the column for <bos> at index 0 into account. the changes here do that, and also add extra handling for a single <eos> (which may be redundant but which is included for completeness).
based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is because over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. a change to token n-1 has flow-on effects on how the [1x4x64x64] latent tensor is modified by all the tokens after it and, as the next step is processed, by all the tokens before it as well. intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. so even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it nevertheless still had some non-negligible effect, in roughly the right direction, often enough that it seemed like it was working properly.
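The off-by-one above can be sketched as follows. This is a simplified illustration, assuming CLIP-style token ids and a 77-column sequence of `<bos>` + 75 prompt tokens + repeated `<eos>`; the names are not the repository's actual identifiers.

```python
BOS, EOS = 49406, 49407  # CLIP tokenizer marker ids; illustrative
MAX_PROMPT_TOKENS = 75


def build_token_sequence(prompt_token_ids):
    """Pad a prompt to <bos> + 75 tokens + repeated <eos>, length 77."""
    ids = list(prompt_token_ids)[:MAX_PROMPT_TOKENS]
    pad = MAX_PROMPT_TOKENS - len(ids)
    return [BOS] + ids + [EOS] * (pad + 1)  # at least one <eos>


def column_for_prompt_token(n):
    """Column in the conditioning tensor holding the nth prompt token.

    This is the fix described above: without the +1, a .swap() edit
    aimed at prompt token n lands on token n-1, because column 0 is
    occupied by the <bos> marker.
    """
    return n + 1
```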
- Pass command-line arguments through to invoke.py via the .bat and .sh scripts.
- Remove obsolete warning message from binary install.bat
- Make sure that current working directory matches where .bat file is installed
also "Macintosh" → "macOS" to improve "We Support macOS Properly And Not Halfassed Like Other OSS Projects" signalling
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Several users have been trying to run InvokeAI on GTX 1650 and 1660
cards. They really can't because these cards don't work with
half-precision and only have 4-6GB of memory. Added a warning to
the docs (in two places) about this problem.
- Add Xcode installation instructions to source installer walkthrough
- Fix link to source installer page from installer overview
- If OSX install crashes, script will tell Mac users to go to the docs
to learn how to install Xcode
NB: if we remove the version from the zip file names, we can link
directly to assets in the latest GH release from documentation without
the need to keep the links updated