- Fixed the codeformer module so that the facexlib files are downloaded
into their expected location in models/gfpgan/weights (shared
with the GFPGAN module)
* installer tweaks in preparation for v2.2.5
- pin numpy to 1.23.* to avoid requirements conflict with numba
- update.sh and update.bat now accept a tag or branch string, not a URL
- update scripts download latest requirements-base before updating.
* update.bat.in debugged and working
* update pulls from "latest" now
* bump version number
* fix permissions on create_installer.sh
* give Linux user option of installing ROCm or CUDA
* rc2.2.5 (install.sh) relative path fixes (#2155)
* (installer) fix bug in resolution of relative paths in Linux install script
point installer at 2.2.5-rc1
Selecting ~/Data/myapps/ as the location would create ./~/Data/myapps
instead of expanding the ~/ to the value of ${HOME}.
Also squash the trailing slash in the path, if the user entered one.
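The fix itself lives in the shell installer, but the same normalization, sketched here in Python with a hypothetical helper name, looks like this:
```python
import os

def normalize_install_path(raw: str) -> str:
    """Hypothetical sketch of the installer's path cleanup: expand a
    leading ~/ to $HOME and strip any trailing slash."""
    path = os.path.expanduser(raw)        # "~/Data/myapps/" -> "/home/user/Data/myapps/"
    path = path.rstrip(os.sep) or os.sep  # squash the trailing slash, keep a bare "/"
    return os.path.abspath(path)          # resolve remaining relative components

print(normalize_install_path("~/Data/myapps/"))  # e.g. /home/user/Data/myapps
```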
* (installer) add option to automatically start the app after install
also: when exiting, print the command to get back into the app
* remove extraneous whitespace
* model_cache applies rootdir to config path
* bring installers up to date with 2.2.5-rc2
* bump rc version
* create_installer now adds version number
* rebuild frontend
* bump rc#
* add locales to frontend dist package
- bump to patchlevel 6
* bump patchlevel
* use invoke-ai version of GFPGAN
- This version is very slightly modified to allow weights files
to be pre-downloaded by the configure script.
* fix formatting error during startup
* bump patch level
* workaround #2 for GFPGAN facexlib() weights downloading
* bump patch
* ready for merge and release
* remove extraneous comment
* set PYTORCH_ENABLE_MPS_FALLBACK directly in invoke.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
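For context, the variable must be set before torch is imported; a minimal sketch of the pattern (not the exact invoke.py code):
```python
import os

# Must be set before torch is imported, otherwise the MPS backend
# initializes without the CPU fallback enabled.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")

import torch  # noqa: E402  (deliberately imported after the variable is set)
```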
- A couple of users have reported that switching back and forth
between ckpt models is causing a "GPU out of memory" crash.
Traceback suggests there is actually a CPU RAM issue.
- This speculative test simply performs a round of garbage collection
before the point where the crash occurs.
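A minimal sketch of that speculative workaround (placement and names are illustrative, not the exact model-cache code):
```python
import gc

import torch

def free_memory_before_model_switch() -> None:
    """Illustrative sketch: the suspected leak is in CPU RAM, so the key
    call is gc.collect(); releasing cached CUDA blocks is a bonus."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```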
* add example of using `--from_file` to read from a script
Addresses #1654, #473, #566, #1008 at least partially.
* fix bug in code example
* improve docs for !fetch and !replay
* enable rendering of images in GH WebUI
also fix indentation in some bullet lists
Co-authored-by: mauwii <Mauwii@outlook.de>
When using the inpainting model, the following sequence of events
would cause a predictable crash:
1. Use unified canvas to outcrop a portion of the image.
2. Accept outcropped image and import into img2img
3. Try any img2img operation
This closes #1596.
The crash was:
```
operands could not be broadcast together with shapes (320,512) (512,576)
Traceback (most recent call last):
File "/data/lstein/InvokeAI/backend/invoke_ai_web_server.py", line 1125, in generate_images
self.generate.prompt2image(
File "/data/lstein/InvokeAI/ldm/generate.py", line 492, in prompt2image
results = generator.generate(
File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 98, in generate
image = make_image(x_T)
File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 138, in make_image
return self.sample_to_image(samples)
File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 173, in sample_to_image
corrected_result = super(Img2Img, self).repaste_and_color_correct(gen_result, self.pil_image, self.pil_mask, self.mask_blur_radius)
File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 148, in repaste_and_color_correct
mask_pixels = init_a_pixels * init_mask_pixels > 0
ValueError: operands could not be broadcast together with shapes (320,512) (512,576)
```
This error was caused by the image and its mask not being of identical
size due to the outcropping operation. The ultimate cause of this
error has something to do with different code paths being followed in
the `inpaint` vs the `omnibus` modules.
Since omnibus will be obsoleted by diffusers, I have chosen just to
work around the problem rather than track it down to its source. The
only ill effect is that color correction will not be applied to the
first image created by `img2img` after applying the outcrop and
immediately importing into the img2img canvas. Since the inpainting
model has less of a color drift problem than the standard model, this
is unlikely to be problematic.
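A sketch of the kind of guard the workaround implies, with hypothetical names; the idea is simply to skip color correction when the image and mask sizes disagree:
```python
def maybe_color_correct(gen_result, pil_image, pil_mask, corrector):
    """Hypothetical guard: skip repaste/color correction when outcropping
    has left the image and its mask with different sizes (the broadcast
    error above)."""
    if pil_image is None or pil_mask is None or pil_image.size != pil_mask.size:
        return gen_result  # sizes diverged; return the uncorrected image
    return corrector(gen_result, pil_image, pil_mask)
```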
* defer patchmatch loading
Because of the way that patchmatch was loaded early at import time, it
was impossible to turn off the attempted loading with --no-patchmatch.
In addition, the patchmatch loading messages appear early during
initialization, interfering with the ability to print the version
cleanly when --version is provided to the invoke script.
This commit creates a thin wrapper class for patch_match that is only
loaded when needed, solving both problems.
* create a singleton patchmatch object for use in inpainting
This creates a thin wrapper around patchmatch which loads the module
on demand, respecting the global "trypatchmatch" option.
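A minimal sketch of the lazy-loading singleton idea (module and method names simplified from the real code):
```python
class PatchMatch:
    """Thin wrapper that defers importing patchmatch until first use, so
    --no-patchmatch can suppress the load (and its messages) entirely."""

    tried_load = False
    patch_match = None

    @classmethod
    def _load_patch_match(cls) -> None:
        if cls.tried_load:
            return
        cls.tried_load = True
        try:
            from patchmatch import patch_match  # the heavy import happens here
            cls.patch_match = patch_match
        except ImportError:
            print(">> patchmatch not loaded; outpainting will fall back to tiling")

    @classmethod
    def patchmatch_available(cls) -> bool:
        cls._load_patch_match()
        return cls.patch_match is not None
```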
* address 2nd round of issues in PR #2039 comments
* Patchmatch->PatchMatch and misc cleanup
* add --version to invoke.py arguments
This commit allows invoke.py to print out its name and version
number when given the --version argument. I had to move some
status messages around in order to make the output clean.
There is still an early message about initializing patchmatch
that interferes with a clean print of the version, and in fact the
--no-patchmatch argument is not doing anything. This will be the
subject of a subsequent PR.
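The standard argparse idiom for this looks roughly as follows (a sketch; APP_ID and APP_VERSION stand in for the values exported in the next commit):
```python
import argparse

APP_ID = "invoke-ai/InvokeAI"  # assumed values; the real ones are
APP_VERSION = "2.2.5"          # exported for the web backend below

parser = argparse.ArgumentParser()
parser.add_argument(
    "--version",
    action="version",  # argparse prints the string and exits immediately
    version=f"{APP_ID} {APP_VERSION}",
)
args = parser.parse_args()
```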
* export APP_ID and APP_VERSION
Needed to support the web backend.
* (config) handle huggingface token more gracefully
* (docs) document HuggingFace token requirement for Concepts
* (cli) deprecate the --(no)-interactive CLI flag
It was previously only used to skip the SD weights download, and therefore
the prompt for the HuggingFace token (the "interactive" part).
Now that we no longer need a HuggingFace token
to download the SD weights at all, we can replace this flag with
"--skip-sd-weights", which clearly describes its purpose.
The `--(no)-interactive` flag still functions the same, but shows a deprecation message.
* (cli) fix emergency_model_reconfigure argument parsing
* (config) fix installation issues on systems with non-UTF8 locale
Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Before making a concept download request to HuggingFace, the concepts
library module now checks the concept name against a downloaded list
of all the concepts currently known to HuggingFace. If the requested
concept is not on the list, then no download request is made.
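Sketched in Python with hypothetical names, the guard looks like this:
```python
KNOWN_CONCEPTS = {"midjourney-style", "moebius"}  # assumed contents of the downloaded list

def request_concept(concept_name: str) -> None:
    """Hypothetical guard around the HuggingFace request: consult the
    local list of known concepts before touching the network."""
    if concept_name not in KNOWN_CONCEPTS:
        print(f'** "{concept_name}" is not a known concept name; no download attempted')
        return
    print(f">> downloading concept {concept_name} ...")  # the real code calls the hub API here
```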
* add an argument that lets user specify folders to scan for weights
This PR adds a `--weight_folders` argument to invoke.py. Using
argparse, it adds a "weight_folders" attribute to the Args object, and
can be used like this:
```
'''test.py'''
from ldm.invoke.args import Args
args = Args().parse_args()
for folder in args.weight_folders:
    print(folder)
```
Example output:
```
python test.py --weight_folders /tmp/weights /home/fred/invokeai/weights "./my folder with spaces/weight files"
/tmp/weights
/home/fred/invokeai/weights
./my folder with spaces/weight files
```
* change --weight_folders to --weight_dirs
* Add Embedding Parsing
* Return token_dim in embedding_info
* fixes to handle other variants
1. Handle the case of a .bin file being mislabeled .pt (seen in the
wild at https://cyberes.github.io/stable-diffusion-textual-inversion-models/)
2. Handle the "broken" .pt files reported by https://github.com/invoke-ai/InvokeAI/issues/1829
3. When token name is not available, use the basename of the pt or bin file rather than the
whole path.
fixes #1829
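A hedged sketch of the variant handling described above (checkpoint keys and layouts are assumptions; real embedding files vary):
```python
import os

import torch

def parse_embedding(file_path: str):
    """Illustrative sketch: tolerate .bin files mislabeled as .pt, and fall
    back to the file's basename when no usable token name is stored."""
    data = torch.load(file_path, map_location="cpu")  # loads .pt and pickled .bin alike
    if "string_to_param" in data:                     # classic textual-inversion .pt layout
        name, tensor = next(iter(data["string_to_param"].items()))
    else:                                             # "bare" .bin layout: {token: tensor}
        name, tensor = next(iter(data.items()))
    if not isinstance(name, str) or len(name) > 64:   # token name missing or unusable
        name = os.path.splitext(os.path.basename(file_path))[0]
    return name, tensor
```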
* remove whitespace
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* Correct timestep for img2img initial noise addition
* apply fix to inpaint and txt2img2img as well
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
- The invoke.py script now checks that the root (runtime) directory contains
the expected config/models.yaml file and, if it doesn't, exits with a helpful
error message about how to set the proper root.
- Formerly the script would fail with a "bad model" message and try to redownload
its models, which is not helpful in the case that the root is missing or
damaged.
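A sketch of the sanity check (message wording illustrative):
```python
import os
import sys

def check_root(root: str) -> None:
    """Illustrative sketch: refuse to start when the runtime directory is
    missing config/models.yaml, instead of re-downloading the models."""
    models_yaml = os.path.join(root, "config", "models.yaml")
    if not os.path.exists(models_yaml):
        print(f"** {root} is not a valid InvokeAI root: {models_yaml} is missing")
        print("** Use --root_dir or set INVOKEAI_ROOT to point at the correct directory")
        sys.exit(1)
```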
* Push dockerfile (#18)
* update build-container.yml
* add login step to build-container.yml
* update job name
* update matrix: add registry and platforms
also set latest only for cuda image
* quote string
* use latest for amd and cuda image
* separate images for cuda and amd
* change latest from auto to true
* configure_invoke -y instead of --interactive
* fix argument to --yes
* update matrix:
- use flavor instead of pip-requirements
- add flavor `cloud`
- add `dockerfile`
* introduce INVOKE_MODEL_RECONFIGURE
* add `--cap-add=sys_nice` to run.sh
* update Dockerfile: install wheel
* only have main branch in action again
* disable push of cloud image for now
since it still has its own workflow, but PoC succeeded
* remove now-untrue comments at top
* install pip, setuptools and wheel in sep. step
* add labels to the image
* remove doubled installation of wheel
This commit fixes the root search logic to be as follows:
1) The `--root_dir` command line argument
2) The contents of environment variable INVOKEAI_ROOT
3) The VIRTUAL_ENV environment variable, plus '..'
4) $HOME/invokeai
(3) is the new feature. Since we now recommend installing
InvokeAI and its dependencies into the .venv in the root directory,
this should be a reliable choice.
- Using relative root addresses was causing problems when the
current working directory was changed after start time.
- This commit makes the root address absolute at start time, such
that changing the working directory later on doesn't break anything.
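Taken together, the search order and the absolute-path fix might look like this sketch (helper name hypothetical):
```python
import os
from typing import Optional

def find_root(root_arg: Optional[str] = None) -> str:
    """Hypothetical sketch of the documented resolution order:
    --root_dir, then INVOKEAI_ROOT, then VIRTUAL_ENV/.., then ~/invokeai."""
    if root_arg:
        root = root_arg                                       # 1) command-line argument
    elif os.environ.get("INVOKEAI_ROOT"):
        root = os.environ["INVOKEAI_ROOT"]                    # 2) environment variable
    elif os.environ.get("VIRTUAL_ENV"):
        root = os.path.join(os.environ["VIRTUAL_ENV"], "..")  # 3) the .venv's parent
    else:
        root = os.path.expanduser("~/invokeai")               # 4) default
    # Make the root absolute at start time so a later chdir() cannot break it.
    return os.path.abspath(root)
```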
* partially working simple installer
* works on linux
* fix linux requirements files
* read root environment variable in right place
* fix cat invokeai.init in test workflows
* fix classical cp error in test-invoke-pip.yml
* respect --root argument now
* untested bat installers added
* windows install.bat now working
fix logic to find frontend files
* rename simple_install to "installer"
1. simple_install => 'installer'
2. source and binary install directories are removed
* enable update scripts to update requirements
- Also pin requirements to known working commits.
- This may be a breaking change; exercise with caution
- No functional testing performed yet!
* update docs and installation requirements
NOTE: This may be a breaking commit! Due to the way the installer
works, I have to push to a public branch in order to do full end-to-end
testing.
- Updated installation docs, removing binary and source installers and
substituting the "simple" unified installer.
- Pin requirements for the "http:" downloads to known working commits.
- Removed as much as possible the invoke-ai forks of others' repos.
* fix directory path for installer
* correct requirement/environment errors
* exclude zip files in .gitignore
* possible fix for dockerbuild
* ready for torture testing
- final Windows bat file tweaks
- copy environments-and-requirements to the runtime directory so that
the `update.sh` script can run.
This is not ideal, since we lose control over the
requirements. It would be better for the update script to pull the
updated requirements file from the repository.
* allow update.sh/update.bat to install arbitrary InvokeAI versions
- Can pass the zip file path to any InvokeAI release, branch, commit or tag,
and the installer will try to install it.
- Updated documentation
- Added Linux Python install hints.
* use binary installer's :err_exit function
* use diffusers 0.10.0
* added logic for CPPFLAGS on Mac
* improve Windows install documentation
- added information on a couple of gotchas I experienced during
Windows installation, including DLL loading errors experienced
when the Visual Studio C++ Redistributable was not present.
* tagged to pull from 2.2.4-rc1
- also fix the error of the shell window closing immediately if a
suitable Python is not found
Co-authored-by: mauwii <Mauwii@outlook.de>
* attention maps saving to /tmp
* tidy up diffusers branch backporting of cross attention refactoring
* base64-encoding the attention maps image for generationResult
* cleanup/refactor conditioning.py
* attention maps and tokens being sent to web UI
* attention maps: restrict count to actual token count and improve robustness
* add argument type hint to image_to_dataURL function
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: damian <git@damianstewart.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
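For reference, a helper along these lines, with the added type hint, might look like the following sketch (details assumed, not the exact backend code):
```python
import base64
from io import BytesIO

from PIL import Image

def image_to_dataURL(image: Image.Image, image_format: str = "PNG") -> str:
    """Encode a PIL image as a base64 data URL suitable for embedding in a
    generationResult payload sent to the web UI."""
    buffer = BytesIO()
    image.save(buffer, format=image_format)
    encoded = base64.b64encode(buffer.getvalue()).decode("ascii")
    return f"data:image/{image_format.lower()};base64,{encoded}"
```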
In the event that no `init_mask` is given and `invert_mask` is set to True, the script will raise the following error:
```bash
AttributeError: 'NoneType' object has no attribute 'mode'
```
The new implementation will only run inversion when both variables are valid.
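A sketch of the guarded inversion, with names assumed from the error context:
```python
from typing import Optional

from PIL import Image, ImageOps

def maybe_invert_mask(init_mask: Optional[Image.Image], invert_mask: bool) -> Optional[Image.Image]:
    """Hypothetical sketch: only invert when a mask actually exists,
    avoiding the AttributeError on NoneType shown above."""
    if invert_mask and init_mask is not None:
        return ImageOps.invert(init_mask.convert("L"))
    return init_mask
```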
Prompt token sequences begin with a "beginning-of-sequence" marker `<bos>` and end with a repeated "end-of-sequence" marker `<eos>`, making a default prompt length of `<bos>` + 75 prompt tokens + `<eos>`. The `.swap()` code was failing to take the column for `<bos>` at index 0 into account. The changes here do that, and also add extra handling for a single `<eos>` (which may be redundant, but is included for completeness).
Based on my understanding, and some assumptions about how this all works, the reason `.swap()` nevertheless seemed to do the right thing, to some extent, is that over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. A change to token n-1 has flow-on effects on how the [1x4x64x64] latent tensor is modified by all the tokens after it, and, as the next step is processed, by all the tokens before it as well. Intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. So even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it nevertheless still had some non-negligible effect, in roughly the right direction, often enough that it seemed like it was working properly.
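In sketch form, the off-by-one being corrected (pure illustration; the real code operates on cross-attention indices):
```python
# <bos> occupies column 0, user tokens occupy columns 1..75, and <eos>
# markers pad the remainder of the 77-column embedding.
def prompt_token_to_column(token_index: int, max_tokens: int = 75) -> int:
    """Hypothetical helper: map the n-th user-visible prompt token to its
    embedding column, accounting for <bos> at index 0."""
    if not 0 <= token_index < max_tokens:
        raise ValueError(f"token index {token_index} out of range")
    return token_index + 1  # the missing +1 shifted edits onto token n-1
```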
Some users have been complaining that the CLI "freezes" for a while
before the invoke> prompt appears. I believe this is due to internet
delay while the concepts library names are downloaded by the autocompleter.
I have changed the logic so that the concepts are downloaded the first time
the user types a `<` and presses Tab.
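Sketch of the deferral, using a hypothetical completer class (the real CLI hooks this into readline):
```python
class Completer:
    """Hypothetical sketch: defer the network fetch of concept names until
    the user actually asks to complete a '<...' trigger."""

    def __init__(self) -> None:
        self._concepts = None  # deliberately not downloaded at startup

    def complete(self, text: str):
        if not text.startswith("<"):
            return []
        if self._concepts is None:  # first '<' + Tab: fetch the list now
            self._concepts = self._download_concept_names()
        partial = text[1:]
        return [c for c in self._concepts if c.startswith(partial)]

    def _download_concept_names(self):
        # Placeholder for the HuggingFace concepts-library fetch.
        return ["midjourney-style", "moebius"]
```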
- make the warnings about patchmatch less redundant
- only warn once about being unable to load concepts from the Hugging Face
library
- do not crash when unable to load concepts from the Hugging Face
library due to network connectivity issues
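The warn-once and no-crash behavior can be sketched like this (illustrative only; URL and names hypothetical):
```python
import functools
import urllib.error
import urllib.request

@functools.lru_cache(maxsize=None)
def _warn_once(message: str) -> None:
    """Emit each distinct warning a single time per process."""
    print(f"** {message}")

def fetch_concept_list(url: str) -> list:
    """Illustrative sketch: degrade gracefully when Hugging Face is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.read().decode("utf-8").splitlines()
    except (urllib.error.URLError, OSError):
        _warn_once("Unable to load concepts from the Hugging Face library; check your network connection")
        return []
```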