The previous algorithm errored if the image dimensions weren't divisible by the tile size. I've reimplemented it from scratch to resolve the issue.
The new algorithm is simpler: we create a pool of tiles, then use them to build an image composed entirely of tiles. If there is any awkwardly sized space at the edge of the image, the tiles are cropped to fit.
Finally, we paste the original image over the tiled image.
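A minimal sketch of this approach using PIL; the function name, default tile size, and the assumption of an RGBA input (where transparent regions mark the area to infill) are mine, not the actual implementation:

```python
import random
from PIL import Image

def tile_fill(image: Image.Image, tile_size: int = 32) -> Image.Image:
    # Hypothetical sketch; assumes an RGBA input whose transparent
    # regions are the areas to infill.
    width, height = image.size

    # Build a pool of tiles from regions the image fully covers.
    # (A real implementation would likely filter out mostly-transparent tiles.)
    tiles = [
        image.crop((x, y, x + tile_size, y + tile_size))
        for y in range(0, height - tile_size + 1, tile_size)
        for x in range(0, width - tile_size + 1, tile_size)
    ]

    # Compose a canvas entirely from randomly chosen tiles. Tiles that
    # overhang the right or bottom edge are cropped to fit, so the image
    # dimensions need not be divisible by the tile size.
    canvas = Image.new("RGBA", (width, height))
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tile = random.choice(tiles)
            w = min(tile_size, width - x)
            h = min(tile_size, height - y)
            canvas.paste(tile.crop((0, 0, w, h)), (x, y))

    # Finally, paste the original over the tiled canvas; the alpha channel
    # masks the paste so only transparent areas keep the tiles.
    canvas.paste(image, (0, 0), image)
    return canvas
```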
I've added a Jupyter notebook to smoke-test the infill methods, along with 10 test images.
The other infill algorithms can easily be run against the same images in the notebook, though I haven't set that up yet.
Tested and confirmed this gives results just as good as the earlier infill, though of course they aren't identical due to the change in algorithm.
We have had a few bugs with v4 related to file encodings, especially on Windows.
Windows uses its own character encodings instead of `utf-8`, often `cp1252`. Some characters in these encodings cannot be decoded as `utf-8`, causing a `UnicodeDecodeError`.
There are a couple places where this can cause problems:
- In the installer bootstrap, we use `subprocess` to install or upgrade `pip` and decode the output.
The command's output includes the user's home dir. In #6105, the user had one of the problematic characters in their username. `subprocess` attempts and fails to decode the username, which crashes the installer.
To fix this, we need to use `locale.getpreferredencoding()` when executing the command (see the sketch after this list).
- Similarly, in the model install service and config class, we attempt to load a yaml config file. If a problematic character is in the path to the file (which often includes the user's home dir), we can get the same error.
One example is #6129, in which the models.yaml migration fails.
To fix this, we need to open the file with `locale.getpreferredencoding()`.
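A minimal sketch of the pattern for both fixes; the command and the `models.yaml` path are stand-ins, and the point is simply to pass the system's preferred encoding explicitly rather than assuming `utf-8`:

```python
import locale
import subprocess

# The system's preferred encoding, e.g. cp1252 on many Windows installs.
encoding = locale.getpreferredencoding()

# Decode subprocess output with the preferred encoding instead of utf-8,
# so usernames with non-utf-8 characters don't crash the decode step.
result = subprocess.run(
    ["python", "-m", "pip", "install", "--upgrade", "pip"],
    capture_output=True,
    text=True,
    encoding=encoding,
)
print(result.stdout)

# Likewise, open config files with the preferred encoding ("models.yaml"
# here is a stand-in for the actual config path).
with open("models.yaml", "r", encoding=locale.getpreferredencoding()) as f:
    config_text = f.read()
```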
- Remove `CUDA_AND_DML`. This was for ONNX, which we have since removed.
- Remove `AUTODETECT`. This option caused problems for Windows users: it falls back on the default PyPI index, resulting in a non-CUDA torch being installed.
- Add more explicit settings for the extra index URL, based on the torch website.
- Fix a bug where `xformers` wasn't installed on Linux and/or Windows when autodetect was selected.
This will be fairly common in v4 updates. The root cause is models not being added to the `models.yaml` file in v3, so we don't correctly migrate those models to the DB.
The docs describe how to use `Scan Folder` to restore missing models.
Compare the installed paths to determine whether the model is already installed. Fixes an issue where installed models showed up as uninstalled, or vice versa, related to relative vs. absolute path handling.
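A minimal sketch of the comparison, with hypothetical names (`is_installed`, `models_dir`); resolving both sides to absolute paths sidesteps the relative-vs-absolute mismatch:

```python
from pathlib import Path

def is_installed(record_path: str, candidate_path: str, models_dir: Path) -> bool:
    # Record paths may be stored relative to the models directory, while
    # the candidate path may be absolute; resolve both before comparing.
    installed = (models_dir / record_path).resolve()
    candidate = Path(candidate_path).resolve()
    return installed == candidate
```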
Renaming the model file to the model name introduces unnecessary constraints on model names.
For example, a model name can technically be any length, but a model _filename_ cannot be too long.
There are also constraints on valid characters for filenames that shouldn't be applied to model record names.
I believe the old behaviour is a holdover from the old system.
## Summary
This PR adds support for IP Adapter safetensor files for direct usage
inside InvokeAI.
## Test
You can download the [Composition
Adapters](https://huggingface.co/ostris/ip-composition-adapter) which
weren't previously supported in Invoke and try them out. Every other IP
Adapter model should work too.
If you pick a Safetensor IP Adapter model, you will also need to set
ViT-H or ViT-G next to it. This is a raw implementation; I can refine it
further based on feedback.
Prompt: `Spiderman holding a bunny` -- the exact same composition as the
adapter image.
![opera_UHlo1IyXPT](https://github.com/invoke-ai/InvokeAI/assets/54517381/00bf9f0b-149f-478d-87ca-3252b68d1054)
Setting this to 'auto' works only for InvokeAI-format configs: it auto-detects the SD model, but an explicit user setting will override the detection. If 'auto' is used with a checkpoint model, we raise an error; checkpoints always need to be set to a non-auto value.
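A minimal sketch of that resolution logic; the function and parameter names are hypothetical, not the actual implementation:

```python
def resolve_clip_vision(model_format: str, user_setting: str, detected: str) -> str:
    # An explicit user setting always overrides auto-detection.
    if user_setting != "auto":
        return user_setting
    # 'auto' cannot be resolved for checkpoint models.
    if model_format == "checkpoint":
        raise ValueError("Checkpoint IP Adapters must explicitly set ViT-H or ViT-G")
    # For InvokeAI-format configs, use the auto-detected SD model variant.
    return detected
```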