- Rename configure_invokeai.py to invokeai_configure.py to be
consistent with the installed script name
- Remove the warning message about half-precision models not being
available during the model download process.
- adjust estimated file size reported by configure
- guesstimate disk space needed for "all" models
- fix up the "latest" tag to be named 'v2.3-latest'
`torch` wasn't seeing the environment variable. I suspect this is because it was imported before the variable was set, so it was running with a different environment.
Many `torch` ops are supported on MPS, so this wasn't noticed immediately, but some samplers like k_dpm_2 still use unsupported operations and need this fallback.
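A minimal sketch of the import-order fix, assuming the variable in question is `PYTORCH_ENABLE_MPS_FALLBACK` (the notes above don't name it): it has to be set before the first `import torch` anywhere in the process.

```python
import os

# Assumption: PYTORCH_ENABLE_MPS_FALLBACK is the variable torch wasn't
# seeing. torch reads it at import time, so setting it after torch has
# already been imported has no effect.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402  -- deliberately imported after the variable is set
```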
* remove unmaintained Dockerfile
* adapt Docker-related files to the latest changes
- also build the frontend when building the image
- skip the user response prompt if INVOKE_MODEL_RECONFIGURE is set
- split INVOKE_MODEL_RECONFIGURE to support more than one argument (see the sketch after this list)
* rename `docker-build` dir to `docker`
* update build-container.yml
- rename image to invokeai
- add cpu flavor
- add metadata to build summary
- enable caching
- remove build-cloud-img.yml
* fix yarn cache path, link copyjob
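A hypothetical Python sketch of the INVOKE_MODEL_RECONFIGURE handling described above; the real change may live in a shell entrypoint, and the names here are illustrative:

```python
import os
import shlex

# If INVOKE_MODEL_RECONFIGURE is set, skip the interactive prompt and
# split its value into individual arguments for the configure script.
value = os.environ.get("INVOKE_MODEL_RECONFIGURE", "")
if value:
    reconfigure_args = shlex.split(value)  # supports more than one argument
else:
    reconfigure_args = []  # fall back to prompting the user
```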
Crashes would occur in the invokeai-configure script if no HF token
was found in the cache and the user declined to provide one when prompted.
The reason appears to be that on Linux systems getpass_asterisk()
raises an EOFError when no input is provided.
On Windows 10, getpass_asterisk() does not raise the EOFError, but
returns an empty string instead. This patch detects the empty string and raises
the exception itself so that the control logic is preserved.
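A sketch of the normalization, assuming the usual import path for the getpass_asterisk package:

```python
from getpass_asterisk.getpass_asterisk import getpass_asterisk  # assumed import path

def prompt_for_hf_token(prompt: str = "HuggingFace token: ") -> str:
    # Linux: getpass_asterisk() raises EOFError when no input is provided.
    # Windows 10: it returns an empty string instead. Raise EOFError in
    # that case too, so callers can handle both platforms identically.
    token = getpass_asterisk(prompt)
    if not token:
        raise EOFError
    return token
```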
- Added modest adaptive behavior; if the screen is wide enough, the three
checklists of models are arranged in a horizontal row (a width check of the kind involved is sketched below).
- Added color support
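A minimal sketch of the width check this implies; the threshold and helper name are assumptions, not the actual code:

```python
import shutil

MIN_COLS_FOR_ROW_LAYOUT = 120  # assumed threshold, not from the original

def use_horizontal_layout() -> bool:
    # Arrange the three model checklists side by side only when the
    # terminal is wide enough to fit them in one row.
    return shutil.get_terminal_size().columns >= MIN_COLS_FOR_ROW_LAYOUT
```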
* refactor ckpt_to_diffuser to allow the converted pipeline to remain in memory (see the sketch after this list)
- This idea was introduced by Damian
- Note that although I attempted to use the updated HuggingFace module
pipelines/stable_diffusion/convert_from_ckpt.py, it was unable to
convert safetensors files for reasons I didn't dig into.
- Default is to extract EMA weights.
* add --ckpt_convert option to load legacy ckpt files as diffusers models
- not quite working - I'm getting artifacts and glitches in the
converted diffusers models
- leave as a draft for the time being
* do not include the safety checker in converted files
* add the ability to control which VAE is used
The API now allows the caller to pass an external VAE model to the
checkpoint conversion process. In this way, if an external VAE is
specified in the checkpoint's config stanza, that VAE will be used
when constructing the diffusers model (a combined sketch of the
in-memory conversion and the external VAE follows below).
Tested with both regular and inpainting 1.X models.
Not tested with SD 2.X models!
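A sketch of the pattern these notes describe, not InvokeAI's exact code, using the HuggingFace converter mentioned above (which, per the note, had trouble with safetensors files at the time). The model paths are placeholders:

```python
from diffusers import AutoencoderKL
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
    download_from_original_stable_diffusion_ckpt,
)

# Convert the legacy checkpoint in memory; EMA weights are extracted,
# matching the default described above.
pipe = download_from_original_stable_diffusion_ckpt(
    checkpoint_path="v1-5-pruned.ckpt",  # placeholder path
    extract_ema=True,
)

# Swap in an external VAE, as when one is named in the checkpoint's
# config stanza ("stabilityai/sd-vae-ft-mse" is just an example).
pipe.vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

pipe.safety_checker = None  # converted files omit the safety checker

# Saving to disk is optional; the pipeline also remains usable in memory.
pipe.save_pretrained("models/v1-5-diffusers")
```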
---------
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
1. only load triton on Linux machines (see the sketch after this list)
2. require pip >= 23.0 so that editable installs can run without setup.py
3. model files default to SD-1.5, not 2.1
4. use the diffusers version of the inpainting model rather than the ckpt
5. select a new set of initial models based on the number of likes on HuggingFace
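Whether the triton guard is done with a packaging environment marker or at runtime, the effect is the platform check sketched here (a runtime rendering of the idea, not necessarily how the change was actually made):

```python
import sys

# Only load triton on Linux machines; wheels are not available elsewhere.
if sys.platform == "linux":
    import triton  # noqa: F401
```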
- implement the following pattern for finding data files under both
regular and editable install conditions (a concrete instance is sketched below):
import invokeai.foo.bar as bar
path = bar.__path__[0]
- this *seems* to work reliably with Python 3.9. Testing on 3.10 still needs
to be performed.
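A concrete instance of the pattern above; the `invokeai.configs` module name is assumed for illustration:

```python
import invokeai.configs as configs  # assumed subpackage, following the pattern above

# __path__[0] points at the package directory in both a regular install
# and a `pip install -e .` editable install.
config_dir = configs.__path__[0]
```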
- fixes a spurious "unknown model name" error when trying to edit the
short name of an existing model.
- relaxes naming requirements to allow the ':' and '/' characters
in model names (a hypothetical version of the relaxed check is sketched below)
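A hypothetical rendering of the relaxed name check; the exact character set beyond ':' and '/' is an assumption:

```python
import re

# Word characters plus common separators, now also ':' and '/'
# (e.g. "runwayml/stable-diffusion-v1-5" or "sd-1.5:pruned").
MODEL_NAME_RE = re.compile(r"[\w.+\- :/]+")

def is_valid_model_name(name: str) -> bool:
    return bool(MODEL_NAME_RE.fullmatch(name))
```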
1) Downgrade numpy to avoid a dependency conflict with numba
2) Move all non-ldm/invoke files into `invokeai`. This includes assets, backend, frontend, and configs.
3) Fix up the way that the backend finds the frontend and the way the generator finds the NSFW caution.png icon.