This is the same as PR #1537 except that it removes a redundant
`scripts` argument from `setup.py` that appeared at some point.
I also had to unpin the GitHub dependencies in `requirements.in` in
order to get the conda CI tests to pass. However, dependencies are still
pinned in `requirements-base.txt` and the environment files, and the
install itself works, so I think we are good.
1. removed redundant `data_files` argument from setup.py
2. upped requirement to Python >= 3.9. This is due to a feature
used in `argparse` that is only available in 3.9 or higher.
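The commit does not name the `argparse` feature in question; one plausible candidate (an assumption, not confirmed by the source) is `argparse.BooleanOptionalAction`, which was added in Python 3.9 and generates paired `--flag` / `--no-flag` options from a single definition:

```python
import argparse

# argparse.BooleanOptionalAction (new in Python 3.9) creates both
# --xformers and --no-xformers from one add_argument() call.
parser = argparse.ArgumentParser()
parser.add_argument("--xformers", action=argparse.BooleanOptionalAction,
                    default=True)

print(parser.parse_args(["--no-xformers"]).xformers)  # False
print(parser.parse_args([]).xformers)                 # True
```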
* add test-invoke-pip.yml
* update requirements-base.txt to fix tests
* install requirements-base.txt separately,
since it requires torch to already be installed;
also restore the original requirements-base.txt after a successful test in my fork
* restore original requirements
add `basicsr>=1.4.2` to requirements-base.txt
remove second installation step
* re-add previously overlooked requirement in lin-cuda
* fix typo in setup.py - `scripts/preload_models.py`
* use GFPGAN from branch `basicsr-1.4.2`
* remove `basicsr>=1.4.2` from base reqs
* add INVOKEAI_ROOT to env
* disable upgrade of `pip`, `setuptools` and `wheel`
* try to use a venv which should not contain `wheel`
* add relative path to pip command
* use `configure_invokeai.py --no-interactive --yes`
* set grpcio to `<1.51.0`
* revert changes to use venv
* remove `--prefer-binary`
* disable step to create models.yaml
since this will not be used anymore with new `configure_invokeai.py`
* use `pip install --no-binary=":all:"`
* another try to use venv
* try uninstalling wheel before installing reqs
* don't use requirements.txt as filename
* update cache-dependency-path
* add facexlib to requirements-base.txt
* first install requirements-base.txt
* first install `-e .`, then install requirements
I know that this is obviously the wrong order, but I still have a feeling it might work
* add facexlib to requirements.in
* remove `-e .` from reqs and install after reqs
* unpin torch and torchvision in requirements.in
* fix model dl path
* fix curl output path
* create directory before downloading model
* set INVOKEAI_ROOT_PATH
https://docs.github.com/en/actions/learn-github-actions/environment-variables#naming-conventions-for-environment-variables
* set INVOKEAI_ROOT to ${{ env.GITHUB_WORKSPACE }}/invokeai
* fix matrix stable-diffusion-model-dl-path
* fix INVOKEAI_ROOT
* fix INVOKEAI_ROOT
* add --root and --outdir to run-tests step
* create models.yaml from example
* fix scripts variable in setup.py
by removing unused scripts
* fix archive-results path
* fix workflow to reflect latest code changes
* fix copy paste error
* fix job name
* fix matrix.stable-diffusion-model
* restructure matrix
* fix `activate conda env` step
* update the environment yamls
use the same 4 git packages as for pip
* rename job in test-invoke-conda
* add tqdm to environment-lin-amd.yml
* fix python commands in test-invoke-conda.yml
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
This corrects behavior of --no-interactive, which was in fact
asking for interaction!
New behavior:
If you pass --no-interactive it will behave exactly as it did before
and completely skip the downloading of SD models.
If you pass --yes it will do almost the same, but download the
recommended models. The combination of the two arguments is the same
as --no-interactive.
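The corrected semantics can be sketched as follows; this is an illustrative reconstruction, not the actual code of `configure_invokeai.py` (the function name and return values are hypothetical):

```python
import argparse

def select_models(argv):
    """Sketch of the corrected --no-interactive / --yes behavior:
    --no-interactive skips SD model downloads entirely (and wins over
    --yes), while --yes alone downloads the recommended models without
    prompting."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--no-interactive", dest="interactive",
                        action="store_false", default=True)
    parser.add_argument("--yes", action="store_true")
    opts = parser.parse_args(argv)

    if not opts.interactive:
        return []               # skip the downloading of SD models
    if opts.yes:
        return ["recommended"]  # download recommended models, no prompts
    return None                 # fall through to interactive selection
```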
- If initial model fails to load, invoke.py will inform the user that
something is wrong with models.yaml or the models themselves and
drop user into configure_invokeai.py to repair the problem.
- The model caching system will no longer try to reload the current model
  if there is none.
- fixes broken setup.py in current dev
- it is just an alias for configure_invokeai.py
- preload_models.py will be deprecated, but for now
it is a second alias
- install scripts:
  - allow enabling the pip cache (use 'use-cache' as an arg to the install script)
- debug message showing which sourceball we're downloading
- add 'wheel' to pip update, so we can speed up installs from source (and quiet deprecations)
- install.sh: use absolute path for micromamba
- setup.py:
- fill 'install_requires' using 'requirements.in'
- fix 'load_models' script name
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
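Filling `install_requires` from `requirements.in` can be sketched like this; the parsing details are assumed, the real `setup.py` may differ:

```python
from pathlib import Path

def load_requirements(path="requirements.in"):
    """Parse a pip requirements file into an install_requires list.

    Comments, blank lines, and pip-specific directives (lines starting
    with '-', such as '-e .' or '-r other.txt') are dropped.
    """
    specs = []
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # strip inline comments
        if line and not line.startswith("-"):
            specs.append(line)
    return specs

# In setup.py (sketch):
#   setup(name="InvokeAI", ..., install_requires=load_requirements())
```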
remove duplicate import: os
`ldm.util.ask_user` is now imported only once
introduce the textwrap and contextlib packages to clean up the code
a bare `return` returns None implicitly, so it is omitted
a function returns None by default, so an explicit `return None` is omitted
`dict.get` returns None by default if the key is not found, so the explicit default is omitted
True is a bool, and if the function only ever returns True then it should not return anything in the first place
added some indentation and line breaks to further improve readability
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
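A generic illustration (not the project's actual code) of the return-value cleanups listed above:

```python
def lookup_before(d, key):
    # Old style: explicit branches that both end in None anyway.
    if key in d:
        return d[key]
    else:
        return None

def lookup_after(d, key):
    # dict.get already returns None for a missing key, and a function
    # with no return statement returns None implicitly, so both the
    # branch and the explicit None can be omitted.
    return d.get(key)

print(lookup_before({"a": 1}, "a"), lookup_after({"a": 1}, "a"))  # 1 1
```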
The step in which the new models.yaml file replaces the old one was
crashing on Windows because os.rename() on Windows
refuses to replace an existing file, unlike its behavior
on Linux and Mac. The os.replace() function, introduced in
Python 3.3, fixes this.
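A minimal demonstration of the difference (file names are illustrative):

```python
import os
import tempfile

# os.replace() (Python 3.3+) atomically overwrites an existing
# destination on every platform, whereas os.rename() raises
# FileExistsError on Windows when the target already exists.
tmpdir = tempfile.mkdtemp()
old = os.path.join(tmpdir, "models.yaml.new")
cur = os.path.join(tmpdir, "models.yaml")
for path in (old, cur):
    with open(path, "w") as f:
        f.write(path)  # write the path so contents are distinguishable

os.replace(old, cur)  # succeeds even though models.yaml already exists
```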
- dangling debug messages in several files, introduced during
testing of the external root directory
- these need to be removed before they are interpreted as errors by users
'requirements.in':
- add picklescan
- finally find a good compromise for torch (==1.12.0) and
torchvision (==0.13.0) across all platforms
'invoke.sh': hotfix for macOS - add `export PYTORCH_ENABLE_MPS_FALLBACK=1`
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
- Loader is renamed `configure_invokeai.py`, but `preload_models.py` is retained
(as a shell) for backward compatibility
- At startup, if no runtime root directory exists and no `.invokeai` startup file is
present, user will be prompted to select the runtime and outputs directories.
- Also expanded the number of initial models offered to the user to include the
most "liked" ones from HuggingFace, including the two trinart models, the
PaperCut model, and the VoxelArt model.
- Created a configuration file for initial models to be offered to the user, at
configs/INITIAL_MODELS.yaml