This is the same as PR #1537 except that it removes a redundant
`scripts` argument from `setup.py` that appeared at some point.
I also had to unpin the github dependencies in `requirements.in` in
order to get conda CI tests to pass. However, dependencies are still
pinned in `requirements-base.txt` and the environment files, and install
itself is working. So I think we are good.
- When invokeai is installed with `pip install .`, the frontend
will be in the venv directory under invokeai.
- When invokeai is installed with `pip install -e .`, the frontend
will be in the source repo.
- invoke_ai_web_server.py will look in both places using relative
addressing.
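A minimal sketch of that dual lookup, assuming a helper like the one below (function and directory names are hypothetical; the real server uses its own paths):

```python
from pathlib import Path
from typing import Optional

def find_frontend_dir(module_file) -> Optional[Path]:
    """Return the first existing frontend directory, searching relative
    to the web server module: first the editable-install (source repo)
    layout, then the packaged (venv) layout."""
    here = Path(module_file).resolve().parent
    candidates = [
        here / ".." / "frontend" / "dist",   # `pip install -e .`: source repo
        here / "frontend" / "dist",          # `pip install .`: inside the package
    ]
    for candidate in candidates:
        if candidate.resolve().exists():
            return candidate.resolve()
    return None
```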
1. removed redundant `data_files` argument from setup.py
2. upped requirement to Python >= 3.9. This is due to a feature
used in `argparse` that is only available in 3.9 or higher.
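The 3.9-only `argparse` feature is not named here; one likely candidate is `argparse.BooleanOptionalAction`, which was added in Python 3.9 (the flag name below is illustrative, not from the PR):

```python
import argparse

# BooleanOptionalAction (new in Python 3.9) generates paired
# --web / --no-web flags from a single declaration.
parser = argparse.ArgumentParser()
parser.add_argument("--web", action=argparse.BooleanOptionalAction, default=False)

args = parser.parse_args(["--no-web"])  # args.web is False
```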
* add test-invoke-pip.yml
* update requirements-base.txt to fix tests
* install requirements-base.txt separately
since it requires torch to already be installed;
also restore the original requirements-base.txt after a successful test in my fork
* restore original requirements
add `basicsr>=1.4.2` to requirements-base.txt
remove second installation step
* re-add previously overlooked requirement in lin-cuda
* fix typo in setup.py - `scripts/preload_models.py`
* use GFPGAN from branch `basicsr-1.4.2`
* remove `basicsr>=1.4.2` from base reqs
* add INVOKEAI_ROOT to env
* disable upgrade of `pip`, `setuptools` and `wheel`
* try to use a venv which should not contain `wheel`
* add relative path to pip command
* use `configure_invokeai.py --no-interactive --yes`
* set grpcio to `<1.51.0`
* revert changes to use venv
* remove `--prefer-binary`
* disable step to create models.yaml
since this will not be used anymore with new `configure_invokeai.py`
* use `pip install --no-binary=":all:"`
* another try to use venv
* try uninstalling wheel before installing reqs
* don't use requirements.txt as filename
* update cache-dependency-path
* add facexlib to requirements-base.txt
* first install requirements-base.txt
* first install `-e .`, then install requirements
I know that this is obviously the wrong order, but I still have a feeling
* add facexlib to requirements.in
* remove `-e .` from reqs and install after reqs
* unpin torch and torchvision in requirements.in
* fix model dl path
* fix curl output path
* create directory before downloading model
* set INVOKEAI_ROOT_PATH
https://docs.github.com/en/actions/learn-github-actions/environment-variables#naming-conventions-for-environment-variables
* INVOKEAI_ROOT ${{ env.GITHUB_WORKSPACE }}/invokeai
* fix matrix stable-diffusion-model-dl-path
* fix INVOKEAI_ROOT
* fix INVOKEAI_ROOT
* add --root and --outdir to run-tests step
* create models.yaml from example
* fix scripts variable in setup.py
by removing unused scripts
* fix archive-results path
* fix workflow to reflect latest code changes
* fix copy paste error
* fix job name
* fix matrix.stable-diffusion-model
* restructure matrix
* fix `activate conda env` step
* update the environment yamls
use same 4 git packages as for pip
* rename job in test-invoke-conda
* add tqdm to environment-lin-amd.yml
* fix python commands in test-invoke-conda.yml
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
This corrects behavior of --no-interactive, which was in fact
asking for interaction!
New behavior:
If you pass --no-interactive it will behave exactly as it did before
and completely skip the downloading of SD models.
If you pass --yes it will do almost the same, but download the
recommended models. The combination of the two arguments is the same
as --no-interactive.
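The corrected semantics can be sketched as a small decision function (an illustrative reconstruction of the behavior described above, not the script's actual code):

```python
def plan_model_download(no_interactive: bool, yes: bool) -> str:
    """Map the two flags to a download plan, per the corrected behavior."""
    if no_interactive:
        return "skip"          # skip SD model downloads entirely; also wins over --yes
    if yes:
        return "recommended"   # no prompts, but fetch the recommended models
    return "prompt"            # default: ask the user interactively
```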
- If initial model fails to load, invoke.py will inform the user that
something is wrong with models.yaml or the models themselves and
drop user into configure_invokeai.py to repair the problem.
- The model caching system will no longer try to reload the current model
  if there is none.
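The fallback can be sketched like this (names are hypothetical; the real invoke.py wires this into its model cache and launches `configure_invokeai.py`):

```python
import sys

def load_initial_model(load_model, model_name, launch_repair):
    """Try to load the initial model; on failure, tell the user what is
    likely wrong and drop into the repair tool."""
    try:
        return load_model(model_name)
    except Exception as err:
        print(f"** Could not load model '{model_name}': {err}", file=sys.stderr)
        print("** models.yaml or the model files may be damaged;", file=sys.stderr)
        print("** launching the configuration script to repair them.", file=sys.stderr)
        launch_repair()  # e.g. run scripts/configure_invokeai.py
        return None
```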
- fixes broken setup.py in current dev
- it is just an alias for configure_invokeai.py
- preload_models.py will be deprecated, but for now
it is a second alias
- install scripts:
- allow enabling the pip cache (pass 'use-cache' as an arg to the install script)
- debug message showing which sourceball we're downloading
- add 'wheel' to pip update, so we can speed up installs from source (and quiet deprecations)
- install.sh: use absolute path for micromamba
- setup.py:
- fill 'install_requires' using 'requirements.in'
- fix 'load_models' script name
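Filling `install_requires` from a requirements file typically looks like the sketch below (the actual parsing in this setup.py may differ):

```python
from pathlib import Path

def parse_requirements(path="requirements.in"):
    """Collect requirement specifiers, skipping comments, blank lines,
    and pip-only directives such as '-r' includes or '-e' installs."""
    requirements = []
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or line.startswith("-"):
            continue
        requirements.append(line)
    return requirements

# In setup.py:
#   setup(..., install_requires=parse_requirements("requirements.in"))
```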
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
remove duplicate import: os
ldm.util.ask_user is imported only once now
introduce the textwrap and contextlib packages to clean up the code
a trailing bare `return` returns None implicitly, so it is omitted
a function returns None by default, so an explicit `return None` is omitted
dict.get returns None by default if the key is not found, so the explicit default is omitted
if a function only ever returns True, the bool return value carries no information, so it should not return anything in the first place
added some indentation and line breaks to further improve readability
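A before/after sketch of these idioms (hypothetical function, not code from this change):

```python
# Before: redundant explicit default and explicit None returns.
def lookup_verbose(config, key):
    value = config.get(key, None)   # None is already dict.get's default
    if value is None:
        return None                 # functions return None implicitly
    return value

# After: the same behavior, written idiomatically.
def lookup(config, key):
    return config.get(key)
```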
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>