Commit Graph

2292 Commits

Author SHA1 Message Date
Lincoln Stein
fdf41cc739
Installer final tweaks (#1550)
This is the same as PR #1537 except that it removes a redundant
`scripts` argument from `setup.py` that appeared at some point.

I also had to unpin the GitHub dependencies in `requirements.in` in
order to get the conda CI tests to pass. However, dependencies are still
pinned in `requirements-base.txt` and the environment files, and the
install itself is working. So I think we are good.
2022-11-25 11:24:30 -05:00
Lincoln Stein
e98068a546 unpinned clip, clipseg, gfpgan and k-diffusion
- conflicts with their counterparts in the environment files were
  causing the CI conda-based tests to fail.
- installer seems to work still
2022-11-25 04:49:41 +00:00
Lincoln Stein
b945ae4e01 two more fixups
1. removed redundant `data_files` argument from setup.py
2. raised the requirement to Python >= 3.9. This is due to a feature
   used in `argparse` that is only available in 3.9 or higher.
2022-11-25 03:50:52 +00:00
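The commit above does not name the `argparse` feature that needs 3.9; as a hedged illustration only, `argparse.BooleanOptionalAction` is one addition that exists solely in Python 3.9+ (the flag name below is hypothetical):

```python
import argparse

# argparse.BooleanOptionalAction was added in Python 3.9; on 3.8 this line
# raises AttributeError. Shown only as an example of a 3.9-only feature --
# the commit does not say which feature it actually relies on.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--safety-checker",                      # hypothetical flag for illustration
    action=argparse.BooleanOptionalAction,   # auto-generates --no-safety-checker
    default=True,
)
print(parser.parse_args(["--no-safety-checker"]))  # Namespace(safety_checker=False)
```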
Lincoln Stein
b23c471cf0
make mauwii code owner for docker build (#1549) 2022-11-25 04:46:10 +01:00
Lincoln Stein
964e584bd3 remove redundant scripts arg from setup.py 2022-11-25 03:03:24 +00:00
Lincoln Stein
461358bdde
Merge branch 'development' into feat-install-unify-setup-requirements-pip 2022-11-24 21:55:39 -05:00
Matthias Wild
2433cc344a
add test-invoke-pip.yml (#1521)
* add test-invoke-pip.yml

* update requirements-base.txt to fix tests

* install requirements-base.txt separate
since it requires torch to already be installed
also restore the original requirements-base.txt after a successful test in my fork

* restore original requirements
add `basicsr>=1.4.2` to requirements-base.txt
remove second installation step

* re-add previously overlooked req in lin-cuda

* fix typo in setup.py - `scripts/preload_models.py`

* use GFPGAN from branch `basicsr-1.4.2`

* remove `basicsr>=1.4.2` from base reqs

* add INVOKEAI_ROOT to env

* disable upgrade of `pip`, `setuptools` and `wheel`

* try to use a venv which should not contain `wheel`

* add relative path to pip command

* use `configure_invokeai.py --no-interactive --yes`

* set grpcio to `<1.51.0`

* revert changes to use venv

* remove `--prefer-binary`

* disable step to create models.yaml
since this will no longer be used with the new `configure_invokeai.py`

* use `pip install --no-binary=":all:"`

* another try to use venv

* try uninstalling wheel before installing reqs

* don't use requirements.txt as filename

* update cache-dependency-path

* add facexlib to requirements-base.txt

* first install requirements-base.txt

* first install `-e .`, then install requirements
I know that this is obviously the wrong order, but I still had a hunch it was worth trying

* add facexlib to requirements.in

* remove `-e .` from reqs and install after reqs

* unpin torch and torchvision in requirements.in

* fix model dl path

* fix curl output path

* create directory before downloading model

* set INVOKEAI_ROOT_PATH
https://docs.github.com/en/actions/learn-github-actions/environment-variables#naming-conventions-for-environment-variables

* INVOKEAI_ROOT ${{ env.GITHUB_WORKSPACE }}/invokeai

* fix matrix stable-diffusion-model-dl-path

* fix INVOKEAI_ROOT

* fix INVOKEAI_ROOT

* add --root and --outdir to run-tests step

* create models.yaml from example

* fix scripts variable in setup.py
by removing unused scripts

* fix archive-results path

* fix workflow to reflect latest code changes

* fix copy paste error

* fix job name

* fix matrix.stable-diffusion-model

* restructure matrix

* fix `activate conda env` step

* update the environment yamls
use same 4 git packages as for pip

* rename job in test-invoke-conda

* add tqdm to environment-lin-amd.yml

* fix python commands in test-invoke-conda.yml

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-11-25 01:24:24 +01:00
Lincoln Stein
16df759499 fix non-interactive behavior of configure_invokeai.py
This corrects behavior of --no-interactive, which was in fact
asking for interaction!

New behavior:

If you pass --no-interactive it will behave exactly as it did before
and completely skip the downloading of SD models.

If you pass --yes it will do almost the same, but download the
recommended models. The combination of the two arguments is the same
as --no-interactive.
2022-11-23 23:07:47 -05:00
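A minimal sketch of the flag logic described above, assuming nothing beyond the two flags named in the commit (this is not the actual configure_invokeai.py code):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--no-interactive", dest="interactive", action="store_false")
parser.add_argument("--yes", "-y", action="store_true")
opt = parser.parse_args()

if not opt.interactive:
    # --no-interactive wins, even when combined with --yes:
    # behave exactly as before and skip downloading SD models entirely.
    print("skipping model downloads")
elif opt.yes:
    # --yes alone: no prompting, but download the recommended models.
    print("downloading recommended models without prompting")
else:
    print("running interactive model selection")
```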
Ben Alkov
5a1a36ec29 feat(install): unify setup.py, requirements.in, pip
This allows populating setup.py's 'install_requires' directly from 'requirements.in'

- setup.py:
  - read 'requirements.in' instead of 'requirements.txt'
  - add correct upstream pytorch repo to "dependency_links"
- requirements.in:
  - append "name @" to git packages
  - fix torch repo URL -> 'download.pytorch.org/whl/torch_stable.html'

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-23 13:34:42 -05:00
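A rough sketch of the approach, assuming a simplified setup.py rather than the exact file from this PR: read `requirements.in`, skip comments and pip options, and feed the rest to `install_requires`. Git packages must already be written in the `name @ git+https://...` form for setuptools to accept them.

```python
# setup.py (simplified sketch, not the exact file from this PR)
from pathlib import Path
from setuptools import find_packages, setup

def load_requirements(path="requirements.in"):
    """Return requirement specifiers, skipping comments, blank lines and pip options."""
    lines = Path(path).read_text().splitlines()
    return [
        line.strip()
        for line in lines
        if line.strip() and not line.startswith(("#", "-"))
    ]

setup(
    name="InvokeAI",
    python_requires=">=3.9",
    packages=find_packages(),
    # Git dependencies must appear as "name @ git+https://..." in
    # requirements.in so setuptools can resolve them here.
    install_requires=load_requirements(),
)
```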
Lincoln Stein
71c4f401b0 Merge branch 'fix-model-load-error-handling' of github.com:/invoke-ai/InvokeAI into fix-model-load-error-handling 2022-11-22 19:15:06 +00:00
Lincoln Stein
c59b9897d9 remove file that shouldn't have been in PR 2022-11-22 19:14:52 +00:00
Lincoln Stein
4cf1c856ed
Merge branch 'development' into fix-model-load-error-handling 2022-11-22 14:11:26 -05:00
David Burnett
a78a1020be grpcio 1.51.0 is broken on M1 Macs. Limit it to the last good version until fixed 2022-11-22 12:05:45 -05:00
Lincoln Stein
90cb7a6442 fix behavior when models.yaml missing entirely 2022-11-22 16:56:38 +00:00
Lincoln Stein
8f5cded86e fix regression in ldm.invoke.model_cache.list_models()
- this was introduced in PR #1525 and not caught during my
  code review
2022-11-22 16:46:26 +00:00
Lincoln Stein
02d02a86b1 gracefully handle broken or missing models at initial load time
- If initial model fails to load, invoke.py will inform the user that
  something is wrong with models.yaml or the models themselves and
  drop user into configure_invokeai.py to repair the problem.

- The model caching system will no longer try to reload the current model
  if there is none.
2022-11-22 16:36:11 +00:00
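A hedged sketch of the fallback described above, structure only (names, import path and messages are illustrative, not the actual invoke.py code):

```python
import sys
import traceback

def load_initial_model_or_repair(model_cache, model_name):
    """Sketch: load the initial model, or explain the problem and hand off to repair."""
    try:
        return model_cache.get_model(model_name)  # illustrative call
    except Exception:
        traceback.print_exc()
        print(
            "** Could not load the initial model. Your models.yaml or the "
            "model weights may be missing or damaged.",
            file=sys.stderr,
        )
        print("** Launching configure_invokeai.py to repair the problem.", file=sys.stderr)
        import configure_invokeai  # hypothetical import path for the repair script
        configure_invokeai.main()
        sys.exit(1)
```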
Lincoln Stein
ba9c695463 Merge branch 'development' into fix-model-load-error-reporting 2022-11-22 16:24:00 +00:00
Lincoln Stein
8202f34f38 Merge remote-tracking branch 'origin' into fix-model-load-error-reporting 2022-11-22 16:22:29 +00:00
Lincoln Stein
40a7f47d22 change typehint "a|b" operation to Union[a,b] to run on Python < 3.10
- this incompatibility was introduced by #1525 and missed during
  code review
2022-11-22 11:21:04 -05:00
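For reference, the PEP 604 `a | b` syntax in annotations only works at runtime on Python 3.10+; on 3.9 and earlier the equivalent spelling is `typing.Union` / `typing.Optional` (the function below is illustrative):

```python
from typing import Optional, Union

# Python 3.10+ only (PEP 604); evaluating this annotation on 3.9 raises TypeError:
#   def get_model(name: str | None = None) -> dict | None: ...
#
# Works on Python 3.9 and earlier:
def get_model(name: Optional[str] = None) -> Union[dict, None]:
    return {"name": name} if name else None
```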
Lincoln Stein
37bcf9cc47 this small fix adds back the load_models.py script
- fixes broken setup.py in current dev
- it is just an alias for configure_invokeai.py
- preload_models.py will be deprecated, but for now
  it is a second alias
2022-11-22 11:21:04 -05:00
Ben Alkov
0340d9ad53 fix(install): more fixes
- install scripts:
   - allow enabling the pip cache (pass 'use-cache' as an arg to the install script)
   - debug message showing which sourceball we're downloading
   - add 'wheel' to pip update, so we can speed up installs from source (and quiet deprecations)
- install.sh: use absolute path for micromamba
- setup.py:
  - fill 'install_requires' using 'requirements.in'
  - fix 'load_models' script name
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-22 11:06:50 -05:00
Damian Stewart
0d35a67e9c
fix run-breaking typo (#1532) 2022-11-22 14:27:23 +01:00
Damian Stewart
1260e28d94 fix typo 2022-11-22 14:21:15 +01:00
devops117
229f782e3b check the function signatures and add some easy annotations
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
2022-11-22 08:14:58 -05:00
devops117
c15b839dd4 remove additional newline from the textwrap.dedent string
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
2022-11-22 08:14:58 -05:00
devops117
a095214e52 cleanup ldm/invoke/model_cache.py
- remove duplicate import: os
- ldm.util.ask_user is now imported only once
- introduce the textwrap and contextlib packages to clean up the code
- a bare `return` returns None implicitly, so it is omitted
- a function returns None by default, so an explicit `return None` is omitted
- dict.get returns None by default when the key is not found, so the explicit default is omitted
- True is already a bool, and if a method only ever returns True it should not return anything at all
- added some indentation and line breaks to further improve readability

Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
2022-11-22 08:14:58 -05:00
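The idioms mentioned above, shown in isolation (illustrative snippets with made-up data, not the refactored model_cache.py):

```python
import textwrap
from contextlib import suppress

config = {"stable-diffusion-1.5": {"description": "base model"}}

# dict.get already defaults to None, so no explicit default is needed:
entry = config.get("missing-model")          # None, same as config.get(..., None)

# contextlib lets a try/except/pass collapse into a single with-block:
with suppress(KeyError):
    del config["missing-model"]

# textwrap.dedent keeps long messages readable without leading-space noise:
print(textwrap.dedent("""\
    ** Model not found in models.yaml.
    ** Run configure_invokeai.py to add it."""))
```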
Lincoln Stein
8e81425e89 fix outcropping crash when png has no InvokeAI metadata
- Closes #1461
2022-11-21 16:35:00 -05:00
Damian Stewart
c5cbe8f87d
When doing -t/--log_tokenization, also log prompt parser output (#1529)
The log was deleted at some point; this brings it back when the user passes
`--log_tokenization`/`-t`
2022-11-21 19:37:55 +01:00
Damian Stewart
e0581a2c37 when doing --log_tokenization/-t also log parsed prompt 2022-11-21 19:27:44 +01:00
Lincoln Stein
32f538bf3a fix another place where rename() should be replace() 2022-11-21 08:44:26 -05:00
Lincoln Stein
3c5a14a814 fixes configure_invokeai.py crash on Windows systems
The step in which the new models.yaml file replaces the old one was
crashing on Windows because os.rename() refuses to replace an existing
file there, unlike its behavior on Linux and Mac. The os.replace()
function, which was introduced in Python 3, supposedly fixes this.
2022-11-21 08:44:26 -05:00
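The portable pattern the fix relies on: write the new file next to the old one, then swap it in with os.replace(), which overwrites an existing target on Windows as well (paths below are illustrative):

```python
import os

# Illustrative paths; the real code operates on the runtime models.yaml.
new_file = "models.yaml.new"
target = "models.yaml"

with open(new_file, "w") as f:
    f.write("# freshly generated model config\n")

# os.rename(new_file, target) raises FileExistsError on Windows when the
# target already exists; os.replace() overwrites it on every platform.
os.replace(new_file, target)
```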
Lincoln Stein
0661256b61 Merge branch 'interactive-configuration' into development 2022-11-20 23:32:28 +00:00
David Burnett
602e35db65 Fix issues with '.' not being consistent when run using the web GUI. 2022-11-20 18:22:13 -05:00
mauwii
bc7ece771d instead of linking the model file, use a custom models.yaml 2022-11-20 18:21:34 -05:00
Lincoln Stein
38bdb440d0 remove several debugging messages
- dangling debug messages in several files, introduced during
  testing of the external root directory
- these need to be removed before they are interpreted as errors by users
2022-11-20 18:20:40 -05:00
Ben Alkov
ce8c2bea2f fix(install): load_models needs to be absolutely last
setup.py: Put in the name of the *product*, not the project

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-20 12:23:20 -05:00
Ben Alkov
3ac0f11e97 toil(invoke): more meaningful messaging
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-20 12:23:20 -05:00
Ben Alkov
98fe49cb55 hotfix(unified install): last minute changes missing from PR #1506
'requirements.in':
  - add picklescan
  - finally find a good compromise for torch (==1.12.0) and
    torchvision (==0.13.0) across all platforms
'invoke.sh': hotfix for macOS - add `export PYTORCH_ENABLE_MPS_FALLBACK=1`

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-20 12:23:20 -05:00
Damian Stewart
2b7e3abe57
fix(args): fix INITFILE spelling (#1518)
fixes #1516
2022-11-20 01:45:27 +01:00
Kevin Turner
150c4a5d2d
fix(args): fix INITFILE spelling 2022-11-19 12:01:02 -08:00
Lincoln Stein
0381a853b5 add interactive configuration to the model loader
- Loader is renamed `configure_invokeai.py`, but `preload_models.py` is retained
  (as a shell) for backward compatibility

- At startup, if no runtime root directory exists and no `.invokeai` startup file is
  present, user will be prompted to select the runtime and outputs directories.

- Also expanded the number of initial models offered to the user to include the
  most "liked" ones from HuggingFace, including the two trinart models, the
  PaperCut model, and the VoxelArt model.

- Created a configuration file for initial models to be offered to the user, at
  configs/INITIAL_MODELS.yaml
2022-11-19 19:20:28 +00:00
Damian Stewart
c79ec204ec
Fixed default --embiggen_strength to None to avoid it being printed on every run (#1515) 2022-11-19 13:24:11 +01:00
JPPhoto
8d3b1582a5 Fixed default to None 2022-11-19 11:50:26 +00:00
Lincoln Stein
5fd7d71a7a remove several debugging messages
- dangling debug messages in several files, introduced during
  testing of the external root directory
- these need to be removed before they are interpreted as errors by users
2022-11-18 21:14:28 +00:00
Kevin Coakley
19e2cff18c Fix micromamba tar command for macOS
Moved the -O from after the file to after the tar command for compatibility with macOS

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2022-11-18 16:08:17 -05:00
Kevin Coakley
1f0220697b Fix micromamba tar command for macOS
Moved the -O from after the file to after the tar command for compatibility with macOS

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2022-11-18 15:57:06 -05:00
Lincoln Stein
18ae3949ef fix typo in error message 2022-11-18 20:53:49 +00:00
Lincoln Stein
aa95510444
Merge branch 'development' into create-invokeai-run-directory 2022-11-18 15:27:51 -05:00
Lincoln Stein
f33df25830 address all review comments; needs testing 2022-11-18 15:25:23 -05:00
Lincoln Stein
3a5a8ceba5 Merge branch 'create-invokeai-run-directory' of github.com:/invoke-ai/InvokeAI into create-invokeai-run-directory 2022-11-18 19:35:45 +00:00