Commit Graph

2750 Commits

Author SHA1 Message Date
Lincoln Stein
1c9fd00f98 this is likely the penultimate rc 2023-02-06 12:03:08 -05:00
Lincoln Stein
8ab66a211c
force torch reinstall (#2532)
For the torch and torchvision libraries **only**, the installer will now
pass the pip `--force-reinstall` option. This is intended to fix issues
with the user getting a CPU-only version of torch and then not being
able to replace it.
2023-02-06 11:58:57 -05:00
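A minimal sketch of how an installer step can issue such a reinstall, assuming pip is driven through `subprocess` (the function name and the optional index-url handling are illustrative, not the actual installer code):

```python
import subprocess
import sys

def reinstall_torch(packages=("torch", "torchvision"), index_url=None):
    """Force-reinstall torch/torchvision so a previously installed CPU-only
    wheel gets replaced even though pip would otherwise consider it satisfied."""
    cmd = [sys.executable, "-m", "pip", "install", "--force-reinstall", *packages]
    if index_url:
        # e.g. a CUDA-specific wheel index (assumption for illustration)
        cmd += ["--extra-index-url", index_url]
    subprocess.run(cmd, check=True)
```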
Lincoln Stein
bc03ff8b30
Merge branch 'main' into install/force-torch-reinstall 2023-02-06 11:31:57 -05:00
blessedcoolant
0247d63511 Build (negative-prompt-box) 2023-02-07 05:21:09 +13:00
blessedcoolant
7604b36577 Add Negative Prompts Box 2023-02-07 05:21:09 +13:00
blessedcoolant
4a026bd46e Organize language picker items alphabetically 2023-02-07 05:21:09 +13:00
blessedcoolant
6241fc19e0 Fix the model manager edit placeholder not being full height 2023-02-07 05:21:09 +13:00
blessedcoolant
25d7d71dd8 Slightly decrease the size of the tab list icons 2023-02-07 05:21:09 +13:00
Jonathan
2432adb38f
In exception handlers, clear the torch CUDA cache (if we're using CUDA) to free up memory for other programs using the GPU and to reduce fragmentation. (#2549) 2023-02-06 10:33:24 -05:00
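A sketch of the pattern described, assuming a generation callable is wrapped (the wrapper name is illustrative; the actual handlers live in the generation code):

```python
import torch

def run_with_cache_cleanup(generate, *args, **kwargs):
    """Invoke a generation callable; on failure, release cached CUDA memory
    so other GPU programs can reclaim it and fragmentation is reduced."""
    try:
        return generate(*args, **kwargs)
    except Exception:
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        raise
```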
Matthias Wild
a485d45400
Update test-invoke-pip.yml (#2524)
test-invoke-pip.yml:
- enable caching of pip dependencies in `actions/setup-python@v4`
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (at the time of writing it uses 190.96 GB of the
10 GB quota)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results

model_manager.py:
- read files in chunks when calculating the SHA (the Windows runner crashes
otherwise)
2023-02-06 12:56:15 +01:00
mauwii
a40bdef29f update model_manager.py
- read files in chunks when calculating the SHA
  - the Windows runner crashes without this
2023-02-06 12:30:10 +01:00
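A minimal sketch of chunked hashing as described; the function name and the 1 MiB chunk size are illustrative rather than the actual model_manager.py code:

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 2**20) -> str:
    """Hash a potentially multi-gigabyte model file in fixed-size chunks
    instead of reading it into memory in one piece."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha.update(chunk)
    return sha.hexdigest()
```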
mauwii
fc2670b4d6 update test-invoke-pip.yml
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (since currently 183.59 GB of the 10 GB quota are used)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results
2023-02-06 12:30:10 +01:00
Lincoln Stein
c3807b044d
Merge branch 'main' into install/force-torch-reinstall 2023-02-06 00:18:38 -05:00
Jonathan
b7ab025f40
Update base.py (#2543)
Free up CUDA cache right after each image is generated. VRAM usage drops down to pre-generation levels.
2023-02-06 05:14:35 +00:00
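A sketch of where the cleanup sits, assuming a per-prompt generation callable (names are illustrative, not the actual base.py code):

```python
import torch

def generate_all(generate_one, prompts):
    """Generate images one at a time, returning cached VRAM to the driver
    after each image so usage drops back to pre-generation levels."""
    images = []
    for prompt in prompts:
        images.append(generate_one(prompt))
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    return images
```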
Lincoln Stein
633f702b39
fix crash in txt2img and img2img w/ inpainting models and perlin > 0 (#2544)
- get_perlin_noise() was returning 9 channels; fixed code to return
noise for just the 4 image channels and not the mask ones.

- Closes Issue #2541
2023-02-05 23:50:32 -05:00
Lincoln Stein
0240656361 fix crash in txt2img and img2img w/ inpainting models and perlin > 0
- get_perlin_noise() was returning 9 channels; fixed code to return
  noise for just the 4 image channels and not the mask ones.

- Closes Issue #2541
2023-02-05 22:55:08 -05:00
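A hedged sketch of the channel restriction described, assuming the noise tensor is laid out channels-first (the function name and slicing are illustrative, not the actual get_perlin_noise() fix):

```python
import torch

def image_channel_noise(perlin_noise: torch.Tensor, image_channels: int = 4) -> torch.Tensor:
    """Restrict perlin noise to the latent image channels.

    Inpainting models add extra mask/conditioning channels to the latent
    input (9 channels in total); blending perlin noise into those extra
    channels is what broke, so only the first 4 channels are kept.
    """
    return perlin_noise[:image_channels, ...]
```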
Matthias Wild
05bb9e444b
update pypi_helper.py (#2533)
- don't rename requests
- remove dash in version (`2.3.0-rc3` becomes `2.3.0rc3`)
- read package_name instead of hardcoding it
2023-02-06 03:34:52 +01:00
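A minimal sketch of the version normalization described (the helper name is illustrative, not the actual pypi_helper.py code):

```python
def normalize_version(tag: str) -> str:
    """Drop the dash from a pre-release tag so it matches the normalized
    form used on PyPI, e.g. '2.3.0-rc3' -> '2.3.0rc3'."""
    return tag.replace("-", "")


assert normalize_version("2.3.0-rc3") == "2.3.0rc3"
```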
Lincoln Stein
0076757767
Merge branch 'main' into dev/ci/update-pypi-helper 2023-02-05 21:10:49 -05:00
Lincoln Stein
6ab03c4d08
fix crash in both textual_inversion and merge front ends when not enough models defined (#2540)
- The issue is that if insufficient diffusers models are defined in
models.yaml, the frontend crashes ungracefully.

- It now emits appropriate error messages telling the user what the
problem is.
2023-02-05 19:34:07 -05:00
Lincoln Stein
142016827f fix formatting bugs in both textual_inversion and merge front ends
- The issue is that if insufficient diffusers models are defined in
  models.yaml, the frontend crashes ungracefully.

- It now emits appropriate error messages telling the user what the
  problem is.
2023-02-05 18:35:01 -05:00
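A hedged sketch of such a guard, assuming models.yaml has been loaded into a dict and that diffusers entries carry `format: diffusers` (the helper name, the minimum count, and the exact check are illustrative):

```python
def require_diffusers_models(models: dict, minimum: int) -> None:
    """Exit with a readable message instead of crashing when models.yaml
    defines fewer diffusers models than the front end needs."""
    diffusers = [name for name, cfg in models.items()
                 if cfg.get("format") == "diffusers"]
    if len(diffusers) < minimum:
        raise SystemExit(
            f"models.yaml defines only {len(diffusers)} diffusers model(s), "
            f"but at least {minimum} are required. Please install more models "
            "and try again."
        )
```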
Lincoln Stein
466a82bcc2
Updates frontend README.md (#2539) 2023-02-05 17:25:25 -05:00
Lincoln Stein
05349f6cdc
Merge branch 'main' into dev/ci/update-pypi-helper 2023-02-05 17:13:09 -05:00
psychedelicious
ab585aefae
Update README.md 2023-02-06 09:07:44 +11:00
Matthias Wild
083ce9358b
hotfix build-container.yml (#2537)
fix broken tag
2023-02-05 22:30:23 +01:00
Lincoln Stein
f56cf2400a
Merge branch 'main' into install/force-torch-reinstall 2023-02-05 15:40:35 -05:00
mauwii
fc53f6d47c
hotfix build-container.yml 2023-02-05 21:25:44 +01:00
Matthias Wild
2f70daef8f
Issue/2487/address docker issues (#2517)
Address issues of #2487
2023-02-05 21:20:13 +01:00
mauwii
fc2a136eb0
add requested change 2023-02-05 21:15:39 +01:00
Lincoln Stein
ce3da40434 Merge branch 'main' into install/force-torch-reinstall 2023-02-05 15:01:56 -05:00
mauwii
7933f27a72
update pypi_helper.py
- don't rename requests
- remove dash in version (`2.3.0-rc3` becomes `2.3.0rc3`)
- read package_name instead of hardcoding it
2023-02-05 20:45:31 +01:00
mauwii
1c197c602f update Dockerfile, .dockerignore and workflow
- don't build the frontend, due to complications with QEMU
- set pip cache dir
- add pip cache to all pip-related build steps
- don't lock pip cache
- update dockerignore to exclude unneeded files
2023-02-05 20:20:50 +01:00
mauwii
90656aa7bf update Dockerfile
- add build arg `FRONTEND_DIR`
2023-02-05 20:20:50 +01:00
mauwii
394b4a771e update Dockerfile
- remove yarn install args `--prefer-offline` and `--production=false`
2023-02-05 20:20:50 +01:00
mauwii
9c3f548900 update settings output in build.sh 2023-02-05 20:20:50 +01:00
mauwii
5662d2daa8 add invokeai/frontend/dist/** to .dockerignore 2023-02-05 20:20:50 +01:00
mauwii
fc0f966ad2 fix docs 2023-02-05 20:20:50 +01:00
mauwii
eb702a5049 fix env.sh, update Dockerfile, update build.sh
env.sh:
- move check for torch to CONTAINER_FLAVOR detection

Dockerfile
- only mount `/var/cache/apt` for apt related steps
- remove `docker-clean` from `/etc/apt/apt.conf.d` for BuildKit cache
- remove apt-get clean for BuildKit cache
- only copy frontend to frontend-builder
- mount `/usr/local/share/.cache/yarn` in frontend-builder
- separate steps for yarn install and yarn build
- build pytorch in pyproject-builder

build.sh
- prepare for installation with extras
2023-02-05 20:20:50 +01:00
mauwii
1386d73302 fix env.sh
only try to auto-detect CUDA/ROCm if torch is installed
2023-02-05 20:20:50 +01:00
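env.sh itself is a shell script; the guard it describes, rendered here as an illustrative Python sketch (the function name and flavor strings are assumptions, not the actual script):

```python
import importlib.util

def detect_container_flavor() -> str:
    """Pick a flavor, probing for CUDA/ROCm only when torch is importable,
    so the detection no longer fails on machines without torch installed."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"
    import torch
    if torch.version.cuda:
        return "cuda"
    if getattr(torch.version, "hip", None):
        return "rocm"
    return "cpu"
```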
mauwii
6089f33e54 fix HUGGING_FACE_HUB_TOKEN 2023-02-05 20:20:50 +01:00
mauwii
3a260cf54f update directory from docker-build to docker 2023-02-05 20:20:50 +01:00
mauwii
9949a438f4 update docs with newly added variables
also remove outdated information
2023-02-05 20:20:50 +01:00
mauwii
84c1122208 fix build.sh and env.sh 2023-02-05 20:20:50 +01:00
Lincoln Stein
cc3d431928
2.3.0rc4 (#2514)
This will bring main up to date with v2.3.0-rc4
2023-02-05 14:05:15 -05:00
Lincoln Stein
c44b060a2e
Merge branch 'main' into 2.3.0rc4 2023-02-05 13:40:56 -05:00
Lincoln Stein
eff7fb89d8 installer will --force-reinstall torch 2023-02-05 13:39:46 -05:00
Lincoln Stein
cd5c112fcd
Allow multiple models to be imported by passing a directory. (#2529)
This change allows a directory containing multiple models to be passed and
imported in one operation.

Ensures that diffusers directories will still work.

Fixed up some minor type issues.
2023-02-05 13:36:00 -05:00
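A hedged sketch of how such directory imports might be resolved, assuming a diffusers directory is recognized by its model_index.json and checkpoints by their file extension (the helper name is illustrative, not the actual importer code):

```python
from pathlib import Path

def find_importable_models(path):
    """Collect model candidates under a path.

    A directory carrying a model_index.json is treated as one diffusers
    model and left whole; any other directory contributes its checkpoint
    files; a plain file is imported as-is.
    """
    path = Path(path)
    if path.is_dir():
        if (path / "model_index.json").exists():
            return [path]
        return sorted(p for p in path.iterdir()
                      if p.suffix in {".ckpt", ".safetensors"})
    return [path]
```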
Lincoln Stein
563867fa99
Merge branch 'main' into main 2023-02-05 12:51:03 -05:00
Lincoln Stein
2e230774c2
Merge branch 'main' into 2.3.0rc4 2023-02-05 12:44:44 -05:00
Lincoln Stein
4ada4c9f1f
Add --log_tokenization to sysargs (#2523)
This allows the --log_tokenization option to be used as a command line
argument (or from invokeai.init), making it possible to view
tokenization information in the terminal when using the web interface.
2023-02-05 11:55:26 -05:00
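A minimal argparse sketch of exposing such a flag (the help text is illustrative; the real option is defined in InvokeAI's own argument handling):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--log_tokenization",
    action="store_true",
    help="Print how each prompt is split into tokens before generation.",
)

args = parser.parse_args(["--log_tokenization"])
print(args.log_tokenization)  # True
```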
blessedcoolant
9a6966924c
Merge branch 'main' into main 2023-02-06 05:33:48 +13:00