472c97e4e8
Merge branch 'main' into patch-1
2023-02-06 22:05:47 +02:00
ac23a321b0
build (hires-strength-slider)
2023-02-07 08:22:39 +13:00
f52b233205
Add Hi Res Strength Slider
2023-02-07 08:22:39 +13:00
13474e985b
Merge branch 'main' into patch-1
2023-02-06 12:54:07 -05:00
28b40bebbe
Refactor CUDA cache clearing to add statistical reporting. ( #2553 )
2023-02-06 12:53:30 -05:00
8ab66a211c
force torch reinstall ( #2532 )
...
For the torch and torchvision libraries **only**, the installer will now
pass the pip `--force-reinstall` option. This is intended to fix issues
with the user getting a CPU-only version of torch and then not being
able to replace it.
2023-02-06 11:58:57 -05:00
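The installer change above amounts to passing pip's `--force-reinstall` flag for those two packages. A minimal sketch of the equivalent manual command (the bare package names are an assumption, not the installer's exact invocation, which may pin versions and an index URL):

```shell
# Reinstall torch and torchvision even if they are already present,
# replacing e.g. a CPU-only build that pip would otherwise leave in place.
python -m pip install --force-reinstall torch torchvision
```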
bc03ff8b30
Merge branch 'main' into install/force-torch-reinstall
2023-02-06 11:31:57 -05:00
0247d63511
Build (negative-prompt-box)
2023-02-07 05:21:09 +13:00
7604b36577
Add Negative Prompts Box
2023-02-07 05:21:09 +13:00
4a026bd46e
Organize language picker items alphabetically
2023-02-07 05:21:09 +13:00
6241fc19e0
Fix the model manager edit placeholder not being full height
2023-02-07 05:21:09 +13:00
25d7d71dd8
Slightly decrease the size of the tab list icons
2023-02-07 05:21:09 +13:00
2432adb38f
In exception handlers, clear the torch CUDA cache (if we're using CUDA) to free up memory for other programs using the GPU and to reduce fragmentation. ( #2549 )
2023-02-06 10:33:24 -05:00
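The pattern described above — emptying the CUDA allocator cache from an exception handler — can be sketched roughly as follows. The function name and the optional-module parameter are illustrative, not InvokeAI's actual code:

```python
def clear_cuda_cache(torch_mod=None):
    """Free cached GPU memory if CUDA is in use; safe to call anywhere.

    Returns True if a cache clear was performed, False otherwise.
    `torch_mod` exists only so the helper can be exercised without torch.
    """
    if torch_mod is None:
        try:
            import torch as torch_mod
        except ImportError:
            return False  # no torch installed, nothing to clear
    if torch_mod.cuda.is_available():
        # Release cached allocations back to the driver so other GPU
        # programs can use the memory, and to reduce fragmentation.
        torch_mod.cuda.empty_cache()
        return True
    return False

# Typical use: clear the cache when generation fails, then re-raise.
# try:
#     generate_image(...)          # hypothetical generation call
# except RuntimeError:
#     clear_cuda_cache()
#     raise
```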
91acae30bf
Merge branch 'main' into patch-1
2023-02-06 10:14:27 -05:00
a485d45400
Update test-invoke-pip.yml ( #2524 )
...
test-invoke-pip.yml:
- enable caching of pip dependencies in `actions/setup-python@v4`
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (since we currently use 190.96 GB of 10 GB while I
am writing this)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results
model_manager.py:
- read files in chunks when calculating sha (the Windows runner crashes
otherwise)
2023-02-06 12:56:15 +01:00
a40bdef29f
update model_manager.py
...
- read files in chunks when calculating sha
- the Windows runner crashes without this
2023-02-06 12:30:10 +01:00
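Hashing in fixed-size chunks, as the commit above describes, keeps memory use flat regardless of model size. A minimal sketch of the technique (SHA-256 and the 64 KiB chunk size are assumptions about the general approach, not the exact parameters in model_manager.py):

```python
import hashlib

def file_sha256(path, chunk_size=64 * 1024):
    """Compute a file's SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # read() in chunks so multi-gigabyte model files never occupy
        # more than chunk_size bytes of RAM at once
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```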
fc2670b4d6
update test-invoke-pip.yml
...
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (since currently 183.59 GB of 10 GB are used)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results
2023-02-06 12:30:10 +01:00
c3807b044d
Merge branch 'main' into install/force-torch-reinstall
2023-02-06 00:18:38 -05:00
b7ab025f40
Update base.py ( #2543 )
...
Free up CUDA cache right after each image is generated. VRAM usage drops down to pre-generation levels.
2023-02-06 05:14:35 +00:00
633f702b39
fix crash in txt2img and img2img w/ inpainting models and perlin > 0 ( #2544 )
...
- get_perlin_noise() was returning 9 channels; fixed code to return
noise for just the 4 image channels and not the mask ones.
- Closes Issue #2541
2023-02-05 23:50:32 -05:00
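The fix described above amounts to trimming the generated noise down to the image channels before use. A rough sketch with NumPy standing in for the actual tensors (the shapes and the helper name are illustrative, not the real `get_perlin_noise()`):

```python
import numpy as np

IMAGE_CHANNELS = 4  # latent image channels; inpainting models add mask channels

def trim_noise_channels(noise):
    """Keep only the image channels of a (C, H, W) noise array.

    Inpainting models carry extra mask/conditioning channels (9 total per
    the bug report); perlin noise must cover only the 4 image channels.
    """
    return noise[:IMAGE_CHANNELS]

noise = np.random.default_rng(0).standard_normal((9, 64, 64))
trimmed = trim_noise_channels(noise)
# trimmed.shape == (4, 64, 64)
```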
0240656361
fix crash in txt2img and img2img w/ inpainting models and perlin > 0
...
- get_perlin_noise() was returning 9 channels; fixed code to return
noise for just the 4 image channels and not the mask ones.
- Closes Issue #2541
2023-02-05 22:55:08 -05:00
719a5de506
Merge branch 'main' into patch-1
2023-02-05 21:43:13 -05:00
05bb9e444b
update pypi_helper.py ( #2533 )
...
- don't rename requests
- remove dash in version (`2.3.0-rc3` becomes `2.3.0rc3`)
- read package_name instead of hardcoding it
2023-02-06 03:34:52 +01:00
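The dash removal mentioned above matches how pip normalizes pre-release versions under PEP 440 (`2.3.0-rc3` and `2.3.0rc3` identify the same release). A one-line sketch of the transformation (the function name is illustrative, not pypi_helper.py's actual code):

```python
def normalize_version(version: str) -> str:
    """Drop the dash before a pre-release tag: '2.3.0-rc3' -> '2.3.0rc3'.

    Note: a plain replace() removes every dash, which is fine for the
    simple version strings handled here.
    """
    return version.replace("-", "")

# normalize_version("2.3.0-rc3") == "2.3.0rc3"
```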
0076757767
Merge branch 'main' into dev/ci/update-pypi-helper
2023-02-05 21:10:49 -05:00
6ab03c4d08
fix crash in both textual_inversion and merge front ends when not enough models defined ( #2540 )
...
- Issue is that if insufficient diffusers models are defined in
models.yaml, the frontend would crash ungracefully.
- Now it emits appropriate error messages telling the user what the
problem is.
2023-02-05 19:34:07 -05:00
142016827f
fix formatting bugs in both textual_inversion and merge front ends
...
- Issue is that if insufficient diffusers models are defined in
models.yaml, the frontend would crash ungracefully.
- Now it emits appropriate error messages telling the user what the
problem is.
2023-02-05 18:35:01 -05:00
466a82bcc2
Updates frontend README.md ( #2539 )
2023-02-05 17:25:25 -05:00
05349f6cdc
Merge branch 'main' into dev/ci/update-pypi-helper
2023-02-05 17:13:09 -05:00
ab585aefae
Update README.md
2023-02-06 09:07:44 +11:00
083ce9358b
hotfix build-container.yml ( #2537 )
...
fix broken tag
2023-02-05 22:30:23 +01:00
f56cf2400a
Merge branch 'main' into install/force-torch-reinstall
2023-02-05 15:40:35 -05:00
5de5e659d0
Better AMD clarification
...
To clarify that AMD is supported when using Linux
2023-02-05 12:29:50 -08:00
fc53f6d47c
hotfix build-container.yml
2023-02-05 21:25:44 +01:00
2f70daef8f
Issue/2487/address docker issues ( #2517 )
...
Address issues of #2487
2023-02-05 21:20:13 +01:00
fc2a136eb0
add requested change
2023-02-05 21:15:39 +01:00
ce3da40434
Merge branch 'main' into install/force-torch-reinstall
2023-02-05 15:01:56 -05:00
7933f27a72
update pypi_helper.py
...
- don't rename requests
- remove dash in version (`2.3.0-rc3` becomes `2.3.0rc3`)
- read package_name instead of hardcoding it
2023-02-05 20:45:31 +01:00
1c197c602f
update Dockerfile, .dockerignore and workflow
...
- don't build the frontend, due to complications with QEMU
- set pip cache dir
- add pip cache to all pip-related build steps
- don't lock pip cache
- update .dockerignore to exclude unneeded files
2023-02-05 20:20:50 +01:00
90656aa7bf
update Dockerfile
...
- add build arg `FRONTEND_DIR`
2023-02-05 20:20:50 +01:00
394b4a771e
update Dockerfile
...
- remove yarn install args `--prefer-offline` and `--production=false`
2023-02-05 20:20:50 +01:00
9c3f548900
update settings output in build.sh
2023-02-05 20:20:50 +01:00
5662d2daa8
add invokeai/frontend/dist/** to .dockerignore
2023-02-05 20:20:50 +01:00
fc0f966ad2
fix docs
2023-02-05 20:20:50 +01:00
eb702a5049
fix env.sh, update Dockerfile, update build.sh
...
env.sh:
- move check for torch to CONTAINER_FLAVOR detection
Dockerfile:
- only mount `/var/cache/apt` for apt-related steps
- remove `docker-clean` from `/etc/apt/apt.conf.d` for BuildKit cache
- remove `apt-get clean` for BuildKit cache
- only copy frontend to frontend-builder
- mount `/usr/local/share/.cache/yarn` in frontend-builder
- separate steps for yarn install and yarn build
- build pytorch in pyproject-builder
build.sh
- prepare for installation with extras
2023-02-05 20:20:50 +01:00
1386d73302
fix env.sh
...
only try to auto-detect CUDA/ROCm if torch is installed
2023-02-05 20:20:50 +01:00
6089f33e54
fix HUGGING_FACE_HUB_TOKEN
2023-02-05 20:20:50 +01:00
3a260cf54f
update directory from docker-build to docker
2023-02-05 20:20:50 +01:00
9949a438f4
update docs with newly added variables
...
also remove outdated information
2023-02-05 20:20:50 +01:00
84c1122208
fix build.sh and env.sh
2023-02-05 20:20:50 +01:00
cc3d431928
2.3.0rc4 ( #2514 )
...
This will bring main up to date with v2.3.0-rc4
2023-02-05 14:05:15 -05:00