Commit Graph

2786 Commits

Author SHA1 Message Date
mauwii
c2e11dfe83 update build-container.yml
- add long sha tag
- update cache-from
Dockerfile:
- re-use `apt-get update`
env.sh/build.sh:
- rename platform to lowercase
2023-02-07 09:27:20 +01:00
mauwii
17e1930229 remove CONTAINER_FLAVOR build arg
also disable the currently unused PIP_PACKAGE build arg;
will start using it once the problems with XFORMERS are sorted out
2023-02-07 09:27:20 +01:00
mauwii
bde94347d3 don't use --link in COPY 2023-02-07 09:27:20 +01:00
mauwii
b1612afff4 update .dockerignore 2023-02-07 09:27:20 +01:00
mauwii
1d10d952b2 use cleartext DOCKERHUB_USERNAME 2023-02-07 09:27:20 +01:00
mauwii
9150f9ef3c move LABEL to top 2023-02-07 09:27:20 +01:00
mauwii
7bc0f7cc6c update Docker Hub description 2023-02-07 09:27:20 +01:00
mauwii
c52d11b24c optionally push to DockerHub 2023-02-07 09:27:20 +01:00
mauwii
59486615dd update build-container.yml 2023-02-07 09:27:20 +01:00
mauwii
f0212cd361 update Dockerfile 2023-02-07 09:27:20 +01:00
mauwii
ee4cb5fdc9 add id to Build container 2023-02-07 09:27:20 +01:00
mauwii
75b919237b update cache-from 2023-02-07 09:27:20 +01:00
mauwii
07a9062e1f update .dockerignore and scripts 2023-02-07 09:27:20 +01:00
mauwii
cdb3e18b80 add flavor to pip cache id
to prevent cache invalidation
2023-02-07 09:27:20 +01:00
Lincoln Stein
28a5424242
Update textual inversion doc with the correct CLI name. (#2560) 2023-02-07 01:22:03 -05:00
Lincoln Stein
8d418af20b
Merge branch 'main' into ti-doc-update 2023-02-07 00:59:53 -05:00
Lincoln Stein
055badd611
Diffusers Samplers (#2565)
- The Diffusers sampler list is independent from the CKPT sampler list, and
the app will load the correct list based on which model is loaded.
- Isolated the activeModelSelector because it is used in multiple places.
- Possible fix for the white-screen bug that some users face. This was
happening because of a possible null in the active model list
description tag, which should hopefully now be fixed with the new
activeModelSelector.

I'll keep tabs on the last item. Good to go.
2023-02-07 00:59:32 -05:00
blessedcoolant
944f9e98a7 build (diffusers-samplers) 2023-02-07 18:29:14 +13:00
blessedcoolant
fcffcf5602 Diffusers Samplers
Display the sampler list based on the active model.
2023-02-07 18:26:06 +13:00
blessedcoolant
f121dfe120 Update model select to use new active model selector
Hopefully this also fixes the white screen error that some users face.
2023-02-07 18:25:45 +13:00
blessedcoolant
a7dd7b4298 Add activeModelSelector
Active model details are used in multiple places, so it makes sense to have a selector for them.
2023-02-07 18:25:12 +13:00
Lincoln Stein
d5810f6270
Bring main up to date with RC5 (#2555)
Updated the version number
2023-02-06 22:23:58 -05:00
Lincoln Stein
ac6e9238f1
Merge branch 'main' into ti-doc-update 2023-02-06 20:06:33 -05:00
Dan Sully
6343b245ef Update textual inversion doc with the correct CLI name. 2023-02-06 14:51:22 -08:00
Lincoln Stein
8c80da2844
Merge branch 'main' into 2.3.0rc5 2023-02-06 17:38:25 -05:00
Lincoln Stein
a12189e088
fix build-container.yml (#2557)
This should fix the build-container workflow when triggered by a tag
(the failure was mentioned in #2555)
2023-02-06 15:09:04 -05:00
Lincoln Stein
a56e3014a4
Merge branch 'main' into update/ci/refine-build-container 2023-02-06 14:42:02 -05:00
Lincoln Stein
53d2d34b3d
Merge branch 'main' into 2.3.0rc5 2023-02-06 14:34:16 -05:00
blessedcoolant
ac23a321b0 build (hires-strength-slider) 2023-02-07 08:22:39 +13:00
blessedcoolant
f52b233205 Add Hi Res Strength Slider 2023-02-07 08:22:39 +13:00
mauwii
8242fc8bad
update metadata 2023-02-06 19:58:48 +01:00
Matthias Wild
09b6f7572b
Merge branch 'invoke-ai:main' into main 2023-02-06 19:50:40 +01:00
Lincoln Stein
bde6e96800
Merge branch 'main' into 2.3.0rc5 2023-02-06 12:55:47 -05:00
Jonathan
28b40bebbe
Refactor CUDA cache clearing to add statistical reporting. (#2553) 2023-02-06 12:53:30 -05:00
Lincoln Stein
1c9fd00f98 this is likely the penultimate rc 2023-02-06 12:03:08 -05:00
Lincoln Stein
8ab66a211c
force torch reinstall (#2532)
For the torch and torchvision libraries **only**, the installer will now
pass the pip `--force-reinstall` option. This is intended to fix issues
with the user getting a CPU-only version of torch and then not being
able to replace it.
2023-02-06 11:58:57 -05:00
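The forced-reinstall behavior described in this commit can be sketched in Python as follows. The helper name and package list handling are hypothetical (the actual installer may differ); the pip `--force-reinstall` option itself is real and is what the commit describes passing for torch and torchvision only.

```python
import subprocess
import sys

# Packages that get pip's --force-reinstall flag, per the commit above;
# everything else installs normally. (Helper name is hypothetical.)
FORCE_REINSTALL = {"torch", "torchvision"}

def pip_install_args(package: str) -> list:
    """Build the pip command line, forcing reinstall of torch packages
    so a previously installed CPU-only torch gets replaced."""
    args = [sys.executable, "-m", "pip", "install"]
    # Strip any version pin (e.g. "torch==2.0") before matching.
    if package.split("==")[0] in FORCE_REINSTALL:
        args.append("--force-reinstall")
    args.append(package)
    return args

# Usage (not executed here):
# subprocess.run(pip_install_args("torch"), check=True)
```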
Lincoln Stein
bc03ff8b30
Merge branch 'main' into install/force-torch-reinstall 2023-02-06 11:31:57 -05:00
blessedcoolant
0247d63511 Build (negative-prompt-box) 2023-02-07 05:21:09 +13:00
blessedcoolant
7604b36577 Add Negative Prompts Box 2023-02-07 05:21:09 +13:00
blessedcoolant
4a026bd46e Organize language picker items alphabetically 2023-02-07 05:21:09 +13:00
blessedcoolant
6241fc19e0 Fix the model manager edit placeholder not being full height 2023-02-07 05:21:09 +13:00
blessedcoolant
25d7d71dd8 Slightly decrease the size of the tab list icons 2023-02-07 05:21:09 +13:00
Jonathan
2432adb38f
In exception handlers, clear the torch CUDA cache (if we're using CUDA) to free up memory for other programs using the GPU and to reduce fragmentation. (#2549) 2023-02-06 10:33:24 -05:00
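The exception-handler pattern described in this commit can be sketched as below. This is a minimal illustration, not InvokeAI's actual code; the wrapper name is hypothetical, and the torch import is guarded so the sketch runs even without torch installed. `torch.cuda.empty_cache()` is the real PyTorch call for releasing cached VRAM blocks.

```python
try:
    import torch
    _HAS_CUDA = torch.cuda.is_available()
except ImportError:  # allow running the sketch without torch installed
    torch = None
    _HAS_CUDA = False

def generate_with_cleanup(generate_fn, *args, **kwargs):
    """Run a generation step; on failure, clear the CUDA cache.

    Freeing cached blocks in the exception path returns VRAM to other
    programs using the GPU and reduces fragmentation, as the commit
    describes. (Wrapper name is hypothetical.)
    """
    try:
        return generate_fn(*args, **kwargs)
    except Exception:
        if _HAS_CUDA:
            torch.cuda.empty_cache()  # release cached, unused VRAM
        raise
```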
mauwii
0402766f4d add author label 2023-02-06 14:05:27 +01:00
mauwii
a9ef5d1532 update tags 2023-02-06 14:05:27 +01:00
Matthias Wild
a485d45400
Update test-invoke-pip.yml (#2524)
test-invoke-pip.yml:
- enable caching of pip dependencies in `actions/setup-python@v4`
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (since we currently use 190.96 GB of 10 GB while I
am writing this)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results

model_manager.py:
- read files in chunks when calculating sha (the Windows runner crashes
otherwise)
2023-02-06 12:56:15 +01:00
mauwii
a40bdef29f update model_manager.py
- read files in chunks when calculating sha
  - the Windows runner crashes without this
2023-02-06 12:30:10 +01:00
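The chunked hashing approach described in this commit can be sketched as below. The function name and chunk size are illustrative, not taken from model_manager.py; the point is that reading fixed-size chunks keeps memory usage flat regardless of file size.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 2 ** 20) -> str:
    """Compute a file's SHA-256 by reading fixed-size chunks.

    Reading in chunks avoids loading a multi-GB model file into memory
    in one call, which is the kind of spike that was crashing the
    Windows runner. (Function name is hypothetical.)
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```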
mauwii
fc2670b4d6 update test-invoke-pip.yml
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (since currently 183.59 GB of 10 GB are used)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results
2023-02-06 12:30:10 +01:00
Lincoln Stein
c3807b044d
Merge branch 'main' into install/force-torch-reinstall 2023-02-06 00:18:38 -05:00
Jonathan
b7ab025f40
Update base.py (#2543)
Free up the CUDA cache right after each image is generated. VRAM usage drops to pre-generation levels.
2023-02-06 05:14:35 +00:00