b229fe19aa
Merge branch 'main' into lstein/configure-max-cache-size
2023-07-07 01:52:12 +12:00
04b57c408f
Add clip skip option to prompt node
2023-07-06 16:09:40 +03:00
2595c1d86f
LoRA model loading fixes (#3663)
...
This PR enables the model manager to import diffusers-style .bin LoRAs.
However, since there is no backend support for this type of LoRA yet,
attempting to use one results in an unimplemented error.
It closes #3636 and #3637.
2023-07-07 01:09:13 +12:00
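A minimal sketch of the import-time behavior described above, with hypothetical class and method names (not InvokeAI's actual model-manager API): the file can be registered and listed, but applying it raises until backend support lands.

```python
from pathlib import Path

class DiffusersLoRA:
    """Hypothetical wrapper for an imported diffusers-style .bin LoRA."""

    def __init__(self, path: Path):
        if path.suffix != ".bin":
            raise ValueError(f"expected a diffusers-style .bin file, got {path.name}")
        self.path = path  # importable and listable by the model manager

    def apply_to(self, model) -> None:
        # No backend support yet, per the PR description above.
        raise NotImplementedError(
            "diffusers-style .bin LoRAs can be imported but not yet applied"
        )
```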
c2eb6c33b9
Merge branch 'main' into lstein/more-model-loading-fixes
2023-07-07 01:00:02 +12:00
94e38e9769
feat(ui): remove delete image button in gallery
...
The button was really easy to click accidentally, so it has been commented out for now; it is easy to add back, or to gate behind a setting, in the future.
2023-07-06 22:35:50 +10:00
984121d682
only show delete icon if big enough
2023-07-06 22:35:50 +10:00
6f1268e2b1
Merge branch 'main' into lstein/more-model-loading-fixes
2023-07-07 00:32:22 +12:00
405054d802
feat: Add Embedding Picker to Linear UI (#3654)
2023-07-07 00:29:19 +12:00
a901a37433
feat(ui): improve no loaded loras UI
2023-07-06 22:26:54 +10:00
e09c07a97d
fix(ui): fix board auto-add
2023-07-06 22:25:05 +10:00
87feae959d
feat(ui): improve no loaded embeddings UI
2023-07-06 22:24:50 +10:00
c21245f590
fix(api): make list models params queries, make path /, remove defaults
...
The list models route should just be the base route path, and should use query parameters as opposed to path parameters (which cannot be optional).
Removed defaults for the update model route; for the purposes of the API, we should always be explicit with this.
2023-07-06 15:34:50 +10:00
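To illustrate the route shape this commit describes, here is a hedged FastAPI sketch (InvokeAI's API uses FastAPI; the prefix and parameter names below are assumptions): the list endpoint sits at the base path, and filters arrive as optional query parameters, which, unlike path parameters, may be omitted.

```python
from typing import Optional
from fastapi import APIRouter, Query

models_router = APIRouter(prefix="/api/v1/models")

@models_router.get("/")
async def list_models(
    base_model: Optional[str] = Query(default=None),  # optional filter
    model_type: Optional[str] = Query(default=None),  # optional filter
):
    # e.g. GET /api/v1/models/?model_type=lora
    return {"models": []}  # placeholder body
```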
fbd6b25b4d
feat(ui): improve ux on TI autocomplete
...
- cursor reinserts at the end of the trigger
- `enter` closes the select
- popover styling
2023-07-06 14:56:37 +10:00
267f0408bb
Update PROMPTS with Dynamic Prompts docs
2023-07-05 23:50:04 -04:00
cc8c34311c
Update LICENSE
2023-07-05 23:46:27 -04:00
2415dc1235
feat(ui): refactor embedding ui; now is autocomplete
2023-07-06 13:40:13 +10:00
8f5fcb188c
Merge branch 'main' into lstein/model-manager-router-api
2023-07-05 23:16:43 -04:00
f7daa6e71d
all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT
2023-07-05 23:13:01 -04:00
3691b55565
fix autoimport crash
2023-07-05 21:53:08 -04:00
1ee41822bc
restore .gitignore treatment of frontend/web
2023-07-05 21:30:56 -04:00
fbad839d23
add missing .js files
v3.0.0+a8
2023-07-05 21:09:13 -04:00
f610045a14
Merge branch 'main' into mps-fp16-fixes
2023-07-05 21:01:48 -04:00
a7cbcae176
expose max_cache_size to invokeai-configure interface
2023-07-05 20:59:57 -04:00
0a6dccd607
expose max_cache_size to invokeai-configure interface
2023-07-05 20:59:14 -04:00
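A minimal sketch of what exposing the setting might look like, assuming an argparse-style flag whose name mirrors the config field (the actual invokeai-configure wiring, and the default shown, are assumptions):

```python
import argparse

parser = argparse.ArgumentParser(prog="invokeai-configure")
parser.add_argument(
    "--max_cache_size",
    type=float,
    default=6.0,  # GB of RAM for the model cache; default value assumed
    help="Maximum size (GB) of the model RAM cache",
)
args = parser.parse_args(["--max_cache_size", "8.0"])
print(args.max_cache_size)  # 8.0
```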
43c51ff157
Merge branch 'main' into lstein/more-model-loading-fixes
2023-07-05 20:48:15 -04:00
bf25818d76
rebuild front end; bump version
2023-07-05 20:33:28 -04:00
cfa3b2419c
partial implementation of merge
2023-07-05 20:25:47 -04:00
d4550b3059
clean up lint errors in lora.py
2023-07-05 19:18:25 -04:00
83d3a043da
merge latest changes from main
2023-07-05 19:15:53 -04:00
169ff6368b
Update mps_fixes.py - additional torch op for nodes
...
This fixes scaling in the nodes UI.
2023-07-05 17:47:23 -04:00
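mps_fixes.py monkey-patches torch ops that lack MPS kernels; a generic version of that pattern, with the specific op left as an assumption, routes MPS tensors through the CPU and back:

```python
import torch

def mps_cpu_fallback(op):
    """Wrap an op with no MPS kernel so MPS tensors round-trip via CPU."""
    def wrapped(tensor: torch.Tensor, *args, **kwargs):
        if tensor.device.type == "mps":
            return op(tensor.cpu(), *args, **kwargs).to("mps")
        return op(tensor, *args, **kwargs)
    return wrapped

# Illustrative patch site only; which op fixed the nodes scaling is not shown:
# torch.nn.functional.interpolate = mps_cpu_fallback(torch.nn.functional.interpolate)
```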
71dad6d404
Merge branch 'main' into ti-ui
2023-07-05 16:57:31 -04:00
c21bd806f0
default LoRA weight to 0.75
2023-07-05 16:54:23 -04:00
007d125e40
Update README.md
2023-07-05 16:53:37 -04:00
716d154957
Update LICENSE
2023-07-05 16:41:28 -04:00
685a47cc7d
fix crash during lora application
2023-07-05 16:40:47 -04:00
52498cc0b9
Put tokenizer and text encoder in same clip-vit-large-patch14 (#3662)
...
This PR fixes the migrate script so that it uses the same directory for
both the tokenizer and text encoder CLIP models. This will fix a crash
that occurred during checkpoint->diffusers conversions.
This PR also removes the check for an existing models directory in the
target root directory when `invokeai-migrate3` is run.
2023-07-05 16:29:33 -04:00
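The resulting layout can be sketched as follows: both loaders point at one directory that holds the tokenizer files alongside the text-encoder weights (the path below is illustrative):

```python
from transformers import CLIPTextModel, CLIPTokenizer

clip_dir = "models/core/convert/clip-vit-large-patch14"  # assumed location
tokenizer = CLIPTokenizer.from_pretrained(clip_dir)      # reads tokenizer files
text_encoder = CLIPTextModel.from_pretrained(clip_dir)   # reads model weights
```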
cb947bcbf0
Merge branch 'main' into lstein/fix-migrate3-textencoder
2023-07-05 16:23:00 -04:00
bbfb5bb1d4
Remove hardcoded cuda device in model manager init (#3624)
...
There was a line in model_manager.py in which the GPU device was
hardcoded to "cuda". This has now been removed.
2023-07-05 16:22:45 -04:00
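A sketch of device selection without the hardcoded string; the actual change simply removed the hardcoded device, and this helper is only one way the choice could be made:

```python
import torch

def choose_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # torch >= 1.12
        return torch.device("mps")
    return torch.device("cpu")
```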
f8bbec8572
Recognize and load diffusers-style LoRAs (.bin)
...
Prevent double-reporting of autoimported models
- closes #3636
Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:21:23 -04:00
863336acbb
Recognize and load diffusers-style LoRAs (.bin)
...
Prevent double-reporting of autoimported models
- closes #3636
Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:19:16 -04:00
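A hypothetical probe for the recognition step: a diffusers-style LoRA ships as a .bin state dict, so the format can be guessed from the extension and key names. The key check below is illustrative, not an exhaustive spec:

```python
from pathlib import Path
import torch

def looks_like_diffusers_lora(path: Path) -> bool:
    if path.suffix != ".bin":
        return False
    state_dict = torch.load(path, map_location="cpu")
    # Diffusers LoRA keys contain "lora" segments; exact patterns vary.
    return isinstance(state_dict, dict) and any("lora" in k for k in state_dict)
```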
90ae8ce26a
prevent model install crash "torch needs to be restarted with spawn"
2023-07-05 16:18:20 -04:00
ad5d90aca8
prevent model install crash "torch needs to be restarted with spawn"
2023-07-05 15:38:07 -04:00
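The crash message points at the standard CUDA/multiprocessing constraint: CUDA cannot be re-initialized in a forked child, so worker processes must use the "spawn" start method. A minimal sketch with a stub worker:

```python
import multiprocessing as mp

def install_worker(model_path: str) -> None:
    import torch  # imported in the child, so CUDA init happens after spawn
    print(model_path, torch.cuda.is_available())

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    proc = ctx.Process(target=install_worker, args=("/tmp/model.safetensors",))
    proc.start()
    proc.join()
```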
5b6dd47b9f
add API for model convert
2023-07-05 15:13:21 -04:00
5027d0a603
accept @psychedelicious's suggestions above
2023-07-05 14:50:57 -04:00
9f9ce08e44
Merge branch 'main' into lstein/remove-hardcoded-cuda-device
2023-07-05 13:38:33 -04:00
17c5568661
build: remove web ui dist from gitignore (#3650)
...
The web UI should manage its own .gitignore.
I think this would explain why certain files were not making it into the PyPI
release.
2023-07-05 13:36:16 -04:00
94740e440d
Merge branch 'main' into build/gitignore
2023-07-05 13:35:54 -04:00
021e1eca8e
Merge branch 'main' into mps-fp16-fixes
2023-07-05 13:19:52 -04:00
5fe722900d
allow clip-vit-large-patch14 text encoder to coexist with tokenizer in same directory
2023-07-05 13:15:08 -04:00
cf173b522b
allow clip-vit-large-patch14 text encoder to coexist with tokenizer in same directory
2023-07-05 13:14:41 -04:00