Commit Graph

6429 Commits

SHA1 Message Date
75b28eb79b Update CONCEPTS.md 2023-07-06 12:22:52 -04:00
ec7c2f07c6 model merge backend, CLI and TUI working 2023-07-06 12:21:42 -04:00
2eddd5db7d Update and rename TEXTUAL_INVERSION.md to TRAINING.md 2023-07-06 11:52:49 -04:00
82978d3ee5 Update Combinatorial Setting Information 2023-07-06 11:28:21 -04:00
b250d1ec86 Merge branch 'main' into doc_updates_23 2023-07-06 11:24:42 -04:00
48258c4bb8 wip(docs): ELI5 Tutorial For Invocations 2023-07-06 11:24:05 -04:00
d5f90b1a02 Improved loading for UI (#3667)
* load images on gallery render

* wait for models to be loaded before you can invoke

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-06 14:48:42 +00:00
a9e77675a8 Move clip skip to separate node 2023-07-06 17:39:49 +03:00
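Clip skip, moved to its own node here and added to the prompt node further down this page, means taking the CLIP text encoder's hidden states from a layer before the last. A minimal sketch of the technique using the `transformers` API; the model ID and helper below are illustrative, and the InvokeAI node wiring itself is not shown.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Illustrative model ID; InvokeAI loads CLIP from its own model directories.
MODEL_ID = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(MODEL_ID)
text_encoder = CLIPTextModel.from_pretrained(MODEL_ID)

def encode_prompt(prompt: str, clip_skip: int = 0) -> torch.Tensor:
    tokens = tokenizer(prompt, return_tensors="pt")
    out = text_encoder(**tokens, output_hidden_states=True)
    if clip_skip > 0:
        # Take the hidden states clip_skip layers before the last, then
        # re-apply the final layer norm (the convention diffusers follows).
        hidden = out.hidden_states[-(clip_skip + 1)]
        return text_encoder.text_model.final_layer_norm(hidden)
    return out.last_hidden_state
```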
94faa5de14 fixes ImportError described in #3658.
The issue was introduced by a new release of torchmetrics.
2023-07-06 16:16:02 +02:00
7a0154a7b8 expose max_cache_size to invokeai-configure interface (#3664)
This PR allows the user to set the model manager cache size from within
the `invokeai-configure` TUI.
2023-07-07 01:58:22 +12:00
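A hedged sketch of what the exposed setting controls: the model manager keeps loaded models in a RAM cache capped at `max_cache_size` gigabytes and evicts the least recently used entries to make room. Everything below other than the `max_cache_size` name is illustrative, not InvokeAI's actual classes.

```python
from collections import OrderedDict

class ModelCache:
    """Illustrative LRU cache bounded by max_cache_size (GB of RAM)."""

    def __init__(self, max_cache_size: float = 6.0):
        self.max_cache_size = max_cache_size  # value invokeai-configure exposes
        self.current_size = 0.0
        self._models: OrderedDict[str, tuple[object, float]] = OrderedDict()

    def put(self, key: str, model: object, size_gb: float) -> None:
        # Evict least-recently-used models until the new one fits.
        while self._models and self.current_size + size_gb > self.max_cache_size:
            _, (_, evicted_gb) = self._models.popitem(last=False)
            self.current_size -= evicted_gb
        self._models[key] = (model, size_gb)
        self.current_size += size_gb

    def get(self, key: str) -> object:
        model, size_gb = self._models.pop(key)
        self._models[key] = (model, size_gb)  # mark as most recently used
        return model
```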
b229fe19aa Merge branch 'main' into lstein/configure-max-cache-size 2023-07-07 01:52:12 +12:00
04b57c408f Add clip skip option to prompt node 2023-07-06 16:09:40 +03:00
2595c1d86f LoRA model loading fixes (#3663)
This PR enables the model manager to import diffusers-style .bin LoRAs.
However, since there is no backend support for this type of LoRA yet,
attempting to use one results in an unimplemented error.

It closes #3636 and #3637
2023-07-07 01:09:13 +12:00
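A hedged sketch of the behavior the PR above describes: a probe recognizes a diffusers-style .bin file as a LoRA so it can be imported, while the load path raises until backend support lands. All names here are illustrative, not InvokeAI's API.

```python
from pathlib import Path

def probe_lora_format(path: Path) -> str:
    # Illustrative probe: classify by extension so the model manager can import it.
    if path.suffix == ".safetensors":
        return "lycoris"    # kohya/LyCORIS-style LoRA (supported by the backend)
    if path.suffix == ".bin":
        return "diffusers"  # diffusers-style LoRA: importable, not yet loadable
    raise ValueError(f"not a recognized LoRA file: {path}")

def load_lora(path: Path):
    if probe_lora_format(path) == "diffusers":
        # Per the PR: importing works, but applying the LoRA is unimplemented.
        raise NotImplementedError("diffusers-style .bin LoRAs cannot be applied yet")
    ...
```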
c2eb6c33b9 Merge branch 'main' into lstein/more-model-loading-fixes 2023-07-07 01:00:02 +12:00
94e38e9769 feat(ui): remove delete image button in gallery
it was really easy to click accidentally; the button is just commented out, so it's easy to add back or gate behind a setting in the future
2023-07-06 22:35:50 +10:00
984121d682 only show delete icon if big enough 2023-07-06 22:35:50 +10:00
6f1268e2b1 Merge branch 'main' into lstein/more-model-loading-fixes 2023-07-07 00:32:22 +12:00
405054d802 feat: Add Embedding Picker to Linear UI (#3654) 2023-07-07 00:29:19 +12:00
a901a37433 feat(ui): improve no loaded loras UI 2023-07-06 22:26:54 +10:00
e09c07a97d fix(ui): fix board auto-add 2023-07-06 22:25:05 +10:00
87feae959d feat(ui): improve no loaded embeddings UI 2023-07-06 22:24:50 +10:00
c21245f590 fix(api): make list models params queries, make path /, remove defaults
The list models route should just be the base route path and should use query parameters as opposed to path parameters (which cannot be optional).

Removed defaults for the update model route; for the purposes of the API, we should always be explicit with this.
2023-07-06 15:34:50 +10:00
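The reasoning in the commit above, sketched with FastAPI (the framework InvokeAI's API is built on): a path parameter is part of the URL and can never be omitted, while query parameters can carry defaults or be optional, so filters belong in the query string and the list route lives at the base path. The route and parameter names below are illustrative.

```python
from typing import Optional
from fastapi import APIRouter

router = APIRouter(prefix="/api/v1/models")

# Before (illustrative): path parameters cannot be optional, so listing all
# models would need a separate route for every filter combination.
# @router.get("/{base_model}/{model_type}")

# After: the list route is the base path and filters are query parameters.
@router.get("/")
async def list_models(
    base_model: Optional[str] = None,  # omitted -> no filtering on base model
    model_type: Optional[str] = None,  # omitted -> all model types
):
    ...
```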
fbd6b25b4d feat(ui): improve ux on TI autocomplete
- cursor reinserts at the end of the trigger
- `enter` closes the select
- popover styling
2023-07-06 14:56:37 +10:00
267f0408bb Update PROMPTS with Dynamic Prompts docs 2023-07-05 23:50:04 -04:00
cc8c34311c Update LICENSE 2023-07-05 23:46:27 -04:00
2415dc1235 feat(ui): refactor embedding ui; now is autocomplete 2023-07-06 13:40:13 +10:00
8f5fcb188c Merge branch 'main' into lstein/model-manager-router-api 2023-07-05 23:16:43 -04:00
f7daa6e71d all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT 2023-07-05 23:13:01 -04:00
3691b55565 fix autoimport crash 2023-07-05 21:53:08 -04:00
1ee41822bc restore .gitignore treatment of frontend/web 2023-07-05 21:30:56 -04:00
fbad839d23 add missing .js files v3.0.0+a8 2023-07-05 21:09:13 -04:00
f610045a14 Merge branch 'main' into mps-fp16-fixes 2023-07-05 21:01:48 -04:00
a7cbcae176 expose max_cache_size to invokeai-configure interface 2023-07-05 20:59:57 -04:00
0a6dccd607 expose max_cache_size to invokeai-configure interface 2023-07-05 20:59:14 -04:00
43c51ff157 Merge branch 'main' into lstein/more-model-loading-fixes 2023-07-05 20:48:15 -04:00
bf25818d76 rebuild front end; bump version 2023-07-05 20:33:28 -04:00
cfa3b2419c partial implementation of merge 2023-07-05 20:25:47 -04:00
d4550b3059 clean up lint errors in lora.py 2023-07-05 19:18:25 -04:00
83d3a043da merge latest changes from main 2023-07-05 19:15:53 -04:00
169ff6368b Update mps_fixes.py - additional torch op for nodes
This fixes scaling in the nodes UI.
2023-07-05 17:47:23 -04:00
71dad6d404 Merge branch 'main' into ti-ui 2023-07-05 16:57:31 -04:00
c21bd806f0 default LoRA weight to 0.75 2023-07-05 16:54:23 -04:00
007d125e40 Update README.md 2023-07-05 16:53:37 -04:00
716d154957 Update LICENSE 2023-07-05 16:41:28 -04:00
685a47cc7d fix crash during lora application 2023-07-05 16:40:47 -04:00
52498cc0b9 Put tokenizer and text encoder in same clip-vit-large-patch14 (#3662)
This PR fixes the migrate script so that it uses the same directory for
both the tokenizer and text encoder CLIP models. This will fix a crash
that occurred during checkpoint->diffusers conversions.

This PR also removes the check for an existing models directory in the
target root directory when `invokeai-migrate3` is run.
2023-07-05 16:29:33 -04:00
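A hedged sketch of the invariant the migrate fix above restores: both the tokenizer and the text encoder are loaded from the same clip-vit-large-patch14 directory, so a checkpoint->diffusers conversion never finds one half of the CLIP stack without the other. The local path is illustrative.

```python
from transformers import CLIPTextModel, CLIPTokenizer

# Both halves of the CLIP stack come from the same directory, so a
# checkpoint->diffusers conversion never sees a tokenizer without its encoder.
clip_path = "models/clip-vit-large-patch14"  # illustrative local path
tokenizer = CLIPTokenizer.from_pretrained(clip_path)
text_encoder = CLIPTextModel.from_pretrained(clip_path)
```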
cb947bcbf0 Merge branch 'main' into lstein/fix-migrate3-textencoder 2023-07-05 16:23:00 -04:00
bbfb5bb1d4 Remove hardcoded cuda device in model manager init (#3624)
There was a line in model_manager.py in which the GPU device was
hardcoded to "cuda". This has now been removed.
2023-07-05 16:22:45 -04:00
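A hedged sketch of the shape of this fix: pick the torch device from an explicit setting or by availability rather than hardcoding "cuda", which breaks CPU-only and MPS installs. The helper name is an assumption, not InvokeAI's actual code.

```python
from typing import Optional

import torch

def choose_device(requested: Optional[str] = None) -> torch.device:
    # Respect an explicit setting; otherwise fall back by availability.
    if requested:
        return torch.device(requested)
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```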
f8bbec8572 Recognize and load diffusers-style LoRAs (.bin)
Prevent double-reporting of autoimported models
- closes #3636

Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:21:23 -04:00
863336acbb Recognize and load diffusers-style LoRAs (.bin)
Prevent double-reporting of autoimported models
- closes #3636

Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:19:16 -04:00