Lincoln Stein
8d88ad3b8d
restore ability to launch web server with invokeai --web
2023-07-07 10:07:15 -04:00
Lincoln Stein
56f4712814
fix checkpoint VAE handling in migrate script
2023-07-07 09:34:42 -04:00
Lincoln Stein
9f58ed35cf
improve user migration experience
...
- No longer fail root directory probing if invokeai.yaml is missing
(test is now whether a `models/core` directory exists).
- Migrate script does not overwrite previously-installed models.
- Can run migrate script on an existing 2.3 version directory
with --from and --to pointing to same 2.3 root.
2023-07-07 08:18:46 -04:00
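A minimal sketch of the relaxed root-directory probe described in the commit above: the directory is accepted as an InvokeAI root when a `models/core` subdirectory exists, even if `invokeai.yaml` is missing. The function name and return convention are illustrative, not InvokeAI's actual code.
```python
from pathlib import Path

def looks_like_invokeai_root(path: Path) -> bool:
    """Heuristic root check: accept a directory as an InvokeAI root if it
    contains a models/core subdirectory, even when invokeai.yaml is absent.
    (Illustrative only; the real probe may check additional markers.)"""
    return (path / "models" / "core").is_dir()
```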
Lincoln Stein
54f3686e3b
merge with main, fix conflicts
2023-07-06 15:21:45 -04:00
Lincoln Stein
f78f10bef6
Merge branch 'lstein/model-manager-router-api'
2023-07-06 15:13:41 -04:00
Lincoln Stein
e9352227f3
add merge api
2023-07-06 15:12:34 -04:00
Lincoln Stein
6cb7df75de
Add REACT API routes for model manager ( #3639 )
...
This PR adds the following API methods for managing models:
* list_models (GET)
* update_model (PATCH)
* import_model (POST)
* delete_model (DELETE)
* convert_model (PUT)
* merge_models (PUT)
2023-07-06 15:10:37 -04:00
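A hypothetical FastAPI router mirroring the verb/operation pairing listed in the commit above. Only the operation names and HTTP methods come from the commit message; the paths, prefix, and handler signatures are assumptions for illustration.
```python
from fastapi import APIRouter

# Illustrative router only; not InvokeAI's actual route definitions.
models_router = APIRouter(prefix="/api/v1/models", tags=["models"])

@models_router.get("/")
async def list_models():
    ...  # GET: enumerate installed models

@models_router.patch("/{model_name}")
async def update_model(model_name: str):
    ...  # PATCH: update an existing model's attributes

@models_router.post("/import")
async def import_model():
    ...  # POST: import a model from a path or URL

@models_router.delete("/{model_name}")
async def delete_model(model_name: str):
    ...  # DELETE: remove a model

@models_router.put("/convert/{model_name}")
async def convert_model(model_name: str):
    ...  # PUT: convert a checkpoint model to diffusers

@models_router.put("/merge")
async def merge_models():
    ...  # PUT: merge two or three models into a new one
```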
Lincoln Stein
e573a533ae
remove redundant import
2023-07-06 13:24:58 -04:00
Lincoln Stein
581be42c75
Merge branch 'main' into lstein/model-manager-router-api
2023-07-06 13:20:36 -04:00
Lincoln Stein
90c66aab3d
merge with upstream
2023-07-06 13:17:02 -04:00
Lincoln Stein
3e925fbf34
model merging API ready for testing
2023-07-06 13:15:15 -04:00
Lincoln Stein
ec7c2f07c6
model merge backend, CLI and TUI working
2023-07-06 12:21:42 -04:00
Mary Hipp Rogers
d5f90b1a02
Improved loading for UI ( #3667 )
...
* load images on gallery render
* wait for models to be loaded before you can invoke
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-07-06 14:48:42 +00:00
blessedcoolant
7a0154a7b8
expose max_cache_size to invokeai-configure interface ( #3664 )
...
This PR allows the user to set the model manager cache size from within
the `invokeai-configure` TUI.
2023-07-07 01:58:22 +12:00
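Illustrative only: a toy in-RAM cache that honors the kind of `max_cache_size` ceiling (in GB) the commit above exposes in the configure TUI. The class and attribute names are assumptions, not InvokeAI's actual model-cache API.
```python
class ToyModelCache:
    def __init__(self, max_cache_size: float = 6.0):
        self.max_cache_size = max_cache_size  # GB of RAM to devote to cached models
        self._entries: dict[str, tuple[object, float]] = {}  # key -> (model, size_gb)

    def put(self, key: str, model: object, size_gb: float) -> None:
        # Evict oldest entries until the new model fits under the ceiling.
        while self._entries and self._used_gb() + size_gb > self.max_cache_size:
            self._entries.pop(next(iter(self._entries)))
        self._entries[key] = (model, size_gb)

    def _used_gb(self) -> float:
        return sum(size for _, size in self._entries.values())
```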
blessedcoolant
b229fe19aa
Merge branch 'main' into lstein/configure-max-cache-size
2023-07-07 01:52:12 +12:00
blessedcoolant
2595c1d86f
LoRA model loading fixes ( #3663 )
...
This PR enables the model manager to import diffusers-style .bin LoRAs.
However, since there is no backend support for this type of LoRA yet,
attempts to use them will result in an unimplemented error.
It closes #3636 and #3637.
2023-07-07 01:09:13 +12:00
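A hedged sketch of the behavior described above: a diffusers-style .bin LoRA is recognized at import time but refused at application time until backend support lands. The function names are hypothetical, not InvokeAI's actual code.
```python
from pathlib import Path

def looks_like_diffusers_lora(path: Path) -> bool:
    # Real probing would inspect the checkpoint's keys; suffix is enough for a sketch.
    return path.suffix == ".bin"

def apply_lora(path: Path) -> None:
    if looks_like_diffusers_lora(path):
        raise NotImplementedError(
            f"{path.name}: diffusers-style .bin LoRAs can be imported but not yet used"
        )
    # ...application of supported LoRA formats would continue here
```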
blessedcoolant
c2eb6c33b9
Merge branch 'main' into lstein/more-model-loading-fixes
2023-07-07 01:00:02 +12:00
psychedelicious
94e38e9769
feat(ui): remove delete image button in gallery
...
the button was too easy to click accidentally; it is only commented out, so it is easy to add back or to put behind a setting in the future
2023-07-06 22:35:50 +10:00
Mary Hipp
984121d682
only show delete icon if big enough
2023-07-06 22:35:50 +10:00
blessedcoolant
6f1268e2b1
Merge branch 'main' into lstein/more-model-loading-fixes
2023-07-07 00:32:22 +12:00
blessedcoolant
405054d802
feat: Add Embedding Picker to Linear UI ( #3654 )
2023-07-07 00:29:19 +12:00
psychedelicious
a901a37433
feat(ui): improve no loaded loras UI
2023-07-06 22:26:54 +10:00
psychedelicious
e09c07a97d
fix(ui): fix board auto-add
2023-07-06 22:25:05 +10:00
psychedelicious
87feae959d
feat(ui): improve no loaded embeddings UI
2023-07-06 22:24:50 +10:00
psychedelicious
c21245f590
fix(api): make list models params queries, make path /, remove defaults
...
The list models route should just be the base route path, and should use query parameters as opposed to path parameters (which cannot be optional).
Removed defaults for the update model route; for the purposes of the API, we should always be explicit about these values.
2023-07-06 15:34:50 +10:00
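A hedged sketch of the routing change described above: list models lives at the router's base path, and filters arrive as optional query parameters (which can simply be omitted) rather than path parameters (which cannot). The parameter names are assumptions.
```python
from typing import Optional
from fastapi import APIRouter, Query

router = APIRouter(prefix="/api/v1/models")

@router.get("/")
async def list_models(
    base_model: Optional[str] = Query(default=None),  # hypothetical filter
    model_type: Optional[str] = Query(default=None),  # hypothetical filter
):
    # Both filters are optional query parameters, so GET /api/v1/models/
    # with no arguments returns everything.
    ...
```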
psychedelicious
fbd6b25b4d
feat(ui): improve ux on TI autocomplete
...
- cursor reinserts at the end of the trigger
- `enter` closes the select
- popover styling
2023-07-06 14:56:37 +10:00
psychedelicious
2415dc1235
feat(ui): refactor embedding ui; now is autocomplete
2023-07-06 13:40:13 +10:00
Lincoln Stein
8f5fcb188c
Merge branch 'main' into lstein/model-manager-router-api
2023-07-05 23:16:43 -04:00
Lincoln Stein
f7daa6e71d
all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT
2023-07-05 23:13:01 -04:00
Lincoln Stein
3691b55565
fix autoimport crash
2023-07-05 21:53:08 -04:00
Lincoln Stein
1ee41822bc
restore .gitignore treatment of frontend/web
2023-07-05 21:30:56 -04:00
Lincoln Stein
fbad839d23
add missing .js files
2023-07-05 21:09:13 -04:00
Lincoln Stein
a7cbcae176
expose max_cache_size to invokeai-configure interface
2023-07-05 20:59:57 -04:00
Lincoln Stein
0a6dccd607
expose max_cache_size to invokeai-configure interface
2023-07-05 20:59:14 -04:00
Lincoln Stein
43c51ff157
Merge branch 'main' into lstein/more-model-loading-fixes
2023-07-05 20:48:15 -04:00
Lincoln Stein
bf25818d76
rebuild front end; bump version
2023-07-05 20:33:28 -04:00
Lincoln Stein
cfa3b2419c
partial implementation of merge
2023-07-05 20:25:47 -04:00
Lincoln Stein
d4550b3059
clean up lint errors in lora.py
2023-07-05 19:18:25 -04:00
Lincoln Stein
83d3a043da
merge latest changes from main
2023-07-05 19:15:53 -04:00
Lincoln Stein
71dad6d404
Merge branch 'main' into ti-ui
2023-07-05 16:57:31 -04:00
Lincoln Stein
c21bd806f0
default LoRA weight to 0.75
2023-07-05 16:54:23 -04:00
Lincoln Stein
685a47cc7d
fix crash during lora application
2023-07-05 16:40:47 -04:00
Lincoln Stein
52498cc0b9
Put tokenizer and text encoder in same clip-vit-large-patch14 ( #3662 )
...
This PR fixes the migrate script so that it uses the same directory for
both the tokenizer and text encoder CLIP models. This will fix a crash
that occurred during checkpoint->diffusers conversions.
This PR also removes the check for an existing models directory in the
target root directory when `invokeai-migrate3` is run.
2023-07-05 16:29:33 -04:00
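A minimal sketch, assuming the Hugging Face transformers library, of the layout the fix above relies on: both the tokenizer and the text encoder load from one clip-vit-large-patch14 directory. The directory path is hypothetical.
```python
from transformers import CLIPTextModel, CLIPTokenizer

clip_dir = "models/core/convert/clip-vit-large-patch14"  # hypothetical path

# Both components resolve against the same directory, so a checkpoint->diffusers
# conversion never looks for the tokenizer and text encoder in different places.
tokenizer = CLIPTokenizer.from_pretrained(clip_dir)
text_encoder = CLIPTextModel.from_pretrained(clip_dir)
```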
Lincoln Stein
cb947bcbf0
Merge branch 'main' into lstein/fix-migrate3-textencoder
2023-07-05 16:23:00 -04:00
Lincoln Stein
bbfb5bb1d4
Remove hardcoded cuda device in model manager init ( #3624 )
...
There was a line in model_manager.py in which the GPU device was
hardcoded to "cuda". This has now been removed.
2023-07-05 16:22:45 -04:00
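A minimal sketch of the fix's intent: choose the execution device at runtime instead of hardcoding "cuda". InvokeAI's actual selection logic also consults its configuration; this is illustrative only.
```python
import torch

def choose_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```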
Lincoln Stein
f8bbec8572
Recognize and load diffusers-style LoRAs (.bin)
...
Prevent double-reporting of autoimported models
- closes #3636
Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:21:23 -04:00
Lincoln Stein
863336acbb
Recognize and load diffusers-style LoRAs (.bin)
...
Prevent double-reporting of autoimported models
- closes #3636
Allow autoimport of diffusers-style LoRA models
- closes #3637
2023-07-05 16:19:16 -04:00
Lincoln Stein
90ae8ce26a
prevent model install crash "torch needs to be restarted with spawn"
2023-07-05 16:18:20 -04:00
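A hedged sketch of the crash class named in the commit above: CUDA cannot be re-initialized in a forked child process, so work that touches torch/CUDA should run in a process created with the "spawn" start method. The worker function and model path are hypothetical.
```python
import multiprocessing as mp

def install_model(path: str) -> None:
    import torch  # imported inside the worker so the child initializes CUDA itself
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(f"installing {path} on {device}")

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # avoid the fork + CUDA re-initialization crash
    p = ctx.Process(target=install_model, args=("/tmp/model.safetensors",))
    p.start()
    p.join()
```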
Lincoln Stein
ad5d90aca8
prevent model install crash "torch needs to be restarted with spawn"
2023-07-05 15:38:07 -04:00
Lincoln Stein
5b6dd47b9f
add API for model convert
2023-07-05 15:13:21 -04:00