Commit Graph

634 Commits

Author SHA1 Message Date
Lincoln Stein
c90345d6a3 deprecate the face restoration option 2023-07-15 18:23:32 -04:00
Lincoln Stein
9faffa2245 revert inadvertent breaking change to config causing test failures (override) 2023-07-15 18:15:59 -04:00
Lincoln Stein
6073cb8020 add documentation on the configuration system 2023-07-15 16:14:47 -04:00
psychedelicious
7b6159f8d6 feat(nodes): emit model loading events
- remove dependency on having access to a `node` during emits; this would require additional args passed through the system and I don't think it's necessary at this point. This also allowed us to drop an extraneous fetch/parse of the session from the db.
- provide the invocation context to all `get_model()` calls, so that the events can be emitted
- test all model loading events in the app and confirm socket events are received
2023-07-16 02:12:01 +10:00
psychedelicious
c7b547ea3e feat(nodes): remove references to restoration services
- remove restoration services
- remove the restore faces nodes
- update tests
2023-07-16 01:12:39 +10:00
psychedelicious
74ca87ac9e feat(nodes): add realesrgan node 2023-07-16 01:06:50 +10:00
psychedelicious
d270f21c85 feat(nodes): valid controlnet weights are -1 to 2 2023-07-15 19:56:44 +10:00
psychedelicious
ae72f372be fix(nodes): do not use hardcoded controlnet model 2023-07-15 19:56:44 +10:00
psychedelicious
29b2e59e65 fix(nodes): fix ref to ctx mgr service, missing import 2023-07-15 19:56:44 +10:00
psychedelicious
82fa39b531 feat(nodes): add controlnet nodes type hint 2023-07-15 19:56:44 +10:00
psychedelicious
788dcbde70 fix(nodes): add missing import 2023-07-15 19:56:44 +10:00
Sergey Borisov
6ab9a5e108 Draft 2023-07-15 19:56:44 +10:00
blessedcoolant
808b2de709 Merge branch 'main' into lstein/model-manager-route-enhancements 2023-07-15 16:56:54 +12:00
Lincoln Stein
2faa7cee37 add rename_model route 2023-07-14 23:03:18 -04:00
Sergey Borisov
7093e5d033 Pad conditionings using zeros and encoder_attention_mask 2023-07-15 00:52:54 +03:00
Lincoln Stein
e71ce83e9c Merge branch 'main' into lstein/model-manager-route-enhancements 2023-07-14 13:52:55 -04:00
Lincoln Stein
8600aad12b multiple enhancements to model manager REACT API
1. add a /sync route for synchronizing the in-memory model lists to
   models.yaml, the models directory, and the autoimport directories.

2. add optional destination_directories to convert_model and merge_model
   operations.

3. add /ckpt_confs route for retrieving known legacy checkpoint configuration
   files.

4. add /search route for finding all models in a directory located in the server
   filesystem.
2023-07-14 13:45:16 -04:00
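As a rough illustration of how a client might exercise the routes listed in this commit (the route prefix and the `search_path` parameter name are assumptions, not taken from the code):

```python
import requests

BASE = "http://localhost:9090/api/v1/models"  # assumed route prefix

requests.post(f"{BASE}/sync")              # resync in-memory model lists with models.yaml and the model directories
requests.get(f"{BASE}/ckpt_confs").json()  # known legacy checkpoint configuration files
requests.get(f"{BASE}/search", params={"search_path": "/srv/models"}).json()  # find models in a server-side directory
```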
Jonathan
9960d7ca2a Allow ImageResizeInvocation w/h to take inputs from other nodes (#3765) 2023-07-15 05:34:13 +12:00
Lincoln Stein
ad076b1174 add model directory search route 2023-07-14 11:14:33 -04:00
Mary Hipp Rogers
545e2f557f Merge branch 'main' into feat/api/image-max-age 2023-07-14 08:21:44 -04:00
psychedelicious
23c1a6b9d5 fix(nodes): make ResizeLatents w/h optional
Now you can connect to them in the node editor.
2023-07-14 21:42:42 +10:00
Sergey Borisov
9fb0b0959f Make sde and ancestral schedulers reproducible 2023-07-14 05:25:09 +03:00
Sergey Borisov
d8f88c09ea Fix pink output on a lot of samplers 2023-07-14 05:00:33 +03:00
psychedelicious
4d25d702a1 feat(api): app/config route returns available infill methods 2023-07-13 23:26:52 +10:00
maryhipp
fba25792f9 add new route for app config 2023-07-13 23:26:52 +10:00
blessedcoolant
430b9c291f fix: Loras not working correctly with Inpainting 2023-07-13 22:59:38 +12:00
psychedelicious
5f5c93abb4 feat(app): embed PNG info in invokeai_metadata and invokeai_graph
Using just `metadata` and `graph` feels a bit too generic.
2023-07-13 15:40:05 +10:00
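A minimal sketch of embedding those two keys with Pillow (the helper name and the assumption that the values arrive as JSON strings are illustrative only):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_invokeai_info(image: Image.Image, path: str, metadata_json: str, graph_json: str) -> None:
    pnginfo = PngInfo()
    pnginfo.add_text("invokeai_metadata", metadata_json)  # key names from the commit message
    pnginfo.add_text("invokeai_graph", graph_json)
    image.save(path, format="PNG", pnginfo=pnginfo)
```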
psychedelicious
50bef87da7 feat(db,nodes,api): refactor metadata
Metadata for the Linear UI is now sneakily provided via a `MetadataAccumulator` node, which the client populates / hooks up while building the graph.

Additionally, we provide the unexpanded graph with the metadata API response.

Both of these are embedded into the PNGs.

- Remove `metadata` from `ImageDTO`
- Split up the `images/` routes to accommodate this; metadata is only retrieved per-image
- `images/{image_name}` now gets the DTO
- `images/{image_name}/metadata` gets the new metadata
- `images/{image_name}/full` gets the full-sized image file
- Remove old metadata service
- Add `MetadataAccumulator` node, `CoreMetadataField`, hook up to `LatentsToImage` node
- Add `get_raw()` method to `ItemStorage`, which retrieves the row from the DB as a string with no pydantic parsing
- Update `images`-related services to handle storing and retrieving the new metadata
- Add `get_metadata_graph_from_raw_session` which extracts the `graph` from `session` without needing to hydrate the session in pydantic, in preparation for providing it as metadata; also removes all references to the `MetadataAccumulator` node
2023-07-13 15:40:05 +10:00
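A sketch of the resulting per-image endpoints from a client's point of view (the base URL is an assumption; the three paths are the ones listed above):

```python
import requests

BASE = "http://localhost:9090/api/v1/images"  # assumed base URL
name = "some_image.png"

dto = requests.get(f"{BASE}/{name}").json()                # the image DTO, no longer carrying metadata
metadata = requests.get(f"{BASE}/{name}/metadata").json()  # the new metadata
png_bytes = requests.get(f"{BASE}/{name}/full").content    # the full-sized image file
```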
Lincoln Stein
75c5ce46bc merged SDXLModelLoader into ModelLoader invocation 2023-07-11 16:33:08 -04:00
Lincoln Stein
25591788c1 fix conflicts 2023-07-11 15:55:10 -04:00
Lincoln Stein
dab03fb646 rename gpu_mem_reserved to max_vram_cache_size
To be consistent with max_cache_size, the amount of memory to hold in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
2023-07-11 15:25:39 -04:00
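A minimal sketch of the two settings side by side, assuming a pydantic-v1-style settings class; the field names follow the commit message and the defaults are illustrative only:

```python
from pydantic import BaseSettings, Field

class ModelCacheSettings(BaseSettings):
    max_cache_size: float = Field(6.0, description="RAM (GB) used to cache models in CPU memory")
    max_vram_cache_size: float = Field(2.75, description="VRAM (GB) held for model caching on the GPU")
```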
Lincoln Stein
d32f9f7cb0 reverse logic of gpu_mem_reserved
- gpu_mem_reserved now indicates the amount of VRAM that will be reserved
  for model caching (similar to max_cache_size).
2023-07-11 15:16:40 -04:00
Sergey Borisov
358ced6bab SDXL Prompt and t2l nodes draft, add fp32 to vae decode 2023-07-11 18:19:36 +03:00
Lincoln Stein
f6302aa691 Merge branch 'main' into release/invokeai-3-0-beta 2023-07-11 10:57:36 -04:00
Lincoln Stein
23d8a2777e add ability to filter list_models on list of base models 2023-07-10 21:59:32 -04:00
Lincoln Stein
8e42502dfd partial implementation of SDXL model loader 2023-07-10 20:18:30 -04:00
Lincoln Stein
83ec4c983c Merge branch 'main' into lstein/keep-models-in-vram 2023-07-10 18:47:05 -04:00
Lincoln Stein
c9c61ee459 Update invokeai/app/services/config.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-07-10 18:46:32 -04:00
psychedelicious
00e26ffa9a fix(nodes): remove board_id column from images table
This is extraneous; the `board_images` table holds image-board relationships.
2023-07-10 11:30:35 +10:00
Lincoln Stein
5759a390f9 introduce gpu_mem_reserved configuration parameter 2023-07-09 18:35:04 -04:00
Lincoln Stein
2f3190ad6c merge with main 2023-07-09 13:28:05 -04:00
psychedelicious
c5539b442c feat(api): set max-age for images
Image files are immutable and we expect deletion to result in no further requests for a given image, so we can set the max-age to something thicc.

Resolves #3426
2023-07-09 22:42:05 +10:00
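A sketch of what serving an immutable image with a long max-age can look like in FastAPI; the route path and the one-year value are assumptions for illustration:

```python
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()
IMAGE_MAX_AGE = 31536000  # one year, in seconds

@app.get("/api/v1/images/{image_name}/full")
def get_full_image(image_name: str) -> FileResponse:
    # Image files never change after creation, so aggressive caching is safe.
    return FileResponse(
        f"outputs/images/{image_name}",
        media_type="image/png",
        headers={"Cache-Control": f"max-age={IMAGE_MAX_AGE}"},
    )
```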
psychedelicious
4640969037 fix(api): fix for borked windows mimetypes registry
It's possible for the Windows mimetype registration for js files to be changed, causing content-type errors when running the app.

Explicitly set the mimetypes to rectify this. Note that the root cause is a misconfiguration on the client - not our end.

See https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
2023-07-09 11:05:01 +10:00
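A minimal sketch of the workaround described here, registering the mimetypes explicitly at startup so a broken registry entry cannot override them (the exact types registered are an assumption):

```python
import mimetypes

# Override whatever the Windows registry says for these extensions.
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
```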
Lincoln Stein
b6cc4df1d8 report processing stack traces to the console 2023-07-08 19:48:32 -04:00
Lincoln Stein
5f7435955e if models.yaml doesn't exist, rebuild it 2023-07-08 15:13:51 -04:00
Lincoln Stein
92d4486214 don't write 'version:' to the invokeai.yaml file 2023-07-08 12:06:23 -04:00
Lincoln Stein
bd9786564c merge with main 2023-07-08 10:11:25 -04:00
Lincoln Stein
f6ecee926f version of _find_root() that works in conda environment 2023-07-08 09:02:17 -04:00
Lincoln Stein
454c2c0952 version of _find_root() that works in conda environment 2023-07-08 09:01:05 -04:00
blessedcoolant
3333805821 feat: Add App Version to UI 2023-07-08 21:31:17 +12:00
Lincoln Stein
8e8f9cce0f print version when --version provided at command line 2023-07-07 20:47:29 -04:00
Lincoln Stein
10d3bccf32 Mac MPS FP16 fixes (#3641)
This PR allows FP16 precision to work on Macs with MPS. In
addition, it centralizes the torch fixes/workarounds required for MPS
into a new backend utility `mps_fixes.py`. This is conditionally
imported in `api_app.py`/`cli_app.py`.

Many MANY thanks to @StAlKeR7779 for patiently working to debug and fix
these issues.
2023-07-07 17:43:23 -04:00
Eugene Brodsky
97b2ec58e2 Merge branch 'main' into release/invokeai-3-0-alpha 2023-07-07 14:18:12 -04:00
Lincoln Stein
9f58ed35cf improve user migration experience
- No longer fail root directory probing if invokeai.yaml is missing
  (test is now whether a `models/core` directory exists).
- Migrate script does not overwrite previously-installed models.
- Can run migrate script on an existing 2.3 version directory
  with --from and --to pointing to the same 2.3 root.
2023-07-07 08:18:46 -04:00
blessedcoolant
7aa918677e Merge branch 'main' into feat/clip_skip 2023-07-07 16:21:53 +12:00
Lincoln Stein
54f3686e3b merge with main, fix conflicts 2023-07-06 15:21:45 -04:00
Lincoln Stein
e9352227f3 add merge api 2023-07-06 15:12:34 -04:00
Lincoln Stein
90c66aab3d merge with upstream 2023-07-06 13:17:02 -04:00
Lincoln Stein
3e925fbf34 model merging API ready for testing 2023-07-06 13:15:15 -04:00
Sergey Borisov
a9e77675a8 Move clip skip to separate node 2023-07-06 17:39:49 +03:00
Sergey Borisov
04b57c408f Add clip skip option to prompt node 2023-07-06 16:09:40 +03:00
psychedelicious
c21245f590 fix(api): make list models params queries, make path /, remove defaults
The list models route should just be the base route path, and should use query parameters as opposed to path parameters (which cannot be optional).

Removed defaults for the update model route; for the purposes of the API, we should always be explicit with this.
2023-07-06 15:34:50 +10:00
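Illustratively, the change moves the filters into query parameters on the collection root, so a client call might look like the following (parameter names are assumptions):

```python
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/models/",               # the base route path
    params={"base_model": "sd-1", "model_type": "main"},  # optional filters as query params
)
models = resp.json()
```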
Lincoln Stein
8f5fcb188c Merge branch 'main' into lstein/model-manager-router-api 2023-07-05 23:16:43 -04:00
Lincoln Stein
f7daa6e71d all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT 2023-07-05 23:13:01 -04:00
Lincoln Stein
83d3a043da merge latest changes from main 2023-07-05 19:15:53 -04:00
Lincoln Stein
5b6dd47b9f add API for model convert 2023-07-05 15:13:21 -04:00
Lincoln Stein
5027d0a603 accept @psychedelicious suggestions above 2023-07-05 14:50:57 -04:00
Lincoln Stein
021e1eca8e Merge branch 'main' into mps-fp16-fixes 2023-07-05 13:19:52 -04:00
Sergey Borisov
0ac9dca926 Fix loading diffusers ti 2023-07-05 19:46:00 +03:00
Lincoln Stein
9edf78dd2e merge with main 2023-07-05 09:12:54 -04:00
Lincoln Stein
6112197edf convert implemented; need router 2023-07-05 09:05:05 -04:00
gogurtenjoyer
ba7345deb4 Merge branch 'main' into mps-fp16-fixes 2023-07-05 07:38:41 -04:00
psychedelicious
56d4ea3252 fix(api): improve mm routes 2023-07-05 20:08:47 +10:00
psychedelicious
5d4d0e795c fix(mm): fix up mm service types 2023-07-05 20:07:10 +10:00
Eugene Brodsky
7170e82f73 expose max_cache_size in config 2023-07-05 02:44:15 -04:00
blessedcoolant
639d88afd6 revert: inference_mode to no_grad 2023-07-05 16:39:15 +12:00
blessedcoolant
c0501ed5c2 fix: Slow loading of Loras
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-07-05 12:47:34 +10:00
psychedelicious
08d428a5e7 feat(nodes): add lora field, update lora loader 2023-07-05 12:47:34 +10:00
gogurtenjoyer
233869b56a Mac MPS FP16 fixes
This PR allows FP16 precision to work on Macs with MPS. In addition, it centralizes the torch fixes/workarounds
required for MPS into a new backend utility file `mps_fixes.py`. This is conditionally imported in `api_app.py`/`cli_app.py`.

Many MANY thanks to StAlKeR7779 for patiently working to debug and fix these issues.
2023-07-04 18:10:53 -04:00
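A sketch of the conditional-import pattern described above; the module path is an assumption based on the file name given in the message:

```python
import torch

if torch.backends.mps.is_available():
    # Importing the module applies the torch patches needed for FP16 on MPS.
    import invokeai.backend.util.mps_fixes  # noqa: F401  # assumed module path
```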
Lincoln Stein
5d099f4a49 update_model working 2023-07-04 17:26:57 -04:00
Lincoln Stein
752b4d50cf model_delete method now working 2023-07-04 10:40:32 -04:00
Lincoln Stein
c1c49d9a76 import model returns 404 for invalid path, 409 for duplicate model 2023-07-04 10:08:10 -04:00
Lincoln Stein
96bf92ead4 add the import model router 2023-07-04 14:35:47 +10:00
blessedcoolant
7e18814dd0 Add standard names for Model Loader Nodes 2023-07-04 14:35:06 +10:00
Lincoln Stein
a8a2209560 VAE loader is loading the proper VAE. Unclear if it is changing the image 2023-07-04 14:35:06 +10:00
Lincoln Stein
fa8a5838d3 add vae loader 2023-07-04 14:35:06 +10:00
blessedcoolant
6c62f41f2e chore: Change PipelineModels to MainModels 2023-07-04 14:33:56 +10:00
Lincoln Stein
4d2c7806fc quash memory leak when compel invocation called 2023-07-03 14:12:35 -04:00
Lincoln Stein
3937428563 Merge branch 'release/invokeai-3-0-alpha' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-alpha 2023-07-03 14:11:28 -04:00
Lincoln Stein
27088610ed Merge branch 'main' into dev/fix-unit-tests 2023-07-03 12:38:42 -04:00
Lincoln Stein
d6de11bd56 resolve merge conflict 2023-07-03 12:19:11 -04:00
Lincoln Stein
ed86d0b708 Union[foo, None]=>Optional[foo] 2023-07-03 12:17:45 -04:00
Lincoln Stein
877b187a1b Merge branch 'lstein/restore-3.9-compatibility' into release/invokeai-3-0-alpha 2023-07-03 11:01:34 -04:00
Lincoln Stein
ac9ec4e75a restore 3.9 compatibility by replacing | with Union[] 2023-07-03 10:57:40 -04:00
Lincoln Stein
2465c7987b Revert "restore 3.9 compatibility by replacing | with Union[]"
This reverts commit 76bafeb99e.
2023-07-03 10:56:41 -04:00
Lincoln Stein
76bafeb99e restore 3.9 compatibility by replacing | with Union[] 2023-07-03 10:55:04 -04:00
psychedelicious
fa169b5517 feat(nodes): add ImageCollection node in prep for batch processing 2023-07-04 00:18:27 +10:00
Lincoln Stein
aae60b6142 quash memory leak when compel invocation called 2023-07-03 10:08:10 -04:00
Lincoln Stein
b79740d61d back out torch.no_grad() 2023-07-02 23:03:24 -04:00
Lincoln Stein
fa8ccd2a94 add no_grad() to compel node invoke() method 2023-07-02 18:20:16 -04:00