Commit Graph

325 Commits

Author SHA1 Message Date
Sergey Borisov
fbbc4b3f69 Fixes 2023-07-18 16:51:16 +03:00
Sergey Borisov
bc11296a5e Disable lazy offloading when the VRAM cache is disabled, move resulting tensors to CPU (to avoid stacking VRAM tensors in the cache), fix: text encoder not freed (detach) 2023-07-18 16:20:25 +03:00
Sergey Borisov
ada9b06e48 Implement compel prompt nodes for sdxl 2023-07-18 01:49:45 +03:00
blessedcoolant
e039771d07 fix: Incorrect type on SDXL Model Loader 2023-07-17 21:47:41 +12:00
Lincoln Stein
cad3f96831 add model input to refiner 2023-07-16 12:38:04 -04:00
Lincoln Stein
6534288b75 refiner only has clip2 not clip 2023-07-16 12:36:38 -04:00
Lincoln Stein
0a2964d8c0 add differentiated sdxl and sdxl_refiner model loaders 2023-07-16 12:17:56 -04:00
Sergey Borisov
fe78a08e37 Fix sd1/2 models conditionings 2023-07-16 06:24:24 +03:00
Sergey Borisov
c9c2229917 Separate prompt to sdxl and sdxl-refiner, add denoising start-end fields, add l2l node (supports both sdxl and sdxl-refiner), add fp32 to vae encode 2023-07-16 06:00:37 +03:00
Lincoln Stein
ccbfa5d862 resolve conflicts 2023-07-15 19:47:50 -04:00
psychedelicious
d270f21c85 feat(nodes): valid controlnet weights are -1 to 2 2023-07-15 19:56:44 +10:00
psychedelicious
ae72f372be fix(nodes): do not use hardcoded controlnet model 2023-07-15 19:56:44 +10:00
psychedelicious
29b2e59e65 fix(nodes): fix ref to ctx mgr service, missing import 2023-07-15 19:56:44 +10:00
psychedelicious
82fa39b531 feat(nodes): add controlnet nodes type hint 2023-07-15 19:56:44 +10:00
psychedelicious
788dcbde70 fix(nodes): add missing import 2023-07-15 19:56:44 +10:00
Sergey Borisov
6ab9a5e108 Draft 2023-07-15 19:56:44 +10:00
Sergey Borisov
7093e5d033 Pad conditionings using zeros and encoder_attention_mask 2023-07-15 00:52:54 +03:00
Jonathan
9960d7ca2a Allow ImageResizeInvocation w/h to take inputs from other nodes (#3765) 2023-07-15 05:34:13 +12:00
psychedelicious
23c1a6b9d5 fix(nodes): make ResizeLatents w/h optional
Now you can connect to them in the node editor.
2023-07-14 21:42:42 +10:00
Sergey Borisov
9fb0b0959f Make sde and ancestral schedulers reproducible 2023-07-14 05:25:09 +03:00
Sergey Borisov
d8f88c09ea Fix pink output on a lot of samplers 2023-07-14 05:00:33 +03:00
blessedcoolant
430b9c291f fix: Loras not working correctly with Inpainting 2023-07-13 22:59:38 +12:00
psychedelicious
50bef87da7 feat(db,nodes,api): refactor metadata
Metadata for the Linear UI is now sneakily provided via a `MetadataAccumulator` node, which the client populates / hooks up while building the graph.

Additionally, we provide the unexpanded graph with the metadata API response.

Both of these are embedded into the PNGs.

- Remove `metadata` from `ImageDTO`
- Split up the `images/` routes to accommodate this; metadata is only retrieved per-image
- `images/{image_name}` now gets the DTO
- `images/{image_name}/metadata` gets the new metadata
- `images/{image_name}/full` gets the full-sized image file
- Remove old metadata service
- Add `MetadataAccumulator` node, `CoreMetadataField`, hook up to `LatentsToImage` node
- Add `get_raw()` method to `ItemStorage`, retrieves the row from DB as a string, no pydantic parsing
- Update `images`-related services to handle storing and retrieving the new metadata
- Add `get_metadata_graph_from_raw_session` which extracts the `graph` from `session` without needing to hydrate the session in pydantic, in preparation for providing it as metadata; also removes all references to the `MetadataAccumulator` node
2023-07-13 15:40:05 +10:00
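The route split described in the commit above can be exercised directly from a client. A minimal sketch, assuming a locally running API at `http://localhost:9090/api/v1` (the base URL and image name are assumptions; only the per-image route shapes come from the commit message):

```python
import requests

# Assumed base URL; the three route shapes follow the commit message above.
BASE = "http://localhost:9090/api/v1"
image_name = "example.png"  # hypothetical image name

dto = requests.get(f"{BASE}/images/{image_name}").json()            # image DTO (no embedded metadata)
meta = requests.get(f"{BASE}/images/{image_name}/metadata").json()  # core metadata plus unexpanded graph
full = requests.get(f"{BASE}/images/{image_name}/full").content     # full-sized image file bytes
```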
Lincoln Stein
75c5ce46bc merged SDXLModelLoader into ModelLoader invocation 2023-07-11 16:33:08 -04:00
Sergey Borisov
358ced6bab SDXL Prompt and t2l nodes draft, add fp32 to vae decode 2023-07-11 18:19:36 +03:00
Lincoln Stein
8e42502dfd partial implementation of SDXL model loader 2023-07-10 20:18:30 -04:00
Lincoln Stein
10d3bccf32 Mac MPS FP16 fixes (#3641)
This PR is to allow FP16 precision to work on Macs with MPS. In
addition, it centralizes the torch fixes/workarounds required for MPS
into a new backend utility `mps_fixes.py`. This is conditionally
imported in `api_app.py`/`cli_app.py`.

Many MANY thanks to @StAlKeR7779 for patiently working to debug and fix
these issues.
2023-07-07 17:43:23 -04:00
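A minimal sketch of the conditional-import pattern the PR describes, assuming an MPS availability check as the guard and an illustrative module path (only the idea of importing `mps_fixes.py` from the app entry points comes from the description above):

```python
import torch

# Apply the centralized MPS workarounds only when an MPS device is actually present.
# The module path below is illustrative; the real one lives in the InvokeAI backend.
if torch.backends.mps.is_available():
    import invokeai.backend.util.mps_fixes  # noqa: F401
```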
Eugene Brodsky
97b2ec58e2 Merge branch 'main' into release/invokeai-3-0-alpha 2023-07-07 14:18:12 -04:00
Sergey Borisov
a9e77675a8 Move clip skip to separate node 2023-07-06 17:39:49 +03:00
Sergey Borisov
04b57c408f Add clip skip option to prompt node 2023-07-06 16:09:40 +03:00
Lincoln Stein
83d3a043da merge latest changes from main 2023-07-05 19:15:53 -04:00
Lincoln Stein
021e1eca8e Merge branch 'main' into mps-fp16-fixes 2023-07-05 13:19:52 -04:00
Sergey Borisov
0ac9dca926 Fix loading diffusers ti 2023-07-05 19:46:00 +03:00
Lincoln Stein
9edf78dd2e merge with main 2023-07-05 09:12:54 -04:00
gogurtenjoyer
ba7345deb4 Merge branch 'main' into mps-fp16-fixes 2023-07-05 07:38:41 -04:00
blessedcoolant
639d88afd6 revert: inference_mode to no_grad 2023-07-05 16:39:15 +12:00
blessedcoolant
c0501ed5c2 fix: Slow loading of Loras
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-07-05 12:47:34 +10:00
psychedelicious
08d428a5e7 feat(nodes): add lora field, update lora loader 2023-07-05 12:47:34 +10:00
gogurtenjoyer
233869b56a Mac MPS FP16 fixes
This PR is to allow FP16 precision to work on Macs with MPS. In addition, it centralizes the torch fixes/workarounds
required for MPS into a new backend utility file `mps_fixes.py`. This is conditionally imported in `api_app.py`/`cli_app.py`.

Many MANY thanks to StAlKeR7779 for patiently working to debug and fix these issues.
2023-07-04 18:10:53 -04:00
blessedcoolant
7e18814dd0 Add standard names for Model Loader Nodes 2023-07-04 14:35:06 +10:00
Lincoln Stein
a8a2209560 VAE loader is loading the proper VAE. Unclear if it is changing the image 2023-07-04 14:35:06 +10:00
Lincoln Stein
fa8a5838d3 add vae loader 2023-07-04 14:35:06 +10:00
blessedcoolant
6c62f41f2e chore: Change PipelineModels to MainModels 2023-07-04 14:33:56 +10:00
Lincoln Stein
4d2c7806fc quash memory leak when compel invocation called 2023-07-03 14:12:35 -04:00
Lincoln Stein
3937428563 Merge branch 'release/invokeai-3-0-alpha' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-alpha 2023-07-03 14:11:28 -04:00
Lincoln Stein
ed86d0b708 Union[foo, None]=>Optional[foo] 2023-07-03 12:17:45 -04:00
psychedelicious
fa169b5517 feat(nodes): add ImageCollection node in prep for batch processing 2023-07-04 00:18:27 +10:00
Lincoln Stein
aae60b6142 quash memory leak when compel invocation called 2023-07-03 10:08:10 -04:00
Lincoln Stein
b79740d61d back out torch.no_grad() 2023-07-02 23:03:24 -04:00
Lincoln Stein
fa8ccd2a94 add no_grad() to compel node invoke() method 2023-07-02 18:20:16 -04:00
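The last few commits (together with the later `inference_mode`/`no_grad` revert above) toggle gradient tracking around prompt encoding so the encoded conditionings do not keep the autograd graph alive. A minimal sketch of the decorator form of that pattern, using a hypothetical stand-in for the compel node:

```python
import torch

class CompelInvocation:  # hypothetical stand-in for the compel prompt node
    @torch.no_grad()  # disable gradient tracking so conditionings don't retain the autograd graph
    def invoke(self, context=None):
        # Placeholder for the real text-encoder call; returns a detached conditioning tensor.
        return torch.randn(1, 77, 768)
```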