maryhipp
fba25792f9
add new route for app config
2023-07-13 23:26:52 +10:00
blessedcoolant
430b9c291f
fix: Loras not working correctly with Inpainting
2023-07-13 22:59:38 +12:00
psychedelicious
5f5c93abb4
feat(app): embed PNG info in invokeai_metadata and invokeai_graph
...
Using just `metadata` and `graph` feels a bit too generic.
2023-07-13 15:40:05 +10:00
psychedelicious
50bef87da7
feat(db,nodes,api): refactor metadata
...
Metadata for the Linear UI is now sneakily provided via a `MetadataAccumulator` node, which the client populates / hooks up while building the graph.
Additionally, we provide the unexpanded graph with the metadata API response.
Both of these are embedded into the PNGs.
- Remove `metadata` from `ImageDTO`
- Split up the `images/` routes to accommodate this (see the sketch after this list); metadata is only retrieved per-image
- `images/{image_name}` now gets the DTO
- `images/{image_name}/metadata` gets the new metadata
- `images/{image_name}/full` gets the full-sized image file
- Remove old metadata service
- Add `MetadataAccumulator` node, `CoreMetadataField`, hook up to `LatentsToImage` node
- Add `get_raw()` method to `ItemStorage`, retrieves the row from DB as a string, no pydantic parsing
- Update `images`-related services to handle storing and retrieving the new metadata
- Add `get_metadata_graph_from_raw_session` which extracts the `graph` from `session` without needing to hydrate the session in pydantic, in preparation for providing it as metadata; also removes all references to the `MetadataAccumulator` node
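For orientation, a minimal sketch of calling the three routes listed above, assuming a local InvokeAI server; the base URL, port, and image name below are placeholders, not part of this change.
```python
import requests

BASE = "http://127.0.0.1:9090/api/v1"  # assumed host/port
image_name = "example.png"             # placeholder

dto = requests.get(f"{BASE}/images/{image_name}").json()                # image DTO
metadata = requests.get(f"{BASE}/images/{image_name}/metadata").json()  # new metadata (incl. unexpanded graph)
full = requests.get(f"{BASE}/images/{image_name}/full")                 # full-sized image file
```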
2023-07-13 15:40:05 +10:00
Lincoln Stein
75c5ce46bc
merged SDXLModelLoader into ModelLoader invocation
2023-07-11 16:33:08 -04:00
Lincoln Stein
25591788c1
fix conflicts
2023-07-11 15:55:10 -04:00
Lincoln Stein
dab03fb646
rename gpu_mem_reserved to max_vram_cache_size
...
To be consistent with max_cache_size, the amount of memory to hold in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
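A hedged sketch of the budget check this parameter implies; the class, method, and default sizes below are illustrative assumptions, not the actual model cache code.
```python
class ModelCacheSketch:
    """Illustrative only: mirrors the two cache-size settings named above."""

    def __init__(self, max_cache_size: float = 6.0, max_vram_cache_size: float = 2.75):
        self.max_cache_size = max_cache_size            # GB of RAM for cached models
        self.max_vram_cache_size = max_vram_cache_size  # GB of VRAM to keep models resident in

    def fits_in_vram(self, model_size_gb: float, vram_in_use_gb: float) -> bool:
        # Keep a model on the GPU only while the VRAM cache budget allows it.
        return vram_in_use_gb + model_size_gb <= self.max_vram_cache_size
```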
2023-07-11 15:25:39 -04:00
Lincoln Stein
d32f9f7cb0
reverse logic of gpu_mem_reserved
...
- gpu_mem_reserved now indicates the amount of VRAM that will be reserved
for model caching (similar to max_cache_size).
2023-07-11 15:16:40 -04:00
Sergey Borisov
358ced6bab
SDXL Prompt and t2l nodes draft, add fp32 to vae decode
2023-07-11 18:19:36 +03:00
Lincoln Stein
f6302aa691
Merge branch 'main' into release/invokeai-3-0-beta
2023-07-11 10:57:36 -04:00
Lincoln Stein
23d8a2777e
add ability to filter list_models on list of base models
2023-07-10 21:59:32 -04:00
Lincoln Stein
8e42502dfd
partial implementation of SDXL model loader
2023-07-10 20:18:30 -04:00
Lincoln Stein
83ec4c983c
Merge branch 'main' into lstein/keep-models-in-vram
2023-07-10 18:47:05 -04:00
Lincoln Stein
c9c61ee459
Update invokeai/app/services/config.py
...
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-07-10 18:46:32 -04:00
psychedelicious
00e26ffa9a
fix(nodes): remove board_id column from images table
...
This is extraneous; the `board_images` table holds image-board relationships.
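A hypothetical sqlite3 sketch of that relationship, showing why `images` needs no `board_id` column; the column names are assumptions, not the actual schema.
```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE images (image_name TEXT PRIMARY KEY);
    CREATE TABLE boards (board_id TEXT PRIMARY KEY);
    -- The join table alone records which image belongs to which board.
    CREATE TABLE board_images (
        board_id   TEXT NOT NULL REFERENCES boards(board_id),
        image_name TEXT NOT NULL REFERENCES images(image_name),
        PRIMARY KEY (board_id, image_name)
    );
    """
)
```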
2023-07-10 11:30:35 +10:00
Lincoln Stein
5759a390f9
introduce gpu_mem_reserved configuration parameter
2023-07-09 18:35:04 -04:00
Lincoln Stein
2f3190ad6c
merge with main
2023-07-09 13:28:05 -04:00
psychedelicious
c5539b442c
feat(api): set max-age for images
...
Image files are immutable and we expect deletion to result in no further requests for a given image, so we can set the max-age to a very long value.
Resolves #3426
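A minimal sketch of serving an immutable image file with a long max-age, assuming FastAPI/Starlette; the route path, directory, and one-year figure are illustrative, not the exact values in this change.
```python
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()
IMAGES_ROOT = "outputs/images"  # placeholder

@app.get("/api/v1/images/{image_name}/full")
def get_full_image(image_name: str) -> FileResponse:
    # Image files never change, so clients may cache them aggressively.
    return FileResponse(
        f"{IMAGES_ROOT}/{image_name}",
        headers={"Cache-Control": "max-age=31536000, immutable"},
    )
```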
2023-07-09 22:42:05 +10:00
psychedelicious
4640969037
fix(api): fix for borked windows mimetypes registry
...
It's possible for the Windows registry's mimetype entry for .js files to be changed, causing content-type errors when running the app.
Explicitly set the mimetypes to rectify this. Note that the root cause is a misconfiguration on the client machine - not our end.
See https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
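A small sketch of the workaround using the standard-library `mimetypes` module; the exact call site and the full set of types registered in the app are not shown here.
```python
import mimetypes

# Override whatever (possibly broken) type the Windows registry reports,
# so the web UI's assets are always served with the correct content-type.
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
```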
2023-07-09 11:05:01 +10:00
Lincoln Stein
b6cc4df1d8
report processing stack traces to the console
2023-07-08 19:48:32 -04:00
Lincoln Stein
5f7435955e
if models.yaml doesn't exist, rebuild it
2023-07-08 15:13:51 -04:00
Lincoln Stein
92d4486214
don't write 'version:' to the invokeai.yaml file
2023-07-08 12:06:23 -04:00
Lincoln Stein
bd9786564c
merge with main
2023-07-08 10:11:25 -04:00
Lincoln Stein
f6ecee926f
version of _find_root() that works in conda environment
2023-07-08 09:02:17 -04:00
Lincoln Stein
454c2c0952
version of _find_root() that works in conda environment
2023-07-08 09:01:05 -04:00
blessedcoolant
3333805821
feat: Add App Version to UI
2023-07-08 21:31:17 +12:00
Lincoln Stein
8e8f9cce0f
print version when --version provided at command line
2023-07-07 20:47:29 -04:00
Lincoln Stein
10d3bccf32
Mac MPS FP16 fixes (#3641)
...
This PR is to allow FP16 precision to work on Macs with MPS. In
addition, it centralizes the torch fixes/workarounds required for MPS
into a new backend utility `mps_fixes.py`. This is conditionally
imported in `api_app.py`/`cli_app.py`.
Many MANY thanks to @StAlKeR7779 for patiently working to debug and fix
these issues.
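A hedged sketch of such a conditional import; the guard and the module path below are assumptions, not the exact code in `api_app.py`/`cli_app.py`.
```python
import torch

if torch.backends.mps.is_available():
    # Apply the MPS-specific torch workarounds only on Apple-silicon Macs.
    import invokeai.backend.util.mps_fixes  # noqa: F401  # assumed module path
```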
2023-07-07 17:43:23 -04:00
Eugene Brodsky
97b2ec58e2
Merge branch 'main' into release/invokeai-3-0-alpha
2023-07-07 14:18:12 -04:00
Lincoln Stein
9f58ed35cf
improve user migration experience
...
- No longer fail root directory probing if invokeai.yaml is missing
(the test is now whether a `models/core` directory exists; see the sketch after this list).
- Migrate script does not overwrite previously-installed models.
- Can run migrate script on an existing 2.3 version directory
with --from and --to pointing to same 2.3 root.
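A minimal sketch of the relaxed root-directory probe described in the first point, assuming pathlib; the real _find_root() logic is more involved.
```python
from pathlib import Path

def looks_like_invokeai_root(candidate: Path) -> bool:
    # A directory qualifies as a root if it contains models/core,
    # even when invokeai.yaml is missing (e.g. mid-migration from 2.3).
    return (candidate / "models" / "core").is_dir()
```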
2023-07-07 08:18:46 -04:00
blessedcoolant
7aa918677e
Merge branch 'main' into feat/clip_skip
2023-07-07 16:21:53 +12:00
Lincoln Stein
54f3686e3b
merge with main, fix conflicts
2023-07-06 15:21:45 -04:00
Lincoln Stein
e9352227f3
add merge api
2023-07-06 15:12:34 -04:00
Lincoln Stein
90c66aab3d
merge with upstream
2023-07-06 13:17:02 -04:00
Lincoln Stein
3e925fbf34
model merging API ready for testing
2023-07-06 13:15:15 -04:00
Sergey Borisov
a9e77675a8
Move clip skip to separate node
2023-07-06 17:39:49 +03:00
Sergey Borisov
04b57c408f
Add clip skip option to prompt node
2023-07-06 16:09:40 +03:00
psychedelicious
c21245f590
fix(api): make list models params queries, make path /, remove defaults
...
The list models route should just be the base route path, and should use query parameters as opposed to path parameters (which cannot be optional)
Removed defaults for update model route - for the purposes of the API, we should always be explicit with this
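A hedged sketch of a list-models route driven by query parameters rather than path parameters; the parameter names and response shape below are illustrative.
```python
from typing import Optional
from fastapi import APIRouter

router = APIRouter(prefix="/api/v1/models")

@router.get("/")
async def list_models(base_model: Optional[str] = None, model_type: Optional[str] = None):
    # Query parameters can simply be omitted; path parameters cannot be optional.
    return {"base_model": base_model, "model_type": model_type}
```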
2023-07-06 15:34:50 +10:00
Lincoln Stein
8f5fcb188c
Merge branch 'main' into lstein/model-manager-router-api
2023-07-05 23:16:43 -04:00
Lincoln Stein
f7daa6e71d
all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT
2023-07-05 23:13:01 -04:00
Lincoln Stein
83d3a043da
merge latest changes from main
2023-07-05 19:15:53 -04:00
Lincoln Stein
5b6dd47b9f
add API for model convert
2023-07-05 15:13:21 -04:00
Lincoln Stein
5027d0a603
accept @psychedelicious suggestions above
2023-07-05 14:50:57 -04:00
Lincoln Stein
021e1eca8e
Merge branch 'main' into mps-fp16-fixes
2023-07-05 13:19:52 -04:00
Sergey Borisov
0ac9dca926
Fix loading diffusers ti
2023-07-05 19:46:00 +03:00
Lincoln Stein
9edf78dd2e
merge with main
2023-07-05 09:12:54 -04:00
Lincoln Stein
6112197edf
convert implemented; need router
2023-07-05 09:05:05 -04:00
gogurtenjoyer
ba7345deb4
Merge branch 'main' into mps-fp16-fixes
2023-07-05 07:38:41 -04:00
psychedelicious
56d4ea3252
fix(api): improve mm routes
2023-07-05 20:08:47 +10:00
psychedelicious
5d4d0e795c
fix(mm): fix up mm service types
2023-07-05 20:07:10 +10:00
Eugene Brodsky
7170e82f73
expose max_cache_size in config
2023-07-05 02:44:15 -04:00
blessedcoolant
639d88afd6
revert: inference_mode to no_grad
2023-07-05 16:39:15 +12:00
blessedcoolant
c0501ed5c2
fix: Slow loading of Loras
...
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-07-05 12:47:34 +10:00
psychedelicious
08d428a5e7
feat(nodes): add lora field, update lora loader
2023-07-05 12:47:34 +10:00
gogurtenjoyer
233869b56a
Mac MPS FP16 fixes
...
This PR is to allow FP16 precision to work on Macs with MPS. In addition, it centralizes the torch fixes/workarounds
required for MPS into a new backend utility file `mps_fixes.py`. This is conditionally imported in `api_app.py`/`cli_app.py`.
Many MANY thanks to StAlKeR7779 for patiently working to debug and fix these issues.
2023-07-04 18:10:53 -04:00
Lincoln Stein
5d099f4a49
update_model working
2023-07-04 17:26:57 -04:00
Lincoln Stein
752b4d50cf
model_delete method now working
2023-07-04 10:40:32 -04:00
Lincoln Stein
c1c49d9a76
import model returns 404 for invalid path, 409 for duplicate model
2023-07-04 10:08:10 -04:00
Lincoln Stein
96bf92ead4
add the import model router
2023-07-04 14:35:47 +10:00
blessedcoolant
7e18814dd0
Add standard names for Model Loader Nodes
2023-07-04 14:35:06 +10:00
Lincoln Stein
a8a2209560
VAE loader is loading the proper VAE. Unclear if it is changing the image
2023-07-04 14:35:06 +10:00
Lincoln Stein
fa8a5838d3
add vae loader
2023-07-04 14:35:06 +10:00
blessedcoolant
6c62f41f2e
chore: Change PipelineModels to MainModels
2023-07-04 14:33:56 +10:00
Lincoln Stein
4d2c7806fc
quash memory leak when compel invocation called
2023-07-03 14:12:35 -04:00
Lincoln Stein
3937428563
Merge branch 'release/invokeai-3-0-alpha' of github.com:invoke-ai/InvokeAI into release/invokeai-3-0-alpha
2023-07-03 14:11:28 -04:00
Lincoln Stein
27088610ed
Merge branch 'main' into dev/fix-unit-tests
2023-07-03 12:38:42 -04:00
Lincoln Stein
d6de11bd56
resolve merge conflict
2023-07-03 12:19:11 -04:00
Lincoln Stein
ed86d0b708
Union[foo, None]=>Optional[foo]
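A short illustration of the equivalent spellings involved in this and the 3.9-compatibility commits below; the function is a throwaway example.
```python
from typing import Optional, Union

def get_label_310(model_name: str | None) -> str:    # PEP 604 union, Python 3.10+ only
    return model_name or "unknown"

def get_label_39(model_name: Optional[str]) -> str:  # equivalent, works on Python 3.9
    return model_name or "unknown"

assert Optional[str] == Union[str, None]
```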
2023-07-03 12:17:45 -04:00
Lincoln Stein
877b187a1b
Merge branch 'lstein/restore-3.9-compatibility' into release/invokeai-3-0-alpha
2023-07-03 11:01:34 -04:00
Lincoln Stein
ac9ec4e75a
restore 3.9 compatibility by replacing | with Union[]
2023-07-03 10:57:40 -04:00
Lincoln Stein
2465c7987b
Revert "restore 3.9 compatibility by replacing | with Union[]"
...
This reverts commit 76bafeb99e.
2023-07-03 10:56:41 -04:00
Lincoln Stein
76bafeb99e
restore 3.9 compatibility by replacing | with Union[]
2023-07-03 10:55:04 -04:00
psychedelicious
fa169b5517
feat(nodes): add ImageCollection node in prep for batch processing
2023-07-04 00:18:27 +10:00
Lincoln Stein
aae60b6142
quash memory leak when compel invocation called
2023-07-03 10:08:10 -04:00
Lincoln Stein
b79740d61d
back out torch.no_grad()
2023-07-02 23:03:24 -04:00
Lincoln Stein
fa8ccd2a94
add no_grad() to compel node invoke() method
2023-07-02 18:20:16 -04:00
Lincoln Stein
1cf61feead
print GPU device at startup
2023-07-01 20:47:11 -04:00
psychedelicious
c00aea7a6c
tests(nodes): fix nodes tests
2023-06-29 23:11:48 +10:00
Lincoln Stein
2c5b050d82
add image board support to invokeai-node-cli
2023-06-29 22:12:34 +10:00
maryhipp
45935caf1d
fix query
2023-06-29 21:14:53 +10:00
Mary Hipp
73f2092ec5
(api) add option to board delete route and logic to services
2023-06-29 21:14:53 +10:00
StAlKeR7779
ac46b129bf
Merge branch 'main' into feat/lora_model_patch
2023-06-28 22:43:58 +03:00
Lincoln Stein
72209d0cc3
Merge branch 'main' into lstein/installer-for-new-model-layout
2023-06-28 14:49:37 -04:00
blessedcoolant
75614bbba3
Merge branch 'main' into feat/nodes/cpu-noise
2023-06-28 18:22:08 +12:00
blessedcoolant
32883adf6e
Merge branch 'main' into feat/controlnet_extras
2023-06-28 17:36:21 +12:00
Kent Keirsey
fc322aa9f7
Update controlnet-aux to 0.0.6 and add LeReS
2023-06-27 23:45:47 -04:00
psychedelicious
73f63853ba
fix(nodes): use context for logger in param_easing
2023-06-27 23:30:10 -04:00
Lincoln Stein
e8ed0fad6c
autoimport from embedding/controlnet/lora folders designated in startup file
2023-06-27 12:30:53 -04:00
psychedelicious
2e14528e4c
feat(nodes): default to CPU noise
2023-06-27 13:57:31 +10:00
user1
10d2d85c83
Started to add ControlNet resize_crop and resize_fill options, but they are commented out and not ready to deploy yet.
2023-06-26 12:03:05 -07:00
user1
af566adf56
For the MediapipeFace ControlNet preprocessor, if the input image is in RGBA format, convert it to RGB (otherwise MediapipeFace image processing throws an error)
2023-06-26 04:29:43 -07:00
user1
873c18bc4b
Added TileResampler ControlNet preprocessor node.
...
Also fixes to SegmentAnything ControlNet preprocessor node.
2023-06-26 04:27:26 -07:00
Eugene Brodsky
7b97639961
Merge branch 'main' into lstein/installer-for-new-model-layout
2023-06-26 01:24:30 -04:00
Sergey Borisov
5cebf67ee4
Apply lora by patching lora instead of hooks
2023-06-26 03:57:33 +03:00
Lincoln Stein
160b5d7992
add support for an autoimport models directory scanned at startup time
2023-06-25 18:50:15 -04:00
user1
10e8389fa4
Commenting out LeReS ControlNet image preprocessor until release of controlnet_aux v0.0.6 (supported on controlnet_aux current main, but not on latest release v0.0.5)
2023-06-25 14:25:14 -07:00
user1
414a04774c
Added LeReS ControlNet image preprocessor.
2023-06-25 14:19:55 -07:00
Lincoln Stein
60b37b7ff4
fix model manager documentation
2023-06-25 16:04:43 -04:00
user1
b872e7a5e0
Simplifying ControlNet SAM preprocessor segmentation color mapping.
2023-06-25 12:54:48 -07:00
user1
de4064bdac
Fixed problem with non-reproducible results from the ControlNet SegmentAnything preprocessor. The cause was controlnet_aux's randomization of segmentation coloring, which seems to lead to some randomization of resulting images when using the ControlNet seg model. Switched to using the deterministic ADE20K color palette instead, which solved the problem.
2023-06-25 12:38:17 -07:00