Commit Graph

568 Commits

Author SHA1 Message Date
Brandon Rising
91112167b1 Fix syntax err 2023-07-15 23:56:48 -04:00
Brandon Rising
bd7b59910d Testing onnx in new ui updates 2023-07-14 14:24:15 -04:00
Brandon Rising
524888bf3b Merge branch 'main' into feat/onnx 2023-07-13 14:23:57 -04:00
psychedelicious
4d25d702a1 feat(api): app/config route returns available infill methods 2023-07-13 23:26:52 +10:00
maryhipp
fba25792f9 add new route for app config 2023-07-13 23:26:52 +10:00
blessedcoolant
430b9c291f fix: Loras not working correctly with Inpainting 2023-07-13 22:59:38 +12:00
psychedelicious
5f5c93abb4 feat(app): embed PNG info in invokeai_metadata and invokeai_graph
Using just `metadata` and `graph` feels a bit too generic.
2023-07-13 15:40:05 +10:00
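
A minimal sketch (assuming Pillow and a hypothetical `output.png`) of reading those embedded keys back out of a generated image:

```python
# Hypothetical sketch: read the embedded PNG text chunks back out.
# Key names follow the commit above; the file name is an assumption.
from PIL import Image

with Image.open("output.png") as im:
    # PNG text chunks are exposed via the .text mapping on PngImageFile
    metadata_json = im.text.get("invokeai_metadata")
    graph_json = im.text.get("invokeai_graph")

print(metadata_json)
print(graph_json)
```
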
psychedelicious
50bef87da7 feat(db,nodes,api): refactor metadata
Metadata for the Linear UI is now sneakily provided via a `MetadataAccumulator` node, which the client populates / hooks up while building the graph.

Additionally, we provide the unexpanded graph with the metadata API response.

Both of these are embedded into the PNGs.

- Remove `metadata` from `ImageDTO`
- Split up the `images/` routes to accommodate this; metadata is only retrieved per-image
- `images/{image_name}` now gets the DTO
- `images/{image_name}/metadata` gets the new metadata
- `images/{image_name}/full` gets the full-sized image file
- Remove old metadata service
- Add `MetadataAccumulator` node, `CoreMetadataField`, hook up to `LatentsToImage` node
- Add `get_raw()` method to `ItemStorage`, retrieves the row from DB as a string, no pydantic parsing
- Update `images`-related services to handle storing and retrieving the new metadata
- Add `get_metadata_graph_from_raw_session` which extracts the `graph` from `session` without needing to hydrate the session in pydantic, in preparation for providing it as metadata; also removes all references to the `MetadataAccumulator` node
2023-07-13 15:40:05 +10:00
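
A minimal sketch of the route split described in the commit above, assuming a local server and a hypothetical base path and image name:

```python
# Hedged sketch only: host, base path, and image name are assumptions.
import requests

BASE = "http://localhost:9090/api/v1"
name = "some-image.png"  # hypothetical image name

dto = requests.get(f"{BASE}/images/{name}").json()                # image DTO (no metadata)
metadata = requests.get(f"{BASE}/images/{name}/metadata").json()  # new metadata payload
full_png = requests.get(f"{BASE}/images/{name}/full").content     # full-sized image file
```
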
Lincoln Stein
25591788c1 fix conflicts 2023-07-11 15:55:10 -04:00
Lincoln Stein
dab03fb646 rename gpu_mem_reserved to max_vram_cache_size
To be consistent with max_cache_size, the amount of memory to hold in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
2023-07-11 15:25:39 -04:00
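
A conceptual sketch of what the parameter controls, assuming a GB-valued setting like `max_cache_size`; all names other than the parameter itself are hypothetical:

```python
# Conceptual sketch only: how a model cache might consult max_vram_cache_size.
import torch

max_vram_cache_size = 2.75  # GB of VRAM to hold for cached model parts (assumed value)

def vram_in_use_gb() -> float:
    if torch.cuda.is_available():
        return torch.cuda.memory_allocated() / 1e9
    return 0.0

def should_offload_to_ram() -> bool:
    # Offload cached models back to system RAM once the VRAM budget is exceeded.
    return vram_in_use_gb() > max_vram_cache_size
```
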
Lincoln Stein
d32f9f7cb0 reverse logic of gpu_mem_reserved
- gpu_mem_reserved now indicates the amount of VRAM that will be reserved
  for model caching (similar to max_cache_size).
2023-07-11 15:16:40 -04:00
Lincoln Stein
f6302aa691 Merge branch 'main' into release/invokeai-3-0-beta 2023-07-11 10:57:36 -04:00
Lincoln Stein
83ec4c983c Merge branch 'main' into lstein/keep-models-in-vram 2023-07-10 18:47:05 -04:00
Lincoln Stein
c9c61ee459 Update invokeai/app/services/config.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-07-10 18:46:32 -04:00
psychedelicious
00e26ffa9a fix(nodes): remove board_id column from images table
This is extraneous; the `board_images` table holds image-board relationships.
2023-07-10 11:30:35 +10:00
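
A hypothetical sketch of resolving an image's board via the `board_images` join table; the database path and column names are assumptions, not the actual schema:

```python
import sqlite3

conn = sqlite3.connect("invokeai.db")  # hypothetical database path
row = conn.execute(
    """
    SELECT board_id
    FROM board_images
    WHERE image_name = ?
    """,
    ("some-image.png",),  # hypothetical image name
).fetchone()
board_id = row[0] if row else None
```
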
Lincoln Stein
5759a390f9 introduce gpu_mem_reserved configuration parameter 2023-07-09 18:35:04 -04:00
Lincoln Stein
2f3190ad6c merge with main 2023-07-09 13:28:05 -04:00
psychedelicious
4640969037 fix(api): fix for borked windows mimetypes registry
It's possible for the Windows mimetype registrations for JS to be changed, causing content-type errors when running the app.

Explicitly setting the mimetypes rectifies this. Note that the root cause is a misconfiguration on the client machine - not on our end.

See https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
2023-07-09 11:05:01 +10:00
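
A minimal sketch of the workaround, assuming it runs before the web app mounts its static files: register the JS/CSS mimetypes explicitly instead of trusting the Windows registry.

```python
import mimetypes

# Override whatever the (possibly borked) OS registry says for these extensions.
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
```
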
Lincoln Stein
b6cc4df1d8 report processing stack traces to the console 2023-07-08 19:48:32 -04:00
Lincoln Stein
5f7435955e if models.yaml doesn't exist, rebuild it 2023-07-08 15:13:51 -04:00
Lincoln Stein
92d4486214 don't write 'version:' to the invokeai.yaml file 2023-07-08 12:06:23 -04:00
Lincoln Stein
bd9786564c merge with main 2023-07-08 10:11:25 -04:00
Lincoln Stein
f6ecee926f version of _find_root() that works in conda environment 2023-07-08 09:02:17 -04:00
Lincoln Stein
454c2c0952 version of _find_root() that works in conda environment 2023-07-08 09:01:05 -04:00
blessedcoolant
3333805821 feat: Add App Version to UI 2023-07-08 21:31:17 +12:00
Lincoln Stein
8e8f9cce0f print version when --version provided at command line 2023-07-07 20:47:29 -04:00
Lincoln Stein
10d3bccf32 Mac MPS FP16 fixes (#3641)
This PR is to allow FP16 precision to work on Macs with MPS. In
addition, it centralizes the torch fixes/workarounds required for MPS
into a new backend utility `mps_fixes.py`. This is conditionally
imported in `api_app.py`/`cli_app.py`.

Many MANY thanks to @StAlKeR7779 for patiently working to debug and fix
these issues.
2023-07-07 17:43:23 -04:00
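
A sketch of the conditional import described above; the exact guard and module path are assumptions:

```python
import torch

if torch.backends.mps.is_available():
    # Apply the MPS-specific torch workarounds before any models are loaded.
    import invokeai.backend.util.mps_fixes  # noqa: F401  (assumed module path)
```
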
Eugene Brodsky
97b2ec58e2 Merge branch 'main' into release/invokeai-3-0-alpha 2023-07-07 14:18:12 -04:00
Lincoln Stein
9f58ed35cf improve user migration experience
- No longer fail root directory probing if invokeai.yaml is missing
  (test is now whether a `models/core` directory exists).
- Migrate script does not overwrite previously-installed models.
- Can run migrate script on an existing 2.3 version directory
  with --from and --to pointing to same 2.3 root.
2023-07-07 08:18:46 -04:00
blessedcoolant
7aa918677e Merge branch 'main' into feat/clip_skip 2023-07-07 16:21:53 +12:00
Lincoln Stein
54f3686e3b merge with main, fix conflicts 2023-07-06 15:21:45 -04:00
Lincoln Stein
e9352227f3 add merge api 2023-07-06 15:12:34 -04:00
Lincoln Stein
90c66aab3d merge with upstream 2023-07-06 13:17:02 -04:00
Lincoln Stein
3e925fbf34 model merging API ready for testing 2023-07-06 13:15:15 -04:00
Sergey Borisov
a9e77675a8 Move clip skip to separate node 2023-07-06 17:39:49 +03:00
Sergey Borisov
04b57c408f Add clip skip option to prompt node 2023-07-06 16:09:40 +03:00
psychedelicious
c21245f590 fix(api): make list models params querys, make path /, remove defaults
The list models route should just be the base route path, and it should use query parameters rather than path parameters (which cannot be optional).

Removed defaults for the update model route - for the purposes of the API, we should always be explicit about these.
2023-07-06 15:34:50 +10:00
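
A hedged sketch of the reshaped list-models route: the list lives at the router's base path and filters arrive as optional query parameters (router prefix and parameter names are illustrative, not the actual API):

```python
from typing import Optional
from fastapi import APIRouter, Query

models_router = APIRouter(prefix="/api/v1/models", tags=["models"])

@models_router.get("/")
async def list_models(
    base_model: Optional[str] = Query(default=None, description="Filter by base model"),
    model_type: Optional[str] = Query(default=None, description="Filter by model type"),
):
    # ...query the model manager with the optional filters...
    return {"models": []}
```
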
Lincoln Stein
8f5fcb188c Merge branch 'main' into lstein/model-manager-router-api 2023-07-05 23:16:43 -04:00
Lincoln Stein
f7daa6e71d all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT 2023-07-05 23:13:01 -04:00
Lincoln Stein
83d3a043da merge latest changes from main 2023-07-05 19:15:53 -04:00
Lincoln Stein
5b6dd47b9f add API for model convert 2023-07-05 15:13:21 -04:00
Lincoln Stein
5027d0a603 accept @psychedelicious suggestions above 2023-07-05 14:50:57 -04:00
Lincoln Stein
021e1eca8e Merge branch 'main' into mps-fp16-fixes 2023-07-05 13:19:52 -04:00
Sergey Borisov
0ac9dca926 Fix loading diffusers ti 2023-07-05 19:46:00 +03:00
Lincoln Stein
9edf78dd2e merge with main 2023-07-05 09:12:54 -04:00
Lincoln Stein
6112197edf convert implemented; need router 2023-07-05 09:05:05 -04:00
gogurtenjoyer
ba7345deb4 Merge branch 'main' into mps-fp16-fixes 2023-07-05 07:38:41 -04:00
psychedelicious
56d4ea3252 fix(api): improve mm routes 2023-07-05 20:08:47 +10:00
psychedelicious
5d4d0e795c fix(mm): fix up mm service types 2023-07-05 20:07:10 +10:00
Eugene Brodsky
7170e82f73 expose max_cache_size in config 2023-07-05 02:44:15 -04:00