Commit Graph

253 Commits

Author SHA1 Message Date
psychedelicious
3e2a948007
Merge branch 'main' into feat/model-events 2023-07-17 17:36:20 +10:00
Kent Keirsey
675a92401c
Merge branch 'main' into lstein/default-model-install 2023-07-16 19:32:59 -04:00
Lincoln Stein
6fbb5ce780 add renaming capabilities to model update API route 2023-07-16 14:17:05 -04:00
psychedelicious
48a031dbaf fix(nodes): fix typing of configuration service 2023-07-16 10:52:18 +10:00
Lincoln Stein
373beefd13 remove restoration option from invokeai.yaml 2023-07-15 18:26:19 -04:00
Lincoln Stein
6b0a158ffa Merge branch 'main' into lstein/default-model-install 2023-07-15 18:23:34 -04:00
Lincoln Stein
c90345d6a3 deprecate the face restoration option 2023-07-15 18:23:32 -04:00
Lincoln Stein
9faffa2245 revert inadvertent breaking change to config causing test failures (override) 2023-07-15 18:15:59 -04:00
Lincoln Stein
6073cb8020 add documentation on the configuration system 2023-07-15 16:14:47 -04:00
psychedelicious
7b6159f8d6 feat(nodes): emit model loading events
- remove dependency on having access to a `node` during emits; it would need a bit of additional args passed through the system and I don't think it's necessary at this point. This also allowed us to drop an extraneous fetch/parse of the session from the db.
- provide the invocation context to all `get_model()` calls, so the events are able to be emitted
- test all model loading events in the app and confirm socket events are received
2023-07-16 02:12:01 +10:00
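
A minimal sketch of the pattern described in the commit above; the class and event names here are toy stand-ins, not the project's actual services. The idea is that the invocation context travels into `get_model()`, so load start/complete events can be emitted without handing the node itself through the call chain.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional


@dataclass
class EventService:
    """Toy stand-in for the app's socket event service."""
    emit: Callable[[str, dict], None]


@dataclass
class InvocationContext:
    """Passed into get_model() so loads can be reported per session."""
    events: EventService
    graph_execution_state_id: str


class ModelManagerService:
    def get_model(self, model_name: str, context: Optional[InvocationContext] = None) -> Any:
        # Emit a "started" event if we were given a context to emit through.
        if context is not None:
            context.events.emit("model_load_started", {
                "graph_execution_state_id": context.graph_execution_state_id,
                "model_name": model_name,
            })
        model = object()  # placeholder for the real model load
        if context is not None:
            context.events.emit("model_load_completed", {
                "graph_execution_state_id": context.graph_execution_state_id,
                "model_name": model_name,
            })
        return model


ctx = InvocationContext(EventService(lambda name, payload: print(name, payload)), "session-1")
ModelManagerService().get_model("stable-diffusion-1.5", context=ctx)
```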
psychedelicious
c7b547ea3e feat(nodes): remove references to restoration services
- remove restoration services
- remove the restore faces nodes
- update tests
2023-07-16 01:12:39 +10:00
Lincoln Stein
2faa7cee37 add rename_model route 2023-07-14 23:03:18 -04:00
Lincoln Stein
8600aad12b multiple enhancements to model manager REACT API
1. add a /sync route for synchronizing the in-memory model lists to
   models.yaml, the models directory, and the autoimport directories.

2. add optional destination_directories to convert_model and merge_model
   operations.

3. add /ckpt_confs route for retrieving known legacy checkpoint configuration
   files.

4. add /search route for finding all models in a directory located in the server
   filesystem
2023-07-14 13:45:16 -04:00
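
A hedged sketch of the route surface listed in the commit above, assuming FastAPI; the route names follow the commit text, but prefixes, parameter names, HTTP verbs, and return shapes are illustrative only.

```python
from typing import List, Optional

from fastapi import APIRouter, Body, Query
from pydantic import BaseModel

models_router = APIRouter(prefix="/api/v1/models")


class MergeRequest(BaseModel):
    model_names: List[str]
    merged_model_name: str
    merge_dest_directory: Optional[str] = None  # optional destination, per item 2


@models_router.post("/sync")
async def sync_to_config() -> bool:
    """Re-scan models.yaml, the models directory and the autoimport directories."""
    return True  # placeholder


@models_router.get("/ckpt_confs")
async def list_ckpt_configs() -> List[str]:
    """Return the known legacy checkpoint configuration files."""
    return ["v1-inference.yaml", "v2-inference-v.yaml"]  # example values


@models_router.get("/search")
async def search_for_models(search_path: str = Query(...)) -> List[str]:
    """Find model files under a directory on the server filesystem."""
    return []  # placeholder


@models_router.put("/convert/{model_name}")
async def convert_model(
    model_name: str,
    convert_dest_directory: Optional[str] = Query(default=None),
) -> dict:
    """Convert a checkpoint model, optionally writing into a destination directory."""
    return {"model_name": model_name, "destination": convert_dest_directory}


@models_router.put("/merge")
async def merge_models(req: MergeRequest = Body(...)) -> dict:
    """Merge the named models, optionally writing into a destination directory."""
    return {"merged_model_name": req.merged_model_name,
            "destination": req.merge_dest_directory}
```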
Lincoln Stein
ad076b1174 add model directory search route 2023-07-14 11:14:33 -04:00
psychedelicious
5f5c93abb4 feat(app): embed PNG info in invokeai_metadata and invokeai_graph
Using just `metadata` and `graph` feels a bit too generic.
2023-07-13 15:40:05 +10:00
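
A minimal sketch, assuming Pillow, of writing the two PNG text chunks named above; the metadata and graph payloads are placeholders.

```python
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (64, 64))
pnginfo = PngInfo()
# Namespaced keys instead of plain "metadata" / "graph".
pnginfo.add_text("invokeai_metadata", json.dumps({"positive_prompt": "a cat"}))
pnginfo.add_text("invokeai_graph", json.dumps({"nodes": {}, "edges": []}))
image.save("example.png", pnginfo=pnginfo)

# Reading the embedded info back:
print(Image.open("example.png").text["invokeai_metadata"])
```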
psychedelicious
50bef87da7 feat(db,nodes,api): refactor metadata
Metadata for the Linear UI is now sneakily provided via a `MetadataAccumulator` node, which the client populates / hooks up while building the graph.

Additionally, we provide the unexpanded graph with the metadata API response.

Both of these are embedded into the PNGs.

- Remove `metadata` from `ImageDTO`
- Split up the `images/` routes to accommodate this; metadata is only retrieved per-image
- `images/{image_name}` now gets the DTO
- `images/{image_name}/metadata` gets the new metadata
- `images/{image_name}/full` gets the full-sized image file
- Remove old metadata service
- Add `MetadataAccumulator` node, `CoreMetadataField`, hook up to `LatentsToImage` node
- Add `get_raw()` method to `ItemStorage`, retrieves the row from DB as a string, no pydantic parsing
- Update `images`-related services to handle storing and retrieving the new metadata
- Add `get_metadata_graph_from_raw_session` which extracts the `graph` from `session` without needing to hydrate the session in pydantic, in preparation for providing it as metadata; also removes all references to the `MetadataAccumulator` node
2023-07-13 15:40:05 +10:00
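
A hedged sketch of the raw-session handling described above: the session row is treated as a plain JSON string, the execution graph is pulled out without pydantic hydration, and references to the `MetadataAccumulator` node are stripped. The field and type names are assumptions based on the commit text, not the exact schema.

```python
import json
from typing import Optional


def get_metadata_graph_from_raw_session(session_raw: str) -> Optional[dict]:
    """Extract the graph from a raw session JSON string, dropping accumulator nodes."""
    session = json.loads(session_raw)
    graph = session.get("graph")
    if not isinstance(graph, dict):
        return None

    nodes = graph.get("nodes", {})
    accumulators = {k for k, v in nodes.items() if v.get("type") == "metadata_accumulator"}

    graph["nodes"] = {k: v for k, v in nodes.items() if k not in accumulators}
    graph["edges"] = [
        e for e in graph.get("edges", [])
        if e.get("source", {}).get("node_id") not in accumulators
        and e.get("destination", {}).get("node_id") not in accumulators
    ]
    return graph


# Toy raw row, as might be returned by an (assumed) ItemStorage.get_raw():
raw = json.dumps({
    "graph": {
        "nodes": {
            "noise": {"type": "noise"},
            "meta": {"type": "metadata_accumulator"},
        },
        "edges": [
            {"source": {"node_id": "meta"}, "destination": {"node_id": "noise"}},
        ],
    }
})
print(get_metadata_graph_from_raw_session(raw))
```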
Lincoln Stein
25591788c1 fix conflicts 2023-07-11 15:55:10 -04:00
Lincoln Stein
dab03fb646 rename gpu_mem_reserved to max_vram_cache_size
To be consistent with max_cache_size, the amount of memory to hold in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
2023-07-11 15:25:39 -04:00
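
An illustrative sketch of the renamed setting, assuming pydantic v1-style `BaseSettings` (v2 moved it to `pydantic-settings`); the defaults are examples only, not the project's real values.

```python
from pydantic import BaseSettings, Field


class CacheConfig(BaseSettings):
    """Toy stand-in for the app config: RAM and VRAM caches as parallel *_cache_size fields."""
    max_cache_size: float = Field(default=6.0, description="RAM model cache size (GB)")
    max_vram_cache_size: float = Field(default=2.75, description="VRAM model cache size (GB)")


# Values could also be supplied via environment variables or an invokeai.yaml-style source.
print(CacheConfig())
```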
Lincoln Stein
d32f9f7cb0 reverse logic of gpu_mem_reserved
- gpu_mem_reserved now indicates the amount of VRAM that will be reserved
  for model caching (similar to max_cache_size).
2023-07-11 15:16:40 -04:00
Lincoln Stein
f6302aa691 Merge branch 'main' into release/invokeai-3-0-beta 2023-07-11 10:57:36 -04:00
Lincoln Stein
83ec4c983c
Merge branch 'main' into lstein/keep-models-in-vram 2023-07-10 18:47:05 -04:00
Lincoln Stein
c9c61ee459
Update invokeai/app/services/config.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-07-10 18:46:32 -04:00
psychedelicious
00e26ffa9a fix(nodes): remove board_id column from images table
This is extraneous; the `board_images` table holds image-board relationships.
2023-07-10 11:30:35 +10:00
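
A sketch of the schema intent described above, using sqlite3 with illustrative DDL: image-board membership lives only in the `board_images` join table, so the `images` table needs no `board_id` column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE images (
        image_name TEXT PRIMARY KEY
    );
    CREATE TABLE boards (
        board_id TEXT PRIMARY KEY
    );
    -- The join table is the single source of truth for image-board relationships.
    CREATE TABLE board_images (
        board_id TEXT NOT NULL REFERENCES boards(board_id),
        image_name TEXT NOT NULL REFERENCES images(image_name),
        PRIMARY KEY (board_id, image_name)
    );
    """
)
conn.execute("INSERT INTO images VALUES ('img-1.png')")
conn.execute("INSERT INTO boards VALUES ('board-a')")
conn.execute("INSERT INTO board_images VALUES ('board-a', 'img-1.png')")
print(conn.execute(
    "SELECT image_name FROM board_images WHERE board_id = ?", ("board-a",)
).fetchall())
```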
Lincoln Stein
5759a390f9 introduce gpu_mem_reserved configuration parameter 2023-07-09 18:35:04 -04:00
Lincoln Stein
b6cc4df1d8 report processing stack traces to the console 2023-07-08 19:48:32 -04:00
Lincoln Stein
5f7435955e if models.yaml doesn't exist, rebuild it 2023-07-08 15:13:51 -04:00
Lincoln Stein
92d4486214 don't write 'version:' to the invokeai.yaml file 2023-07-08 12:06:23 -04:00
Lincoln Stein
f6ecee926f version of _find_root() that works in conda environment 2023-07-08 09:02:17 -04:00
Lincoln Stein
454c2c0952 version of _find_root() that works in conda environment 2023-07-08 09:01:05 -04:00
Lincoln Stein
8e8f9cce0f print version when --version provided at command line 2023-07-07 20:47:29 -04:00
Lincoln Stein
9f58ed35cf improve user migration experience
- No longer fail root directory probing if invokeai.yaml is missing
  (test is now whether a `models/core` directory exists).
- Migrate script does not overwrite previously-installed models.
- Can run migrate script on an existing 2.3 version directory
  with --from and --to pointing to same 2.3 root.
2023-07-07 08:18:46 -04:00
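
A hedged sketch of the probing rule described above: a directory counts as an InvokeAI root if `models/core` exists, even when `invokeai.yaml` is absent, and previously-installed models are skipped rather than overwritten. The helper names and paths are illustrative, not the migrate script's actual API.

```python
from pathlib import Path


def looks_like_invokeai_root(candidate: Path) -> bool:
    """Probe on models/core rather than requiring invokeai.yaml to be present."""
    return (candidate / "models" / "core").is_dir()


def migrate_model(src: Path, dest: Path) -> None:
    """Placeholder copy step: never overwrite a model that is already installed."""
    if dest.exists():
        print(f"skipping {dest.name}: already installed")
        return
    print(f"would install {src} -> {dest}")


root = Path("/opt/invokeai")  # hypothetical 2.3 root; --from and --to may point at the same dir
print(looks_like_invokeai_root(root))
```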
Lincoln Stein
54f3686e3b merge with main, fix conflicts 2023-07-06 15:21:45 -04:00
Lincoln Stein
e9352227f3 add merge api 2023-07-06 15:12:34 -04:00
Lincoln Stein
90c66aab3d merge with upstream 2023-07-06 13:17:02 -04:00
Lincoln Stein
3e925fbf34 model merging API ready for testing 2023-07-06 13:15:15 -04:00
Lincoln Stein
8f5fcb188c
Merge branch 'main' into lstein/model-manager-router-api 2023-07-05 23:16:43 -04:00
Lincoln Stein
f7daa6e71d all methods now return OPENAPI_MODEL_CONFIGS; convert uses PUT 2023-07-05 23:13:01 -04:00
Lincoln Stein
5027d0a603 accept @psychedelicious suggestions above 2023-07-05 14:50:57 -04:00
Lincoln Stein
9edf78dd2e merge with main 2023-07-05 09:12:54 -04:00
Lincoln Stein
6112197edf convert implemented; need router 2023-07-05 09:05:05 -04:00
psychedelicious
5d4d0e795c fix(mm): fix up mm service types 2023-07-05 20:07:10 +10:00
Eugene Brodsky
7170e82f73 expose max_cache_size in config 2023-07-05 02:44:15 -04:00
Lincoln Stein
5d099f4a49 update_model working 2023-07-04 17:26:57 -04:00
Lincoln Stein
96bf92ead4 add the import model router 2023-07-04 14:35:47 +10:00
Lincoln Stein
d6de11bd56 resolve merge conflict 2023-07-03 12:19:11 -04:00
Lincoln Stein
ed86d0b708 Union[foo, None]=>Optional[foo] 2023-07-03 12:17:45 -04:00
Lincoln Stein
877b187a1b Merge branch 'lstein/restore-3.9-compatibility' into release/invokeai-3-0-alpha 2023-07-03 11:01:34 -04:00
Lincoln Stein
ac9ec4e75a restore 3.9 compatibility by replacing | with Union[] 2023-07-03 10:57:40 -04:00
Lincoln Stein
2465c7987b Revert "restore 3.9 compatibility by replacing | with Union[]"
This reverts commit 76bafeb99e.
2023-07-03 10:56:41 -04:00
Lincoln Stein
76bafeb99e restore 3.9 compatibility by replacing | with Union[] 2023-07-03 10:55:04 -04:00
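
A small illustration of the 3.9-compatibility change referenced in the commits above: evaluating PEP 604 unions such as `str | None` in annotations at runtime raises a `TypeError` on Python 3.9, so they are rewritten with `typing.Union` / `typing.Optional`. The function below is a made-up example, not project code.

```python
from typing import Optional, Union


# Python 3.10+ only:
# def load_model(name: str, submodel: str | None = None) -> dict | list: ...

# Python 3.9-compatible equivalent:
def load_model(name: str, submodel: Optional[str] = None) -> Union[dict, list]:
    return {"name": name, "submodel": submodel}


print(load_model("stable-diffusion-1.5"))
```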